[ovirt-users] Re: Ovirt 4.3.1 problem with HA agent

2019-03-15 Thread Strahil Nikolov
 Ok,
I have managed to recover again and no issues are detected this time. I guess
this case is quite rare and nobody has experienced that.
Best Regards,
Strahil Nikolov

On Wednesday, March 13, 2019, 13:03:38 GMT+2, Strahil Nikolov wrote:
 
  Dear Simone,
it seems that there is some kind of problem, as the OVF got updated with the wrong
configuration:

[root@ovirt2 ~]# ls -l
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/{441abdc8-6cb1-49a4-903f-a1ec0ed88429,94ade632-6ecc-4901-8cec-8e39f3d69cb0}
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/441abdc8-6cb1-49a4-903f-a1ec0ed88429:
total 66591
-rw-rw. 1 vdsm kvm   30720 Mar 12 08:06 c3309fc0-8707-4de1-903d-8d4bbb024f81
-rw-rw. 1 vdsm kvm 1048576 Jan 31 13:24 
c3309fc0-8707-4de1-903d-8d4bbb024f81.lease
-rw-r--r--. 1 vdsm kvm 435 Mar 12 08:06 
c3309fc0-8707-4de1-903d-8d4bbb024f81.meta

/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0:
total 66591
-rw-rw. 1 vdsm kvm   30720 Mar 13 11:07 9460fc4b-54f3-48e3-b7b6-da962321ecf4
-rw-rw. 1 vdsm kvm 1048576 Jan 31 13:24 
9460fc4b-54f3-48e3-b7b6-da962321ecf4.lease
-rw-r--r--. 1 vdsm kvm 435 Mar 13 11:07 
9460fc4b-54f3-48e3-b7b6-da962321ecf4.meta

Starting the hosted-engine fails with:
2019-03-13 12:48:21,237+0200 ERROR (vm/8474ae07) [virt.vm] 
(vmId='8474ae07-f172-4a20-b516-375c73903df7') The vm start process failed 
(vm:937)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 866, in 
_startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2852, in _run
    dom = self._connection.defineXML(self._domain.xml)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", 
line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in 
wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3743, in defineXML
    if ret is None:raise libvirtError('virDomainDefineXML() failed', conn=self)
libvirtError: XML error: No PCI buses available
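
(For reference, a way to check what actually ended up in the OVF_STORE - the
OVF_STORE volume is a plain tar archive containing one <vm-id>.ovf entry per VM.
The volume path is the one from the listing above; the .ovf file name is left as a
placeholder:)

  cd /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/441abdc8-6cb1-49a4-903f-a1ec0ed88429
  tar -tvf c3309fc0-8707-4de1-903d-8d4bbb024f81                      # list the OVFs stored inside
  tar -xOf c3309fc0-8707-4de1-903d-8d4bbb024f81 <vm-id>.ovf | less   # dump one OVF and inspect it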

Best Regards,
Strahil Nikolov


On Tuesday, March 12, 2019, 14:14:26 GMT+2, Strahil Nikolov wrote:
 
  Dear Simone,
it should be 60 min, but I have checked several hours after that and it didn't
update it.
[root@engine ~]# engine-config -g OvfUpdateIntervalInMinutes
OvfUpdateIntervalInMinutes: 60 version: general

How can I make a backup of the VM config, since, as you have noticed, the local copy
in /var/run/ovirt-hosted-engine-ha/vm.conf won't work?
I will keep the HostedEngine's XML, so I can redefine it if needed.
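
(A minimal sketch of that kind of backup, assuming virsh access on the host
currently running the engine VM - the read-only connection is enough for a dump,
and the file names are just examples:)

  virsh -r dumpxml HostedEngine > /root/HostedEngine-backup.xml
  cp /var/run/ovirt-hosted-engine-ha/vm.conf /root/vm.conf.bak    # local copy, may be stale
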
Best Regards,
Strahil Nikolov
  
  
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZRPIBZKOD533HODP6VER726XWGQEZXM7/


[ovirt-users] Re: Ovirt 4.3.1 problem with HA agent

2019-03-15 Thread Simone Tiraboschi
On Fri, Mar 15, 2019 at 8:12 AM Strahil Nikolov 
wrote:

> Ok,
>
> I have managed to recover again and no issues are detected this time.
> I guess this case is quite rare and nobody has experienced that.
>

Hi,
can you please explain how you fixed it?


>
> Best Regards,
> Strahil Nikolov
>
> On Wednesday, March 13, 2019, 13:03:38 GMT+2, Strahil Nikolov <
> hunter86...@yahoo.com> wrote:
>
>
> Dear Simone,
>
> it seems that there is some kind of problem ,as the OVF got updated with
> wrong configuration:
> [root@ovirt2 ~]# ls -l
> /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/{441abdc8-6cb1-49a4-903f-a1ec0ed88429,94ade632-6ecc-4901-8cec-8e39f3d69cb0}
>
> /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/441abdc8-6cb1-49a4-903f-a1ec0ed88429:
> total 66591
> -rw-rw. 1 vdsm kvm   30720 Mar 12 08:06
> c3309fc0-8707-4de1-903d-8d4bbb024f81
> -rw-rw. 1 vdsm kvm 1048576 Jan 31 13:24
> c3309fc0-8707-4de1-903d-8d4bbb024f81.lease
> -rw-r--r--. 1 vdsm kvm 435 Mar 12 08:06
> c3309fc0-8707-4de1-903d-8d4bbb024f81.meta
>
>
> /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0:
> total 66591
> -rw-rw. 1 vdsm kvm   30720 Mar 13 11:07
> 9460fc4b-54f3-48e3-b7b6-da962321ecf4
> -rw-rw. 1 vdsm kvm 1048576 Jan 31 13:24
> 9460fc4b-54f3-48e3-b7b6-da962321ecf4.lease
> -rw-r--r--. 1 vdsm kvm 435 Mar 13 11:07
> 9460fc4b-54f3-48e3-b7b6-da962321ecf4.meta
>
> Starting the hosted-engine fails with:
>
> 2019-03-13 12:48:21,237+0200 ERROR (vm/8474ae07) [virt.vm]
> (vmId='8474ae07-f172-4a20-b516-375c73903df7') The vm start process failed
> (vm:937)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 866, in
> _startUnderlyingVm
> self._run()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2852, in
> _run
> dom = self._connection.defineXML(self._domain.xml)
>   File
> "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line
> 131, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line
> 94, in wrapper
> return func(inst, *args, **kwargs)
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3743, in
> defineXML
> if ret is None:raise libvirtError('virDomainDefineXML() failed',
> conn=self)
> libvirtError: XML error: No PCI buses available
>
> Best Regards,
> Strahil Nikolov
>
>
> On Tuesday, March 12, 2019, 14:14:26 GMT+2, Strahil Nikolov <
> hunter86...@yahoo.com> wrote:
>
>
> Dear Simone,
>
> it should be 60 min , but I have checked several hours after that and it
> didn't update it.
>
> [root@engine ~]# engine-config -g OvfUpdateIntervalInMinutes
> OvfUpdateIntervalInMinutes: 60 version: general
>
> How can i make a backup of the VM config , as you have noticed the local
> copy in /var/run/ovirt-hosted-engine-ha/vm.conf won't work ?
>
> I will keep the HostedEngine's xml - so I can redefine if needed.
>
> Best Regards,
> Strahil Nikolov
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NYOEDUYIIV3TYU6HWFHFNKHA45ZV2WFD/


[ovirt-users] hosted-engine --deploy fails on iSCSI while trying to connect to retrieved ipv6 address, even while forcing ipv4 with --4

2019-03-15 Thread Guillaume Pavese
I am trying to deploy hosted-engine 4.3.2-rc2 on iSCSI.
I entered an IPv4 portal address and the targets get discovered. However, they
are returned by the Synology hosts with both IPv4 and IPv6 addresses.
LUN discovery then fails while attempting to connect to the IPv6 address.
I tried hosted-engine --deploy --4 to force IPv4, but that fails too.
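
(For reference, the node records that the failed IPv6 logins leave behind can be
inspected and cleaned up on the host with iscsiadm - a sketch only, using the target
name from the discovery output below; whether deploy re-adds them on the next
discovery attempt is untested here:)

  iscsiadm -m node            # list the recorded target/portal pairs
  iscsiadm -m node -T iqn.2000-01.com.synology:SVC-STO-FR-301.Target-1.2dfed4a32a -o delete
                              # drops all portals recorded for that target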


  Please specify the storage you would like: iscsi

  Please specify the iSCSI portal IP address: 10.199.9.16
  Please specify the iSCSI portal port [3260]:
  Please specify the iSCSI discover user:
  Please specify the iSCSI discover password:
  Please specify the iSCSI portal login user:
  Please specify the iSCSI portal login password:
[ INFO  ] Discovering iSCSI targets
[ INFO  ] TASK [ovirt.hosted_engine_setup : Execute just a specific set of
steps]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Force facts gathering]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Obtain SSO token using
username/password credentials]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Prepare iSCSI parameters]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Fetch host facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : iSCSI discover with REST API]
[ INFO  ] ok: [localhost]
  The following targets have been found:
  [1] iqn.2000-01.com.synology:SVC-STO-FR-301.Target-1.2dfed4a32a
  TPGT: 1, portals:
  10.199.9.16:3260
  fe80::211:32ff:fe6d:6ddb:3260

  [2] iqn.2000-01.com.synology:SVC-STO-FR-301.Target-2.2dfed4a32a
  TPGT: 1, portals:
  10.199.9.16:3260
  fe80::211:32ff:fe6d:6ddb:3260


in Host's logs :

 mars 15 09:32:25 vs-inf-int-kvm-fr-304-210.hostics.fr iscsid[5983]:conn 0
login rejected: initiator error (02/00)
mars 15 09:32:25 vs-inf-int-kvm-fr-304-210.hostics.fr iscsid[5983]:
Connection1:0 to [target:
iqn.2000-01.com.synology:SVC-STO-FR-301.Target-2.2dfed4a32a, portal:
10.199.9.16,3260] through [iface: default] is shutdown.
mars 15 09:32:27 vs-inf-int-kvm-fr-304-210.hostics.fr iscsid[5983]: cannot
make a connection to fe80::211:32ff:fe6d:6ddb:3260 (-1,22)
...
mars 15 09:33:21 vs-inf-int-kvm-fr-304-210.hostics.fr iscsid[5983]: cannot
make a connection to fe80::211:32ff:fe6d:6ddb:3260 (-1,22)
mars 15 09:33:24 vs-inf-int-kvm-fr-304-210.hostics.fr iscsid[5983]: cannot
make a connection to fe80::211:32ff:fe6d:6ddb:3260 (-1,22)
mars 15 09:33:27 vs-inf-int-kvm-fr-304-210.hostics.fr vdsm[26174]: WARN
Worker blocked:  timeout=60, duration=60.00 at 0x7fcb904ef410> task#=10 at 0x7fcb905977d0>, traceback:
  File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap
    self.__bootstrap_inner()
  File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner
    self.run()
  File: "/usr/lib64/python2.7/threading.py", line 765, in run
    self.__target(*self.__args, **self.__kwargs)
  File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 195, in run
    ret = func(*args, **kwargs)
  File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run
    self._execute_task()
  File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task
    task()
  File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__
    self._callable()
  File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in __call__
    self._handler(self._ctx, self._req)
  File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in _serveRequest
    response = self._handle_request(req, ctx)
  File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request
    res = method(**params)
  File: "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod
    result = fn(*methodArgs)

[ovirt-users] Re: Migrating VMs from ovirt-4.1.1 to ovirt-4.2.8

2019-03-15 Thread Michal Skrivanek


> On 14 Mar 2019, at 18:24, Wood Peter  wrote:
> 
> Hi,
> 
> I need to migrate a few dozen VMs from ovirt-4.1.1 to ovirt-4.2.8.
> 
> I did a few following this procedure:
> VMs export -> Detach Export domain -> Attach Export to new ovirt -> Import VMs

You can mark multiple VMs like that, all of them if you want, like other
imports from the same dialog. What difference do you see?

> 
> Importing from VMware is one step only process. All I have to do is select 
> VMware for Source in the Import page and then import the VMs I need.
> 
> I feel like I'm missing something.
> 
> Is there a way to import VMs from another ovirt deployment similar to import 
> from VMware i.e. without going through Export storage domain?

OVA (which is rather a few-at-a-time process in the UI, but can easily be scripted
with the REST API), or the oVirt disaster recovery Ansible role.

Thanks,
michal

> 
> Thank you,
> -- Peter
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VEQHNAVZNQAPLSR5WP5W7M6EE5KENXML/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JEHX5RNAMRQPZ6HE4K5JAAXAHZLMEKKT/


[ovirt-users] oVirt 4.3.1 - Remove VM greyed out

2019-03-15 Thread Strahil Nikolov
Hi Community,
I have the following problem. A VM was created based on a template and after
poweroff/shutdown it cannot be removed - the button is greyed out.
Has anyone hit such an issue? Any hint where to look?
Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YMLA37UBITKQT5VZYFL3L6P4PXKB7UGE/


[ovirt-users] Re: Ovirt 4.3.1 problem with HA agent

2019-03-15 Thread Strahil Nikolov

On Fri, Mar 15, 2019 at 8:12 AM Strahil Nikolov  wrote:

 Ok,
I have managed to recover again and no issues are detected this time. I guess
this case is quite rare and nobody has experienced that.

> Hi,
> can you please explain how you fixed it?
I set global maintenance again, defined the HostedEngine from the old
XML (taken from an old vdsm log), defined the network and powered it off. I set the
OVF update period to 5 min, but it took several hours until the OVF_STORE volumes were
updated. Once this happened I restarted ovirt-ha-agent and ovirt-ha-broker on
both nodes. Then I powered off the HostedEngine and undefined it from ovirt1.

Then I set the maintenance to 'none' and the VM powered on on ovirt1.
In order to test a failure, I removed the global maintenance and powered off
the HostedEngine from itself (via ssh). It was brought back up on the other node.
In order to test a failure of ovirt2, I set ovirt1 in local maintenance, then
removed it (mode 'none'), shut down the VM again via ssh, and it started again
on ovirt1.
It seems to be working, as I have since shut down the Engine several times and
it managed to start without issues.

I'm not sure this is related, but I had detected that ovirt2 was out of sync on
the vdsm-ovirtmgmt network; that was fixed easily via the UI.
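
(For anyone hitting the same situation, the sequence above corresponds roughly to the
commands below - a sketch only: the XML file name is an example, engine-config runs
inside the engine VM, and virsh on an oVirt host needs the usual SASL credentials
for read-write operations:)

  hosted-engine --set-maintenance --mode=global
  virsh define /root/HostedEngine-from-vdsm-log.xml    # re-define the engine VM from the saved XML
  virsh start HostedEngine
  engine-config -s OvfUpdateIntervalInMinutes=5        # inside the engine VM; restart ovirt-engine afterwards
  systemctl restart ovirt-ha-agent ovirt-ha-broker     # on each HA host, once the OVF_STORE is updated
  hosted-engine --set-maintenance --mode=none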



Best Regards,
Strahil Nikolov
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3B7OQUA733ETUA66TB7HF5Y24BLSI4XO/


[ovirt-users] Re: oVirt 4.3.1 - Remove VM greyed out

2019-03-15 Thread Strahil Nikolov
 Please ignore this one - I'm just too stupid and I didn't realize that
Delete Protection was enabled.
Strahil
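
(For the record, Delete Protection can also be toggled outside the UI through the
REST API - a sketch with placeholder credentials and VM id:)

  curl -k -u admin@internal:PASSWORD -X PUT -H 'Content-Type: application/xml' \
       -d '<vm><delete_protected>false</delete_protected></vm>' \
       https://engine.example.com/ovirt-engine/api/vms/VM_UUID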

On Friday, March 15, 2019, 11:27:08 GMT+2, Strahil Nikolov wrote:
 
 Hi Community,
I have the following problem. A VM was created based on a template and after
poweroff/shutdown it cannot be removed - the button is greyed out.
Has anyone hit such an issue? Any hint where to look?
Best Regards,
Strahil Nikolov
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7V6YQQQAKXGUSKCRTF2KKQAYCTAPTYKT/


[ovirt-users] Re: ovn-provider-network

2019-03-15 Thread Miguel Duarte de Mora Barroso
On Thu, Mar 14, 2019 at 3:04 PM Staniforth, Paul
 wrote:
>
> Thanks Miguel,
>  if we configure it connect to a physical network and 
> select the Data Centre Network  I assume it will create the overlay network 
> on top of that logical network.

Let me clarify; the network on top of which it sets up the overlay is
defined when the host is added, and is *only* used for inter-host
communication. When within the same host, it simply uses the OVS
bridge.

What (I think) you mean uses the localnet feature of OVN, where the
packets leaving the OVS bridge are forwarded to the external logical
network you configure.

These 2 concepts are unrelated.


> Also is there any documentation about the ovn-provider-network architecture.
>
> Regards,
> Paul S.
> 
> From: Miguel Duarte de Mora Barroso 
> Sent: 14 March 2019 13:15
> To: Staniforth, Paul
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] ovn-provider-network
>
> On Wed, Mar 13, 2019 at 10:08 PM Staniforth, Paul
>  wrote:
> >
> > Hello,
> >
> >   we are using oVirt-4.2.8 and I have created a logical network 
> > using the ovn-network-provider, I haven't configured it to connect to a 
> > physical network.
> >
> >
> > I have 2 VMs running on 2 hosts which can connect to each other this 
> > logical network. The only connection between the hosts is over the 
> > ovirtmgmt network so presumably the traffic is using this?
>
> Yes, OVN sets up an overlay network on top of ovirtmgmt network.
>
> >
> >
> > Thanks,
> >
> >Paul S.
> >
> > To view the terms under which this email is distributed, please go to:-
> > http://leedsbeckett.ac.uk/disclaimer/email/
> >
> To view the terms under which this email is distributed, please go to:-
> http://leedsbeckett.ac.uk/disclaimer/email/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NPVZGHQHJQE2YIBD6ID5KM7RPJ36M55R/


[ovirt-users] Are people still experiencing issues with GlusterFS on 4.3x?

2019-03-15 Thread Jayme
I along with others had GlusterFS issues after 4.3 upgrades, the failed to
dispatch handler issue with bricks going down intermittently.  After some
time it seemed to have corrected itself (at least in my environment) and I
hadn't had any brick problems in a while.  I upgraded my three node HCI
cluster to 4.3.1 yesterday and again I'm running in to brick issues.  They
will all be up running fine then all of a sudden a brick will randomly drop
and I have to force start the volume to get it back up.

Have any of these Gluster issues been addressed in 4.3.2 or any other
releases/patches that may be available to help the problem at this time?

Thanks!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/746CU33TP223CFYS6BFUA2C4FIYZQMGU/


[ovirt-users] Re: Are people still experiencing issues with GlusterFS on 4.3x?

2019-03-15 Thread Strahil Nikolov
 
>I along with others had GlusterFS issues after 4.3 upgrades, the failed to 
>dispatch handler issue with bricks going down intermittently.  After some time 
>it seemed to have corrected itself (at least in my enviornment) and I >hadn't 
>had any brick problems in a while.  I upgraded my three node HCI cluster to 
>4.3.1 yesterday and again I'm running in to brick issues.  They will all be up 
>running fine then all of a sudden a brick will randomly drop >and I have to 
>force start the volume to get it back up. >
>Have any of these Gluster issues been addressed in 4.3.2 or any other 
>releases/patches that may be available to help the problem at this time?>
>Thanks!
Yep,
sometimes a brick dies (usually my ISO domain) and then I have to "gluster
volume start isos force". Sadly I have had several issues with 4.3.X - a problematic
OVF_STORE (0 bytes), issues with gluster, an out-of-sync network - so for me
4.3.0 & 4.3.1 are quite unstable.
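
(For reference, the kind of commands involved when a brick drops, with the volume
name "isos" as in this report:)

  gluster volume status isos          # shows which brick process is offline
  gluster volume heal isos info       # pending heals once the brick is back
  gluster volume start isos force     # restarts the dead brick process
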
Is there a convention indicating stability? Does 4.3.xxx mean unstable, while
4.2.yyy means stable?
Best Regards,
Strahil Nikolov
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ACQE2DCN2LP3RPIPZNXYSLCBXZ4VOPX2/


[ovirt-users] Re: Are people still experiencing issues with GlusterFS on 4.3x?

2019-03-15 Thread Sandro Bonazzola
On Fri, Mar 15, 2019 at 1:38 PM Jayme wrote:

> I along with others had GlusterFS issues after 4.3 upgrades, the failed to
> dispatch handler issue with bricks going down intermittently.  After some
> time it seemed to have corrected itself (at least in my enviornment) and I
> hadn't had any brick problems in a while.  I upgraded my three node HCI
> cluster to 4.3.1 yesterday and again I'm running in to brick issues.  They
> will all be up running fine then all of a sudden a brick will randomly drop
> and I have to force start the volume to get it back up.
>

Just to clarify, you already were on oVirt 4.3.0 + Glusterfs 5.3-1 and
upgraded to oVirt 4.3.1 + Glusterfs 5.3-2, right?




>
> Have any of these Gluster issues been addressed in 4.3.2 or any other
> releases/patches that may be available to help the problem at this time?
>
> Thanks!
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/746CU33TP223CFYS6BFUA2C4FIYZQMGU/
>


-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KM23NHZ7VUUVCTXBFETFB4KDDOHJB6FF/


[ovirt-users] Re: Are people still experiencing issues with GlusterFS on 4.3x?

2019-03-15 Thread Sandro Bonazzola
On Fri, Mar 15, 2019 at 1:46 PM Strahil Nikolov <
hunter86...@yahoo.com> wrote:

>
> >I along with others had GlusterFS issues after 4.3 upgrades, the failed
> to dispatch handler issue with bricks going down intermittently.  After
> some time it seemed to have corrected itself (at least in my enviornment)
> and I >hadn't had any brick problems in a while.  I upgraded my three node
> HCI cluster to 4.3.1 yesterday and again I'm running in to brick issues.
> They will all be up running fine then all of a sudden a brick will randomly
> drop >and I have to force start the volume to get it back up.
> >
> >Have any of these Gluster issues been addressed in 4.3.2 or any other
> releases/patches that may be available to help the problem at this time?
> >
> >Thanks!
>
> Yep,
>
> sometimes a brick dies (usually my ISO domain ) and then I have to
> "gluster volume start isos force".
> Sadly I had several issues with 4.3.X - problematic OVF_STORE (0 bytes),
> issues with gluster , out-of-sync network - so for me 4.3.0 & 4.3.0 are
> quite unstable.
>
> Is there a convention indicating stability ? Is 4.3.xxx means unstable ,
> while 4.2.yyy means stable ?
>

No, there's no such convention. 4.3 is supposed to be stable and production
ready.
The fact that it isn't stable enough for all the cases means it has not been
tested for those cases.
In the oVirt 4.3.1 RC cycle testing (
https://trello.com/b/5ZNJgPC3/ovirt-431-test-day-1 ) we got participation
from only 6 people, and not even all the tests were completed.
Helping with testing during the release candidate phase leads to more stable
final releases.
oVirt 4.3.2 is at its second release candidate; if you have time and
resources, it would be helpful to test it on an environment similar
to your production environment and give feedback / report bugs.

Thanks



>
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ACQE2DCN2LP3RPIPZNXYSLCBXZ4VOPX2/
>


-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UPPMAKYNGWB6F4GPZTHOY4QC6GGO66CX/


[ovirt-users] Re: Are people still experiencing issues with GlusterFS on 4.3x?

2019-03-15 Thread Sandro Bonazzola
On Fri, Mar 15, 2019 at 2:00 PM Simon Coter wrote:

> Hi,
>
> something that I’m seeing in the vdsm.log, that I think is gluster related
> is the following message:
>
> 2019-03-15 05:58:28,980-0700 INFO  (jsonrpc/6) [root] managedvolume not
> supported: Managed Volume Not Supported. Missing package os-brick.:
> ('Cannot import os_brick',) (caps:148)
>
> os_brick seems something available by openstack channels but I didn’t
> verify.
>

Fred, I see you introduced the above error in vdsm
commit 9646c6dc1b875338b170df2cfa4f41c0db8a6525 back in November 2018.
I guess you are referring to python-os-brick.
Looks like it's related to the cinderlib integration.
I would suggest to:
- fix the error message to point to python-os-brick
- add a python-os-brick dependency in the spec file if the dependency is not
optional
- if the dependency is optional, as it seems to be, adjust the error message
to say so. I feel nervous seeing errors about missing packages :-)
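
(If someone just wants to make the warning go away on a host, the missing module
comes from the OpenStack client packages - the exact package name depends on the
enabled repos, something like:)

  yum install python2-os-brick        # from the RDO/OpenStack repositories
  python -c 'import os_brick'         # verify the import that vdsm attempts now works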


>
> Simon
>
> On Mar 15, 2019, at 1:54 PM, Sandro Bonazzola  wrote:
>
>
>
> On Fri, Mar 15, 2019 at 1:46 PM Strahil Nikolov <
> hunter86...@yahoo.com> wrote:
>
>>
>> >I along with others had GlusterFS issues after 4.3 upgrades, the failed
>> to dispatch handler issue with bricks going down intermittently.  After
>> some time it seemed to have corrected itself (at least in my enviornment)
>> and I >hadn't had any brick problems in a while.  I upgraded my three node
>> HCI cluster to 4.3.1 yesterday and again I'm running in to brick issues.
>> They will all be up running fine then all of a sudden a brick will randomly
>> drop >and I have to force start the volume to get it back up.
>> >
>> >Have any of these Gluster issues been addressed in 4.3.2 or any other
>> releases/patches that may be available to help the problem at this time?
>> >
>> >Thanks!
>>
>> Yep,
>>
>> sometimes a brick dies (usually my ISO domain ) and then I have to
>> "gluster volume start isos force".
>> Sadly I had several issues with 4.3.X - problematic OVF_STORE (0 bytes),
>> issues with gluster , out-of-sync network - so for me 4.3.0 & 4.3.0 are
>> quite unstable.
>>
>> Is there a convention indicating stability ? Is 4.3.xxx means unstable ,
>> while 4.2.yyy means stable ?
>>
>
> No, there's no such convention. 4.3 is supposed to be stable and
> production ready.
> The fact it isn't stable enough for all the cases means it has not been
> tested for those cases.
> In oVirt 4.3.1 RC cycle testing (
> https://trello.com/b/5ZNJgPC3/ovirt-431-test-day-1 ) we got participation
> of only 6 people and not even all the tests have been completed.
> Help testing during release candidate phase helps having more stable final
> releases.
> oVirt 4.3.2 is at its second release candidate, if you have time and
> resource, it would be helpful testing it on an environment which is similar
> to your production environment and give feedback / report bugs.
>
> Thanks
>
>
>
>>
>> Best Regards,
>> Strahil Nikolov
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ACQE2DCN2LP3RPIPZNXYSLCBXZ4VOPX2/
>>
>
>
> --
> SANDRO BONAZZOLA
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UPPMAKYNGWB6F4GPZTHOY4QC6GGO66CX/
>
>
>

-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WKBPVI4FY2L5KR2L5VHMZYJBP3H5LUPI/


[ovirt-users] Re: Are people still experiencing issues with GlusterFS on 4.3x?

2019-03-15 Thread Jayme
Yes, that is correct.  I don't know if the upgrade to 4.3.1 itself caused
issues, or if rebooting all hosts again to apply the node updates is somehow
what started causing brick issues for me again. I started having similar
brick issues after upgrading to 4.3 originally; those seemed to have
stabilized. Prior to 4.3 I never had a single GlusterFS issue or brick
offline on 4.2.

On Fri, Mar 15, 2019 at 9:48 AM Sandro Bonazzola 
wrote:

>
>
> On Fri, Mar 15, 2019 at 1:38 PM Jayme wrote:
>
>> I along with others had GlusterFS issues after 4.3 upgrades, the failed
>> to dispatch handler issue with bricks going down intermittently.  After
>> some time it seemed to have corrected itself (at least in my enviornment)
>> and I hadn't had any brick problems in a while.  I upgraded my three node
>> HCI cluster to 4.3.1 yesterday and again I'm running in to brick issues.
>> They will all be up running fine then all of a sudden a brick will randomly
>> drop and I have to force start the volume to get it back up.
>>
>
> Just to clarify, you already where on oVirt 4.3.0 + Glusterfs 5.3-1 and
> upgraded to oVirt 4.3.1 + Glusterfs 5.3-2 right?
>
>
>
>
>>
>> Have any of these Gluster issues been addressed in 4.3.2 or any other
>> releases/patches that may be available to help the problem at this time?
>>
>> Thanks!
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/746CU33TP223CFYS6BFUA2C4FIYZQMGU/
>>
>
>
> --
>
> SANDRO BONAZZOLA
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RXHP4R5OXAJQ3SOUEKXYGOKTU43LZV3M/


[ovirt-users] Re: Self Hosted Engine failed during setup using oVirt Node 4.3

2019-03-15 Thread Jagi Sarcilla
Is there a way to disable the PCID flag during Hosted Engine setup, either on the
command line or via Cockpit? The processor doesn't support the PCID flag.

When the setup tries to start the oVirt appliance, it won't start due to the
processor flag.

Any help is very much appreciated.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RMBDRUOIFHKEXWODLMBXFGZL7HJ4FYEI/


[ovirt-users] Re: Self Hosted Engine failed during setup using oVirt Node 4.3

2019-03-15 Thread Simone Tiraboschi
On Fri, Mar 15, 2019 at 2:45 PM Jagi Sarcilla <
jagi.sarci...@cevalogistics.com> wrote:

> Is there a way to specify to disable the PCID flag during Hosted Engine
> setup command line or via Cockpit, because the Processor don't support PCID
> flag
>
> When the setup is trying to start up the ovirt appliance it wont start due
> to processor flag
>
> any help very much appreciated
>

Atom C3000 family is currently not supported: they are wrongly
detected as Westmere
and so the engine assumes that PCID is there since it's there on all the
Westmeres.
Please follow the discussion on https://bugzilla.redhat.com/1688989


I don't see any viable workaround on that HW now.


> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RMBDRUOIFHKEXWODLMBXFGZL7HJ4FYEI/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6PSNENUQOUOTAUPMJ6L2NBCPZD7IXGMF/


[ovirt-users] Discard a snapshot

2019-03-15 Thread Mitja Mihelič

Hi!

We have run into a problem migrating a VM's disk. While doing a live 
disk migration it all went well until "Removing Snapshot Auto-generated 
for Live Storage Migration". The operation started and got stuck in the 
"Preparing to merge" stage. The task was visible as an async task for a 
couple of days, then disappeared. The snapshot named "Auto-generated for 
Live Storage Migration" still exists.

If we try to start the VM it produces the following error:
VM VM_NAME_HERE is down with error. Exit message: Bad volume 
specification {u'index': 0, u'domainID': 
u'47ee9bde-4f18-43c2-9383-0d30a27d1ae7', 'reqsize': '0', u'format': 
u'cow', u'bootOrder': u'1', u'address': {u'function': u'0x0', u'bus': 
u'0x00', u'domain': u'0x', u'type': u'pci', u'slot': u'0x05'}, 
u'volumeID': u'3b0184b7-5c5d-4669-8b3a-04880883118a', 'apparentsize': 
'19193135104', u'imageID': u'35599677-250a-4586-9db1-4df8f3291164', 
u'discard': False, u'specParams': {}, u'readonly': u'false', u'iface': 
u'virtio', u'optional': u'false', u'deviceId': 
u'35599677-250a-4586-9db1-4df8f3291164', 'truesize': '19193135104', 
u'poolID': u'35628205-9fa3-41de-b914-84f9adb633e4', u'device': u'disk', 
u'shared': u'false', u'propagateErrors': u'off', u'type': u'disk'}.


From what I understand the data access path is snapshot->"disk before 
snapshot". And since the snapshot is corrupted it cannot be accessed and 
the whole VM start process fails. I believe that the disk before the 
snapshot was made is still intact.


How can we discard the whole snapshot so that we are left only with the
state from before the snapshot was made?

We can afford to lose the data in the snapshot.
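
(For reference, once the VM is down a snapshot can also be removed through the REST
API - a sketch with placeholder credentials and IDs; whether the stuck merge lets it
proceed is a separate question:)

  curl -k -u admin@internal:PASSWORD -X DELETE \
       https://engine.example.com/ovirt-engine/api/vms/VM_UUID/snapshots/SNAPSHOT_UUID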

Best regards,
Mitja
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NVYCGZW4MICJJDECJT3VTH5DT2UHUH5X/


[ovirt-users] Re: Are people still experiencing issues with GlusterFS on 4.3x?

2019-03-15 Thread Ron Jerome
Just FYI, I have observed similar issues where a volume becomes unstable
for a period of time after the upgrade, but then seems to settle down after
a while.  I've only witnessed this in the 4.3.x versions.  I suspect it's
more of a Gluster issue than oVirt, but troubling none the less.

On Fri, 15 Mar 2019 at 09:37, Jayme  wrote:

> Yes that is correct.  I don't know if the upgrade to 4.3.1 itself caused
> issues or simply related somehow to rebooting all hosts again to apply node
> updates started causing brick issues for me again. I started having similar
> brick issues after upgrading to 4.3 originally that seemed to have
> stabilized, prior to 4.3 I never had a single glusterFS issue or brick
> offline on 4.2
>
> On Fri, Mar 15, 2019 at 9:48 AM Sandro Bonazzola 
> wrote:
>
>>
>>
>> On Fri, Mar 15, 2019 at 1:38 PM Jayme wrote:
>>
>>> I along with others had GlusterFS issues after 4.3 upgrades, the failed
>>> to dispatch handler issue with bricks going down intermittently.  After
>>> some time it seemed to have corrected itself (at least in my enviornment)
>>> and I hadn't had any brick problems in a while.  I upgraded my three node
>>> HCI cluster to 4.3.1 yesterday and again I'm running in to brick issues.
>>> They will all be up running fine then all of a sudden a brick will randomly
>>> drop and I have to force start the volume to get it back up.
>>>
>>
>> Just to clarify, you already where on oVirt 4.3.0 + Glusterfs 5.3-1 and
>> upgraded to oVirt 4.3.1 + Glusterfs 5.3-2 right?
>>
>>
>>
>>
>>>
>>> Have any of these Gluster issues been addressed in 4.3.2 or any other
>>> releases/patches that may be available to help the problem at this time?
>>>
>>> Thanks!
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/746CU33TP223CFYS6BFUA2C4FIYZQMGU/
>>>
>>
>>
>> --
>>
>> SANDRO BONAZZOLA
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA 
>>
>> sbona...@redhat.com
>> 
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RXHP4R5OXAJQ3SOUEKXYGOKTU43LZV3M/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6IJOSLHYWI6DRZC43VAULHTBSLM45K6Y/


[ovirt-users] Re: ovn-provider-network

2019-03-15 Thread Miguel Duarte de Mora Barroso
On Fri, Mar 15, 2019 at 3:49 PM Staniforth, Paul
 wrote:
>
> Thanks,
>   I can see now from "ovn-sbctl show" on the engine machine  that 
> 2 of our hosts haven't  deployed ovn
>
> ● ovn-controller.service - OVN controller daemon
>Loaded: loaded (/usr/lib/systemd/system/ovn-controller.service; disabled; 
> vendor preset: disabled)
>Active: inactive (dead)
> This was one of the things that was confusing me
>
> I'll see if I can deploy ovn without reinstalling, also is it possible to 
> change the deployment to use a different network rather than ovirtmgmt?

You could use vdsm-tool to reconfigure OVN on the hosts.

Please try this:

vdsm-tool ovn-config <OVN central/engine IP> ovirtmgmt

If you want to set up your overlay on top of a different network,
replace ovirtmgmt with that network name.

To ensure connectivity across the cluster, make sure all your hosts
are configured to run overlays on top *of the same network*.

Make sure that other network is good to go beforehand; I'd hate for you to
end up worse off than you are now, or lose faith in OVN :)
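
(And to verify afterwards, roughly - on the host:)

  systemctl status ovn-controller
  ovs-vsctl get Open_vSwitch . external_ids:ovn-remote external_ids:ovn-encap-ip

(and on the engine machine, "ovn-sbctl show" should now list the host as a chassis.)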







>
> Regards,
> Paul S.
> 
> From: Miguel Duarte de Mora Barroso 
> Sent: 15 March 2019 11:28
> To: Staniforth, Paul
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] ovn-provider-network
>
> On Thu, Mar 14, 2019 at 3:04 PM Staniforth, Paul
>  wrote:
> >
> > Thanks Miguel,
> >  if we configure it connect to a physical network 
> > and select the Data Centre Network  I assume it will create the overlay 
> > network on top of that logical network.
>
> Let me clarify; the network on top of which it sets up the overlay is
> defined when the host is added, and is *only* used for inter-host
> communication. When within the same host, it simply uses the OVS
> bridge.
>
> What (I think) you mean uses the localnet feature of OVN, where the
> packets leaving the OVS bridge are forwarded to the external logical
> network you configure.
>
> These 2 concepts are unrelated.
>
>
> > Also is there any documentation about the ovn-provider-network architecture.
> >
> > Regards,
> > Paul S.
> > 
> > From: Miguel Duarte de Mora Barroso 
> > Sent: 14 March 2019 13:15
> > To: Staniforth, Paul
> > Cc: users@ovirt.org
> > Subject: Re: [ovirt-users] ovn-provider-network
> >
> > On Wed, Mar 13, 2019 at 10:08 PM Staniforth, Paul
> >  wrote:
> > >
> > > Hello,
> > >
> > >   we are using oVirt-4.2.8 and I have created a logical 
> > > network using the ovn-network-provider, I haven't configured it to 
> > > connect to a physical network.
> > >
> > >
> > > I have 2 VMs running on 2 hosts which can connect to each other this 
> > > logical network. The only connection between the hosts is over the 
> > > ovirtmgmt network so presumably the traffic is using this?
> >
> > Yes, OVN sets up an overlay network on top of ovirtmgmt network.
> >
> > >
> > >
> > > Thanks,
> > >
> > >Paul S.
> > >
> > > To view the terms under which this email is distributed, please go to:-
> > > http://leedsbeckett.ac.uk/disclaimer/email/
> > >
> > To view the terms under which this email is distributed, please go to:-
> > http://leedsbeckett.ac.uk/disclaimer/email/
> To view the terms under which this email is distributed, please go to:-
> http://leedsbeckett.ac.uk/disclaimer/email/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://

[ovirt-users] Re: ovn-provider-network

2019-03-15 Thread Miguel Duarte de Mora Barroso
On Fri, Mar 15, 2019 at 5:15 PM Miguel Duarte de Mora Barroso
 wrote:
>
> On Fri, Mar 15, 2019 at 3:49 PM Staniforth, Paul
>  wrote:
> >
> > Thanks,
> >   I can see now from "ovn-sbctl show" on the engine machine  
> > that 2 of our hosts haven't  deployed ovn
> >
> > ● ovn-controller.service - OVN controller daemon
> >Loaded: loaded (/usr/lib/systemd/system/ovn-controller.service; 
> > disabled; vendor preset: disabled)
> >Active: inactive (dead)
> > This was one of the things that was confusing me
> >
> > I'll see if I can deploy ovn without reinstalling, also is it possible to 
> > change the deployment to use a different network rather than ovirtmgmt?
>
> You could use vdsm-tool to reconfigure OVN on the hosts.
>
> Please try this:
>
> vdsm-tool ovn-config  ovirtmgmt
>
> If you want to setup your overlay on top of a different network,
> replace ovirtmgmt with that network name.
>
> To ensure connectivity accross the cluster, make sure all your hosts
> are configured to run overlays on top *of the same network*.
>
> Make sure that other network is good to go before; I'd hate for you to
> end up worse than you are now, or lose faith on ovn :)

I think I wasn't adamant enough here; please try to get the OVN controller
configured using the ovirtmgmt network first.

If that works as expected, you can later on evaluate if setting the
overlay on top of a different network is beneficial.

>
>
>
>
>
>
>
> >
> > Regards,
> > Paul S.
> > 
> > From: Miguel Duarte de Mora Barroso 
> > Sent: 15 March 2019 11:28
> > To: Staniforth, Paul
> > Cc: users@ovirt.org
> > Subject: Re: [ovirt-users] ovn-provider-network
> >
> > On Thu, Mar 14, 2019 at 3:04 PM Staniforth, Paul
> >  wrote:
> > >
> > > Thanks Miguel,
> > >  if we configure it connect to a physical network 
> > > and select the Data Centre Network  I assume it will create the overlay 
> > > network on top of that logical network.
> >
> > Let me clarify; the network on top of which it sets up the overlay is
> > defined when the host is added, and is *only* used for inter-host
> > communication. When within the same host, it simply uses the OVS
> > bridge.
> >
> > What (I think) you mean uses the localnet feature of OVN, where the
> > packets leaving the OVS bridge are forwarded to the external logical
> > network you configure.
> >
> > These 2 concepts are unrelated.
> >
> >
> > > Also is there any documentation about the ovn-provider-network 
> > > architecture.
> > >
> > > Regards,
> > > Paul S.
> > > 
> > > From: Miguel Duarte de Mora Barroso 
> > > Sent: 14 March 2019 13:15
> > > To: Staniforth, Paul
> > > Cc: users@ovirt.org
> > > Subject: Re: [ovirt-users] ovn-provider-network
> > >
> > > On Wed, Mar 13, 2019 at 10:08 PM Staniforth, Paul
> > >  wrote:
> > > >
> > > > Hello,
> > > >
> > > >   we are using oVirt-4.2.8 and I have created a logical 
> > > > network using the ovn-network-provider, I haven't configured it to 
> > > > connect to a physical network.
> > > >
> > > >
> > > > I have 2 VMs running on 2 hosts which can connect to each other this 
> > > > logical network. The only connection between the hosts is over the 
> > > > ovirtmgmt network so presumably the traffic is using this?
> > >
> > > Yes, OVN sets up an overlay network on top of ovirtmgmt network.
> > >
> > > >
> > > >
> > > > Thanks,
> > > >
> > > >Paul S.
> > > >
> > > > To view the terms under which this email is distributed, please go to:-
> > > > http://leedsbeckett.ac.uk/disclaimer/email/
> > > >

[ovirt-users] Re: Are people still experiencing issues with GlusterFS on 4.3x?

2019-03-15 Thread Jayme
That is essentially the behaviour that I've seen.  I wonder if perhaps it
could be related to the increased heal activity that occurs on the volumes
during reboots of nodes after updating.

On Fri, Mar 15, 2019 at 12:43 PM Ron Jerome  wrote:

> Just FYI, I have observed similar issues where a volume becomes unstable
> for a period of time after the upgrade, but then seems to settle down after
> a while.  I've only witnessed this in the 4.3.x versions.  I suspect it's
> more of a Gluster issue than oVirt, but troubling none the less.
>
> On Fri, 15 Mar 2019 at 09:37, Jayme  wrote:
>
>> Yes that is correct.  I don't know if the upgrade to 4.3.1 itself caused
>> issues or simply related somehow to rebooting all hosts again to apply node
>> updates started causing brick issues for me again. I started having similar
>> brick issues after upgrading to 4.3 originally that seemed to have
>> stabilized, prior to 4.3 I never had a single glusterFS issue or brick
>> offline on 4.2
>>
>> On Fri, Mar 15, 2019 at 9:48 AM Sandro Bonazzola 
>> wrote:
>>
>>>
>>>
>>> On Fri, Mar 15, 2019 at 1:38 PM Jayme wrote:
>>>
 I along with others had GlusterFS issues after 4.3 upgrades, the failed
 to dispatch handler issue with bricks going down intermittently.  After
 some time it seemed to have corrected itself (at least in my enviornment)
 and I hadn't had any brick problems in a while.  I upgraded my three node
 HCI cluster to 4.3.1 yesterday and again I'm running in to brick issues.
 They will all be up running fine then all of a sudden a brick will randomly
 drop and I have to force start the volume to get it back up.

>>>
>>> Just to clarify, you already where on oVirt 4.3.0 + Glusterfs 5.3-1 and
>>> upgraded to oVirt 4.3.1 + Glusterfs 5.3-2 right?
>>>
>>>
>>>
>>>

 Have any of these Gluster issues been addressed in 4.3.2 or any other
 releases/patches that may be available to help the problem at this time?

 Thanks!
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives:
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/746CU33TP223CFYS6BFUA2C4FIYZQMGU/

>>>
>>>
>>> --
>>>
>>> SANDRO BONAZZOLA
>>>
>>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>>
>>> Red Hat EMEA 
>>>
>>> sbona...@redhat.com
>>> 
>>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RXHP4R5OXAJQ3SOUEKXYGOKTU43LZV3M/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VSZ3ROIE6NXIGWHG5KYVE33DBOFUWGJU/


[ovirt-users] Broken dependencies in CentOS7 hosts upgrading to 4.2.8 from 4.2.7

2019-03-15 Thread Roberto Nunin
Hi
I have some oVirt clusters, in various config.

One cluster based on CentOS7 hosts, another based on ovirt-node-ng.
While the second was successfully updated from 4.2.7 to 4.2.8, attempts to
update hosts of the first one end with:

Error: Package: vdsm-4.20.46-1.el7.x86_64 (ovirt-4.2)
   Requires: libvirt-daemon-kvm >= 4.5.0-10.el7_6.3
   Installed: libvirt-daemon-kvm-4.5.0-10.el7.x86_64 (@base)
   libvirt-daemon-kvm = 4.5.0-10.el7

Being a CentOS7 installation and not an ovirt-node-ng, I cannot follow the
notice in:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AJUBXAIGXVD5U5O2VYHO3BONVROSGWNW/

The only way to get libvirt-daemon-kvm release 4.5.0-10.el7_6.4 (and not 6.3)
is to enable the CentOS Updates repo. The host-deploy log looks fine,
but is it safe to enable that repo? Is there another, safer method to
update these hosts to the latest version of 4.2?
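
(A minimal way to resolve this without leaving the extra repo enabled permanently -
assuming the stock CentOS repo id "updates":)

  yum --enablerepo=updates update "libvirt*"    # pulls in libvirt-daemon-kvm >= 4.5.0-10.el7_6.3
  yum update "vdsm*"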

Thanks in advance

-- 
Roberto Nunin
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XX76B7KHT5TVX675UMLSDEHE4NEB4E4P/


[ovirt-users] Re: ovn-provider-network

2019-03-15 Thread Staniforth, Paul
I've now got the second host working; the directory /etc/openvswitch/ was owned
by root instead of openvswitch:openvswitch.
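
(For anyone hitting the same thing, the fix amounts to something like:)

  chown -R openvswitch:openvswitch /etc/openvswitch
  systemctl restart openvswitch ovn-controller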


Thanks,
   Paul S.
To view the terms under which this email is distributed, please go to:-
http://leedsbeckett.ac.uk/disclaimer/email/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CBAZGPI6OVYURV4AGGGIFMLH4J4GEUCC/


[ovirt-users] Re: Are people still experiencing issues with GlusterFS on 4.3x?

2019-03-15 Thread Simon Coter
Hi,

something that I’m seeing in the vdsm.log that I think is gluster-related is
the following message:

2019-03-15 05:58:28,980-0700 INFO  (jsonrpc/6) [root] managedvolume not 
supported: Managed Volume Not Supported. Missing package os-brick.: ('Cannot 
import os_brick',) (caps:148)

os_brick seems something available by openstack channels but I didn’t verify.

Simon

> On Mar 15, 2019, at 1:54 PM, Sandro Bonazzola  wrote:
> 
> 
> 
> On Fri, Mar 15, 2019 at 1:46 PM Strahil Nikolov <hunter86...@yahoo.com> wrote:
> 
> >I along with others had GlusterFS issues after 4.3 upgrades, the failed to 
> >dispatch handler issue with bricks going down intermittently.  After some 
> >time it seemed to have corrected itself (at least in my environment) and I 
> >hadn't had any brick problems in a while.  I upgraded my three-node HCI 
> >cluster to 4.3.1 yesterday and again I'm running into brick issues.  They 
> >will all be up running fine then all of a sudden a brick will randomly drop 
> >and I have to force start the volume to get it back up.
> >
> >Have any of these Gluster issues been addressed in 4.3.2 or any other 
> >releases/patches that may be available to help the problem at this time?
> >
> >Thanks!
> 
> Yep,
> 
> sometimes a brick dies (usually my ISO domain) and then I have to "gluster 
> volume start isos force".
> Sadly I had several issues with 4.3.x - a problematic OVF_STORE (0 bytes), 
> issues with gluster, an out-of-sync network - so for me 4.3.0 & 4.3.1 are quite 
> unstable.
> 
> Is there a convention indicating stability? Does 4.3.xxx mean unstable, 
> while 4.2.yyy means stable?
> 
> No, there's no such convention. 4.3 is supposed to be stable and production 
> ready.
> The fact that it isn't stable enough for all cases means it has not been 
> tested for those cases.
> In the oVirt 4.3.1 RC cycle testing 
> (https://trello.com/b/5ZNJgPC3/ovirt-431-test-day-1) we got participation 
> from only 6 people, and not even all the tests were completed.
> Helping to test during the release candidate phase leads to more stable final 
> releases.
> oVirt 4.3.2 is at its second release candidate; if you have time and 
> resources, it would be helpful to test it on an environment similar 
> to your production environment and give feedback / report bugs.
> 
> Thanks
> 
>  
> 
> Best Regards,
> Strahil Nikolov
> -- 
> SANDRO BONAZZOLA
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA
> sbona...@redhat.com
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YK2ZC5JIMGLGCMATSHCOXWR6BKJOH6YN/


[ovirt-users] Re: Self Hosted Engine failed during setup using oVirt Node 4.3

2019-03-15 Thread Jagi Sarcilla
Thank you for the information; I will monitor the BZ for updates.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VJDYTDAJSCMO6YLBZPPPAMSYIV4T7VB7/


[ovirt-users] Libgfapisupport messes disk image ownership

2019-03-15 Thread Hesham Ahmed
I had reported this here: https://bugzilla.redhat.com/show_bug.cgi?id=1687126

Has anyone else faced this with 4.3.1?
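
For anyone trying to confirm they are hitting the same thing, this is roughly
how I'd check (a sketch; paths are placeholders, and 36:36 being vdsm:kvm is
the usual oVirt default):

    ls -ln /rhev/data-center/mnt/glusterSD/<server>:_<volume>/<sd-uuid>/images/<image-uuid>/
    # if the files are owned by anything other than 36:36 (vdsm:kvm) the VM may
    # refuse to start; as a temporary workaround only (the real fix belongs in
    # the BZ above):
    chown 36:36 /rhev/data-center/mnt/glusterSD/<server>:_<volume>/<sd-uuid>/images/<image-uuid>/*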
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VBIASF6YXLOHVKHYRSEFGSPBKH52OSYX/


[ovirt-users] Re: ovn-provider-network

2019-03-15 Thread Staniforth, Paul
Thanks,
  I can see now from "ovn-sbctl show" on the engine machine that 2 
of our hosts haven't deployed OVN:

● ovn-controller.service - OVN controller daemon
   Loaded: loaded (/usr/lib/systemd/system/ovn-controller.service; disabled; 
vendor preset: disabled)
   Active: inactive (dead)
This was one of the things that was confusing me.

I'll see if I can deploy OVN without reinstalling. Also, is it possible to 
change the deployment to use a different network rather than ovirtmgmt?
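
In case it saves a reinstall, this is roughly what I plan to try on those two
hosts (a sketch; the vdsm-tool ovn-config arguments - OVN central/provider IP
first, then the local tunnel endpoint IP - are my assumption, so please
double-check them before running):

    systemctl enable --now openvswitch ovn-controller
    vdsm-tool ovn-config <ovn-central-or-engine-ip> <local-tunnel-ip>
    ovn-sbctl show    # on the engine: the host should now appear as a chassis

If the second argument really is the tunnel endpoint, pointing it at an IP on a
different network than ovirtmgmt is presumably also how the deployment could be
moved off ovirtmgmt (again, an assumption worth verifying).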

Regards,
Paul S.

From: Miguel Duarte de Mora Barroso 
Sent: 15 March 2019 11:28
To: Staniforth, Paul
Cc: users@ovirt.org
Subject: Re: [ovirt-users] ovn-provider-network

On Thu, Mar 14, 2019 at 3:04 PM Staniforth, Paul
 wrote:
>
> Thanks Miguel,
>  if we configure it connect to a physical network and 
> select the Data Centre Network  I assume it will create the overlay network 
> on top of that logical network.

Let me clarify: the network on top of which it sets up the overlay is
defined when the host is added, and it is *only* used for inter-host
communication. Traffic within the same host simply uses the OVS
bridge.

What (I think) you mean uses the localnet feature of OVN, where the
packets leaving the OVS bridge are forwarded to the external logical
network you configure.

These 2 concepts are unrelated.
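
If it helps to see the distinction on a live setup, roughly (a sketch, to be
run where the OVN central databases live, typically the engine machine):

    ovn-sbctl show    # southbound: one chassis per host, with the tunnel (overlay) endpoint IPs
    ovn-nbctl show    # northbound: the logical switches; a port of type localnet is what maps a switch to a physical network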


> Also, is there any documentation about the ovn-provider-network architecture?
>
> Regards,
> Paul S.
> 
> From: Miguel Duarte de Mora Barroso 
> Sent: 14 March 2019 13:15
> To: Staniforth, Paul
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] ovn-provider-network
>
> On Wed, Mar 13, 2019 at 10:08 PM Staniforth, Paul
>  wrote:
> >
> > Hello,
> >
> >   we are using oVirt-4.2.8 and I have created a logical network 
> > using the ovn-network-provider; I haven't configured it to connect to a 
> > physical network.
> >
> >
> > I have 2 VMs running on 2 hosts which can connect to each other over this 
> > logical network. The only connection between the hosts is over the 
> > ovirtmgmt network, so presumably the traffic is using this?
>
> Yes, OVN sets up an overlay network on top of ovirtmgmt network.
>
> >
> >
> > Thanks,
> >
> >Paul S.
> >
> > To view the terms under which this email is distributed, please go to:-
> > http://leedsbeckett.ac.uk/disclaimer/email/
> >
> To view the terms under which this email is distributed, please go to:-
> http://leedsbeckett.ac.uk/disclaimer/email/
To view the terms under which this email is distributed, please go to:-
http://leedsbeckett.ac.uk/disclaimer/email/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YN5ZSTSH766Z565BEX372OLZNF2IAVOE/


[ovirt-users] Re: Are people still experiencing issues with GlusterFS on 4.3x?

2019-03-15 Thread Darrell Budic
Upgrading gluster from version 3.12 or 4.1 (included in ovirt 4.2.x) to 5.3 (in 
ovirt 4.3) seems to cause this due to a bug in the gluster upgrade process. 
It's an unfortunate side effect for those of us upgrading ovirt hyper-converged 
systems. Installing new should be fine, but I'd wait for gluster to get 
https://bugzilla.redhat.com/show_bug.cgi?id=1684385 included in the version 
ovirt installs before installing a hyper-converged cluster.
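
For anyone bitten by the dropped bricks in the meantime, the usual checks look
roughly like this (a sketch, with the volume name as a placeholder):

    gluster volume status <volname>        # bricks showing N in the Online column are the dropped ones
    gluster volume start <volname> force   # restarts only the brick processes that are down
    gluster volume heal <volname> info     # confirm pending heals drain back to zero afterwards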

I just upgraded my 4.2.8 cluster to 4.3.1, leaving my separate gluster 3.12.15 
servers alone, and it worked fine. Except for a different bug screwing up HA 
engine permissions on launch, but it looks like that one is being fixed under a 
separate bug report.

Sandro, it's unfortunate I can't take more part in the testing days, but they 
haven't been happening at times when I can participate, and a single test day 
isn't really something I can join often. I sometimes try to keep up 
with the RCs on my test cluster, but major version changes wait until I get 
time to consider them, unfortunately. I'm also a little surprised that a major 
upstream issue like that bug hasn't caused you to issue more warnings; it's 
something that is going to affect everyone who is upgrading a converged system. 
Was there any discussion on why more news wasn't released about it?

  -Darrell


> On Mar 15, 2019, at 11:50 AM, Jayme  wrote:
> 
> That is essentially the behaviour that I've seen.  I wonder if perhaps it 
> could be related to the increased heal activity that occurs on the volumes 
> during reboots of nodes after updating.
> 
> On Fri, Mar 15, 2019 at 12:43 PM Ron Jerome  > wrote:
> Just FYI, I have observed similar issues where a volume becomes unstable for 
> a period of time after the upgrade, but then seems to settle down after a 
> while.  I've only witnessed this in the 4.3.x versions.  I suspect it's more 
> of a Gluster issue than oVirt, but troubling nonetheless.
> 
> On Fri, 15 Mar 2019 at 09:37, Jayme  > wrote:
> Yes, that is correct.  I don't know if the upgrade to 4.3.1 itself caused 
> the issues, or if it's simply related somehow to rebooting all hosts again to 
> apply node updates, which started causing brick issues for me again. I started 
> having similar brick issues after upgrading to 4.3 originally; those seemed to 
> have stabilized, and prior to 4.3 I never had a single GlusterFS issue or brick 
> offline on 4.2.
> 
> On Fri, Mar 15, 2019 at 9:48 AM Sandro Bonazzola  > wrote:
> 
> 
> On Fri, 15 Mar 2019 at 13:38, Jayme wrote:
> I along with others had GlusterFS issues after 4.3 upgrades, the failed to 
> dispatch handler issue with bricks going down intermittently.  After some 
> time it seemed to have corrected itself (at least in my environment) and I 
> hadn't had any brick problems in a while.  I upgraded my three-node HCI 
> cluster to 4.3.1 yesterday and again I'm running into brick issues.  They 
> will all be up running fine then all of a sudden a brick will randomly drop 
> and I have to force start the volume to get it back up.
> 
> Just to clarify, you were already on oVirt 4.3.0 + GlusterFS 5.3-1 and 
> upgraded to oVirt 4.3.1 + GlusterFS 5.3-2, right?
> 
> 
>  
> 
> Have any of these Gluster issues been addressed in 4.3.2 or any other 
> releases/patches that may be available to help the problem at this time?
> 
> Thanks!
> 
> 
> -- 
> SANDRO BONAZZOLA
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA
> sbona...@redhat.com

[ovirt-users] Re: Libgfapisupport messes disk image ownership

2019-03-15 Thread Darrell Budic
You may have this one instead. I just encountered it last night; it still seems 
to be an issue.

https://bugzilla.redhat.com/show_bug.cgi?id=1666795

> On Mar 15, 2019, at 4:25 PM, Hesham Ahmed  wrote:
> 
> I had reported this here: https://bugzilla.redhat.com/show_bug.cgi?id=1687126
> 
> Has anyone else faced this with 4.3.1?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VBIASF6YXLOHVKHYRSEFGSPBKH52OSYX/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KUNUCXOGU6GADJRQALBJPCVVDJ4UUKHF/


[ovirt-users] self-hosted ovirt-engine down

2019-03-15 Thread siovelrm
Hi, I have a big problem with oVirt. I use version 4.2.7 with a self-hosted 
engine. The problem is that when I try to start the ovirt-engine VM with the 
command hosted-engine --vm-start, the output shows:
"VM exists and is down, cleaning up and restarting"
When running hosted-engine --vm-status, the following appears:
--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : node1.softel.cu
Host ID: 1
Engine status  : {"reason": "bad vm status", "health": 
"bad", "vm": "down_unexpected", "detail": "Down"}
Score  : 0
stopped: False
Local maintenance  : False
crc32  : 02c3b5a4
local_conf_timestamp   : 49529
Host timestamp : 49529
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=49529 (Sat Mar 16 02:39:10 2019)
host-id=1
score=0
vm_conf_refresh_time=49529 (Sat Mar 16 02:39:11 2019)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Thu Jan  1 08:49:39 1970

In /var/log/messages:
Mar 16 02:35:34 node1 vdsm[26151]: WARN Attempting to remove a non existing 
network: ovirtmgmt/0c3e1c08-3928-47f1-96a8-c6a8d0dc3241
Mar 16 02:35:34 node1 vdsm[26151]: WARN Attempting to remove a non existing net 
user: ovirtmgmt/0c3e1c08-3928-47f1-96a8-c6a8d0dc3241
Mar 16 02:35:34 node1 vdsm[26151]: WARN Attempting to remove a non existing 
network: ovirtmgmt/0c3e1c08-3928-47f1-96a8-c6a8d0dc3241
Mar 16 02:35:34 node1 vdsm[26151]: WARN Attempting to remove a non existing net 
user: ovirtmgmt/0c3e1c08-3928-47f1-96a8-c6a8d0dc3241
Mar 16 02:35:34 node1 vdsm[26151]: WARN File: 
/var/lib/libvirt/qemu/channels/0c3e1c08-3928-47f1-96a8-c6a8d0dc3241.org.qemu.guest_agent.0
 already removed

Please help!!
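
In case it is relevant, this is the sequence I intend to try next (a sketch
only; I am not sure it is the right fix, and score 0 seems to mean the HA agent
has temporarily penalized this host):

    hosted-engine --vm-poweroff                        # clean up the stale, defined-but-down engine VM
    systemctl restart ovirt-ha-broker ovirt-ha-agent   # let the agent re-evaluate the host state
    hosted-engine --set-maintenance --mode=none        # make sure maintenance mode is not set
    hosted-engine --vm-status                          # wait for the score to recover (typically 3400), then
    hosted-engine --vm-start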
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OTXJFCVCAHMYZJ6SLP3NZ6F6TDJ5HIFR/