[ovirt-users] Re: Problems after 4.3.8 update

2019-12-14 Thread hunter86_bg
 I don't know. I had the same issues when I migrated my gluster from v6.5 to 
6.6 (currently running v7.0).
Just get the newest file and rsync it to the rest of the bricks. That will solve 
the '?? ??' problem (the question marks you see in the ls output).
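Roughly something like this, run from the node that holds the freshest copy (the paths and the 'badnode' hostname are only placeholders - use your own brick layout and keep a backup of whatever you overwrite):

# copy the good OVF_STORE files over the stale copy on the other brick(s)
rsync -avP /gluster_bricks/data/data/<sd-uuid>/images/<ovf-img-uuid>/ \
    badnode:/gluster_bricks/data/data/<sd-uuid>/images/<ovf-img-uuid>/
# then ask gluster to re-check the volume
gluster volume heal data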

Best Regards,
Strahil Nikolov
 On Sunday, 15 December 2019, 3:49:27 GMT+2, Jayme wrote:
 
 On that page it says to check open bugs, and the migration bug you mention does 
not appear to be on the list. Has it been resolved, or is it just missing from 
this page?
On Sat, Dec 14, 2019 at 7:53 PM Strahil Nikolov  wrote:

 Nah... this is not gonna fix your issue and is unnecessary. Just compare the 
data from all bricks ... most probably the 'Last Updated' is different and the 
gfid of the file is different. Find the brick that has the freshest data, and 
replace (move away as a backup and rsync) the file from the last good copy to the 
other bricks. You can also run a 'full heal'.
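To compare the copies, something along these lines on each node (the brick path is only an example - use the path shown by 'gluster volume heal <volname> info'):

getfattr -d -m . -e hex /gluster_bricks/engine/engine/<path-to-file>   # compare trusted.gfid and the trusted.afr.* xattrs across bricks
stat /gluster_bricks/engine/engine/<path-to-file>                      # compare size and mtime to find the freshest copy
gluster volume heal engine full                                        # once the stale copies have been replaced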
Best Regards,
Strahil Nikolov
On Saturday, 14 December 2019, 21:18:44 GMT+2, Jayme wrote:
 
 *Update* 
Situation has improved.  All VMs and engine are running.  I'm left right now 
with about 2 heal entries in each glusterfs storage volume that will not heal. 
In all cases each heal entry is related to an OVF_STORE image and the problem 
appears to be an issue with the gluster metadata for those ovf_store images.  
When I look at the files shown in gluster volume heal info output I'm seeing 
question marks on the meta files which indicates an attribute/gluster problem 
(even though there is no split-brain).  And I get input/output error when 
attempting to do anything with the files.
If I look at the files on each host in /gluster_bricks they all look fine. I 
only see question marks on the meta files when looking at them in the /rhev mounts.
Does anyone know how I can correct the attributes on these OVF_STORE files?  
I've tried putting each host in maintenance and re-activating to re-mount 
gluster volumes.  I've also stopped and started all gluster volumes.  
I'm thinking I might be able to solve this by shutting down all VMs, placing 
all hosts in maintenance, and safely restarting the entire cluster... but that 
may not be necessary?
On Fri, Dec 13, 2019 at 12:59 AM Jayme  wrote:

I believe I was able to get past this by stopping the engine volume then 
unmounting the glusterfs engine mount on all hosts and re-starting the volume. 
I was able to start hostedengine on host0.
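For anyone else hitting this, the rough sequence is something like the one below (a sketch rather than an exact transcript - wrapping it in global maintenance is probably wise so the HA agents don't restart things half-way through):

hosted-engine --set-maintenance --mode=global
systemctl stop ovirt-ha-agent ovirt-ha-broker               # on every host
umount /rhev/data-center/mnt/glusterSD/orchard0:_engine     # on every host
gluster volume stop engine
gluster volume start engine
systemctl start ovirt-ha-broker ovirt-ha-agent              # on every host
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-start                                    # on the host that should run the engine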
I'm still facing a few problems:
1. I'm still seeing this issue in each host's logs:
Dec 13 00:57:54 orchard0 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Failed scanning for OVF_STORE due to Command Volume.getInfo with args {'storagepoolID': '----', 'storagedomainID': 'd70b171e-7488-4d52-8cad-bbc581dbf16e', 'volumeID': u'2632f423-ed89-43d9-93a9-36738420b866', 'imageID': u'd909dc74-5bbd-4e39-b9b5-755c167a6ee8'} failed:#012(code=201, message=Volume does not exist: (u'2632f423-ed89-43d9-93a9-36738420b866',))
Dec 13 00:57:54 orchard0 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Unable to identify the OVF_STORE volume, falling back to initial vm.conf. Please ensure you already added your first data domain for regular VMs


2. Most of my gluster volumes still have un-healed entries which I can't seem 
to heal. I'm not sure what the answer is here.
On Fri, Dec 13, 2019 at 12:33 AM Jayme  wrote:

I was able to get the hosted engine started manually via virsh after 
re-creating a missing symlink in /var/run/vdsm/storage -- I later shut it down 
and am still having the same problem with the HA broker starting. It appears that 
the problem *might* be a corrupt HA metadata file, although gluster is not 
reporting split-brain on the engine volume.
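For reference, the missing piece was the per-image link that vdsm normally creates under /var/run/vdsm/storage when it prepares an image, and re-creating it was along these lines (the UUIDs are just the ones visible in the ls output below; the exact image UUID depends on which link is missing, so treat this as a rough workaround rather than a recipe):

mkdir -p /var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e
chown vdsm:kvm /var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e
# link the image directory on the gluster mount into the run directory
ln -s /rhev/data-center/mnt/glusterSD/orchard0:_engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/images/03a8ee8e-91f5-4e06-904b-9ed92a9706eb \
    /var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/03a8ee8e-91f5-4e06-904b-9ed92a9706eb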
I'm seeing this:
ls -al 
/rhev/data-center/mnt/glusterSD/orchard0\:_engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/ha_agent/
ls: cannot access 
/rhev/data-center/mnt/glusterSD/orchard0:_engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/ha_agent/hosted-engine.metadata:
 Input/output error
total 0
drwxr-xr-x. 2 vdsm kvm  67 Dec 13 00:30 .
drwxr-xr-x. 6 vdsm kvm  64 Aug  6  2018 ..
lrwxrwxrwx. 1 vdsm kvm 132 Dec 13 00:30 hosted-engine.lockspace -> 
/var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/03a8ee8e-91f5-4e06-904b-9ed92a9706eb/db2699ce-6349-4020-b52d-8ab11d01e26d
l?? ? ?    ?     ?            ? hosted-engine.metadata

Clearly showing some sort of issue with hosted-engine.metadata on the client 
mount.  
On each node, in /gluster_bricks, I see this:
# ls -al 
/gluster_bricks/engine/engine/d70b171e-7488-4d52-8cad-bbc581dbf16e/ha_agent/
total 0
drwxr-xr-x. 2 vdsm kvm  67 Dec 13 00:31 .
drwxr-xr-x. 6 vdsm kvm  64 Aug  6  2018 ..
lrwxrwxrwx. 2 vdsm kvm 132 Dec 13 00:31 hosted-engine.lockspace -> 
/var/run/vdsm/storage/d70b171e-7488-4d52-8cad-bbc581dbf16e/03a8ee8e-91f5-4e06-904b-9ed92a9706eb/db2699ce-6349-4020-b52d-8ab11d01e26d
lrwx

[ovirt-users] Re: Need to enable STP on ovirt bridges

2019-08-24 Thread hunter86_bg
 Well, this proves that the issue is not the bandwidth usage, but something 
else.

My personal opinion is that you should change the colocation - if that is an 
option at all ...

Best Regards,
Strahil Nikolov

On Saturday, 24 August 2019, 22:16:18 GMT+3, Curtis E. Combs Jr. wrote:
 
 I applied a 90 Mbps QoS rate limit, with 10 set for the shares, to both
interfaces of 2 of the hosts. My hosts' names are swm-01 and swm-02.

Creating a small VM from a Cinder template and running it gave me a test VM.

When I migrated it from swm-01 to swm-02, swm-01 immediately became
unresponsive to pings, to SSH, and to the oVirt interface, which marked
it as "NonResponsive" soon after the migration finished. The VM did finish
migrating, however I'm unsure whether that counts as a good migration or not.

Thank you, Strahil.

On Sat, Aug 24, 2019 at 12:39 PM Strahil  wrote:
>
> What is your bandwidth threshold for the network used for VM migration ?
> Can you set a 90 mbit/s threshold (yes, less than 100mbit/s) and try to 
> migrate a small (1 GB RAM) VM ?
>
> Do you see disconnects ?
>
> If not, raise the threshold a little and check again.
>
> Best Regards,
> Strahil Nikolov
>
> On Aug 23, 2019 23:19, "Curtis E. Combs Jr."  wrote:
> >
> > It took a while for my servers to come back on the network this time.
> > I think it's due to ovirt continuing to try to migrate the VMs around
> > like I requested. The 3 servers' names are "swm-01, swm-02 and
> > swm-03". Eventually (about 2-3 minutes ago) they all came back online.
> >
> > So I disabled and stopped the lldpad service.
> >
> > Nope. Started some more migrations and swm-02 and swm-03 disappeared
> > again. No ping, SSH hung, same as before - almost as soon as the
> > migration started.
> >
> > If you all have any ideas what switch-level setting might be enabled,
> > let me know, cause I'm stumped. I can add it to the ticket that's
> > requesting the port configurations. I've already added the port
> > numbers and switch name that I got from CDP.
> >
> > Thanks again, I really appreciate the help!
> > cecjr
> >
> >
> >
> > On Fri, Aug 23, 2019 at 3:28 PM Dominik Holler  wrote:
> > >
> > >
> > >
> > > On Fri, Aug 23, 2019 at 9:19 PM Dominik Holler  wrote:
> > >>
> > >>
> > >>
> > >> On Fri, Aug 23, 2019 at 8:03 PM Curtis E. Combs Jr. 
> > >>  wrote:
> > >>>
> > >>> This little cluster isn't in production or anything like that yet.
> > >>>
> > >>> So, I went ahead and used your ethtool commands to disable pause
> > >>> frames on both interfaces of each server. I then, chose a few VMs to
> > >>> migrate around at random.
> > >>>
> > >>> swm-02 and swm-03 both went out again. Unreachable. Can't ping, can't
> > >>> ssh, and the SSH session that I had open was unresponsive.
> > >>>
> > >>> Any other ideas?
> > >>>
> > >>
> > >> Sorry, no. It looks like two different NICs with different drivers and 
> > >> firmware go down together.
> > >> This is a strong indication that the root cause is related to the switch.
> > >> Maybe you can get some information about the switch config by
> > >> 'lldptool get-tlv -n -i em1'
> > >>
> > >
> > > Another guess:
> > > After the optional 'lldptool get-tlv -n -i em1'
> > > 'systemctl stop lldpad'
> > > another try to migrate.
> > >
> > >
> > >>
> > >>
> > >>>
> > >>> On Fri, Aug 23, 2019 at 1:50 PM Dominik Holler  
> > >>> wrote:
> > >>> >
> > >>> >
> > >>> >
> > >>> > On Fri, Aug 23, 2019 at 6:45 PM Curtis E. Combs Jr. 
> > >>> >  wrote:
> > >>> >>
> > >>> >> Unfortunately, I can't check on the switch. Trust me, I've tried.
> > >>> >> These servers are in a Co-Lo and I've put 5 tickets in asking about
> > >>> >> the port configuration. They just get ignored - but that's par for 
> > >>> >> the course for IT here. Only about 2 out of 10 of our tickets get any
> > >>> >> response and usually the response doesn't help. Then the system they
> > >>> >> use auto-closes the ticket. That was why I was suspecting STP before.
> > >>> >>
> > >>> >> I can do ethtool. I do have root on these servers, though. Are you
> > >>> >> trying to get me to turn off link-speed auto-negotiation? Would you
> > >>> >> like me to try that?
> > >>> >>
> > >>> >
> > >>> > It is just a suspicion, that the reason is pause frames.
> > >>> > Let's start on a NIC which is not used for ovirtmgmt, I guess em1.
> > >>> > Does 'ethtool -S em1  | grep pause' show something?
> > >>> > Does 'ethtool em1 | grep pause' indicate support for pause?
> > >>> > The current config is shown by 'ethtool -a em1'.
> > >>> > '-A autoneg' "Specifies whether pause autonegotiation should be 
> > >>> > enabled." according to ethtool doc.
> > >>> > Assuming flow control is enabled by default, I would try to disable 
> > >>> > it via
> > >>> > 'ethtool -A em1 autoneg off rx off tx off'
> > >>> > and check if it is applied via
> > >>> > 'ethtool -a em1'
> > >>> > and check if the behavior under load changes.
> > >>> >
> > >>> >
> > >>> >
> > >>> >>
> > >>> >> On Fri, Aug 23, 2019 at 12:24 PM Domin

[ovirt-users] Re: Moving ovirt engine disk to another storage volume

2019-08-24 Thread hunter86_bg
 Not exactly...
You need to access access.redhat.com and, once you get prompted and agree with 
Red Hat's terms, you will be able to access downloads (of course, no support) 
and Red Hat solutions.

Best Regards,
Strahil Nikolov
 On Saturday, 24 August 2019, 21:00:02 GMT+3, Erick Perez - Quadrian Enterprises wrote:
 
 Seems it's not true:
"You're attempting to access content that requires a Red Hat login
with a complete profile. "
It seems developer profiles are "not" complete profiles.

-
Erick Perez
Soluciones Tacticas Pasivas/Activas de Inteligencia y Analitica de
Datos para Gobiernos
Quadrian Enterprises S.A. - Panama, Republica de Panama
Skype chat: eaperezh
WhatsApp IM: +507-6675-5083
-

On Sat, Aug 24, 2019 at 11:44 AM Scott Worthington
 wrote:
>
> Subscriptions are free; please join the developer program with Red Hat (also 
> free) to see the article.
>
> On Sat, Aug 24, 2019, 12:11 PM Erick Perez - Quadrian Enterprises 
>  wrote:
>>
>> I found this article link from Red Hat. Unfortunately it needs a subscription:
>> https://access.redhat.com/solutions/2998291
>>
>>
>> -
>> Erick Perez
>> Soluciones Tacticas Pasivas/Activas de Inteligencia y Analitica de
>> Datos para Gobiernos
>> Quadrian Enterprises S.A. - Panama, Republica de Panama
>> Skype chat: eaperezh
>> WhatsApp IM: +507-6675-5083
>> -
>>
>> On Sat, Aug 24, 2019 at 10:43 AM Erick Perez - Quadrian Enterprises
>>  wrote:
>> >
>> > Good morning,
>> >
>> > I am running oVirt 4.3.5 on CentOS 7.6 with one virt node and an NFS
>> > storage node. I did the self-hosted engine setup and I plan to add a
>> > second virt host in a few days.
>> >
>> > I need to do heavy maintenance on the storage node (VDO and mdadm
>> > things) and would like to know how (or a link to an article) can I
>> > move the ovirt engine disk to another storage.
>> >
>> > Currently the NFS storage has two volumes (volA, volB) and the physical
>> > host has spare space too. Virtual machines are in VolB and the engine
>> > is in VolA.
>> >
>> > I would like to move the engine disk from VolA to VolB or to local storage.
>> >
>> > BTW, I am not sure if I should say "move the engine" or "move the
>> > hosted_storage domain".
>> >
>> > thanks in advance.
>> >
>> > -
>> > Erick Perez


[ovirt-users] HyperConverged Self-Hosted deployment fails

2019-01-19 Thread hunter86_bg
Hello Community,

recently I managed somehow to deploy a 2 node cluster on GlusterFS, but after 
a serious engine failure I have decided to start from scratch.
What I have done so far:
1. Installed CentOS 7 from scratch
2. Added the oVirt repositories, vdo and cockpit for oVirt (rough commands below)
3. Deployed the gluster cluster using cockpit
4. Tried to deploy the hosted-engine, which has failed several times.
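For step 2, it was roughly the following (package names from memory for the 4.2 stream, so double-check them against the docs before copy/pasting):

yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
yum install cockpit cockpit-ovirt-dashboard vdsm-gluster glusterfs-server vdo kmod-kvdo
systemctl enable cockpit.socket
systemctl start cockpit.socket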

Up to now I have detected that ovirt-ha-agent is giving:

Jan 19 13:54:57 ovirt1.localdomain ovirt-ha-agent[16992]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 131, in _run_agent
    return action(he)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 55, in action_proper
    return he.start_monitoring()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 413, in start_monitoring
    self._initialize_broker()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 535, in _initialize_broker
    m.get('options', {}))
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 83, in start_monitor
    .format(type, options, e))
RequestError: Failed to start monitor ping, options {'addr': '192.168.1.1'}: [Errno 2] No such file or directory

According to https://access.redhat.com/solutions/3353391 this happens when 
/etc/ovirt-hosted-engine/hosted-engine.conf is empty, but mine looks OK:

[root@ovirt1 tmp]# cat /etc/ovirt-hosted-engine/hosted-engine.conf
fqdn=engine.localdomain
vm_disk_id=bb0a9839-a05d-4d0a-998c-74da539a9574
vm_disk_vol_id=c1fc3c59-bc6e-4b74-a624-557a1a62a34f
vmid=d0e695da-ec1a-4d6f-b094-44a8cac5f5cd
storage=ovirt1.localdomain:/engine
nfs_version=
mnt_options=backup-volfile-servers=ovirt2.localdomain:ovirt3.localdomain
conf=/var/run/ovirt-hosted-engine-ha/vm.conf
host_id=1
console=vnc
domainType=glusterfs
spUUID=----
sdUUID=444e524e-9008-48f8-b842-1ce7b95bf248
connectionUUID=e29cf818-5ee5-46e1-85c1-8aeefa33e95d
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
ca_subject="C=EN, L=Test, O=Test, CN=Test"
vdsm_use_ssl=true
gateway=192.168.1.1
bridge=ovirtmgmt
metadata_volume_UUID=a3be2390-017f-485b-8f42-716fb6094692
metadata_image_UUID=368fb8dc-6049-4ef0-8cf8-9d3c4d772d59
lockspace_volume_UUID=41762f85-5d00-488f-bcd0-3de49ec39e8b
lockspace_image_UUID=de100b9b-07ac-4986-9d86-603475572510
conf_volume_UUID=4306f6d6-7fe9-499d-81a5-6b354e8ecb79
conf_image_UUID=d090dd3f-fc62-442a-9710-29eeb56b0019

# The following are used only for iSCSI storage
iqn=
portal=
user=
password=
port=

Ovirt-ha-agent version is:
ovirt-hosted-engine-ha-2.2.18-1.el7.noarch

Can you guide me on how to resolve this issue and deploy the self-hosted engine?
Where should I start?
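So far I have only looked at the agent; since the agent talks to ovirt-ha-broker over a local socket, I guess the broker side is the next thing to check, along these lines:

systemctl status ovirt-ha-broker ovirt-ha-agent
journalctl -u ovirt-ha-broker --since today
ls -l /var/run/ovirt-hosted-engine-ha/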


[ovirt-users] Re: ovirt 4.2.7.1 fails to deploy hosted engine on GlusterFS

2018-12-09 Thread hunter86_bg
It seems that "Use existing" is not working.

I have tried multiple times to redeploy the engine and it always fails. Here 
is the last log from vdsm:

2018-12-09 15:56:40,269+0200 INFO  (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompreactor:132)
2018-12-09 15:56:40,310+0200 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:573)
2018-12-09 15:56:40,317+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:573)
2018-12-09 15:56:40,321+0200 INFO  (jsonrpc/4) [vdsm.api] START getStorageDomainInfo(sdUUID=u'143d800a-06e1-48b5-aa7c-21cb9f3a89a7', options=None) from=::1,47806, task_id=b3b5e7aa-998c-419e-9958-fe762dbf6d18 (api:46)
2018-12-09 15:56:40,321+0200 INFO  (jsonrpc/4) [storage.StorageDomain] sdUUID=143d800a-06e1-48b5-aa7c-21cb9f3a89a7 (fileSD:534)
2018-12-09 15:56:40,324+0200 INFO  (jsonrpc/4) [vdsm.api] FINISH getStorageDomainInfo return={'info': {'uuid': u'143d800a-06e1-48b5-aa7c-21cb9f3a89a7', 'version': '4', 'role': 'Master', 'remotePath': 'ovirt1:/engine', 'type': 'GLUSTERFS', 'class': 'Data', 'pool': ['7845b386-fbb3-11e8-bfa8-00163e54fd43'], 'name': 'hosted_storage'}} from=::1,47806, task_id=b3b5e7aa-998c-419e-9958-fe762dbf6d18 (api:52)
2018-12-09 15:56:40,325+0200 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call StorageDomain.getInfo succeeded in 0.01 seconds (__init__:573)
2018-12-09 15:56:40,328+0200 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:573)
2018-12-09 15:56:40,332+0200 INFO  (jsonrpc/0) [vdsm.api] START getStorageDomainInfo(sdUUID=u'143d800a-06e1-48b5-aa7c-21cb9f3a89a7', options=None) from=::1,47806, task_id=19f713e3-7387-4265-a821-f636b2415f42 (api:46)
2018-12-09 15:56:40,332+0200 INFO  (jsonrpc/0) [storage.StorageDomain] sdUUID=143d800a-06e1-48b5-aa7c-21cb9f3a89a7 (fileSD:534)
2018-12-09 15:56:40,336+0200 INFO  (jsonrpc/0) [vdsm.api] FINISH getStorageDomainInfo return={'info': {'uuid': u'143d800a-06e1-48b5-aa7c-21cb9f3a89a7', 'version': '4', 'role': 'Master', 'remotePath': 'ovirt1:/engine', 'type': 'GLUSTERFS', 'class': 'Data', 'pool': ['7845b386-fbb3-11e8-bfa8-00163e54fd43'], 'name': 'hosted_storage'}} from=::1,47806, task_id=19f713e3-7387-4265-a821-f636b2415f42 (api:52)
2018-12-09 15:56:40,337+0200 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call StorageDomain.getInfo succeeded in 0.00 seconds (__init__:573)
2018-12-09 15:56:40,341+0200 INFO  (jsonrpc/5) [vdsm.api] START connectStorageServer(domType=7, spUUID=u'----', conList=[{u'id': u'e29cf818-5ee5-46e1-85c1-8aeefa33e95d', u'vfs_type': u'glusterfs', u'mnt_options': u'backup-volfile-servers=ovirt2:glarbiter', u'connection': u'ovirt1:/engine', u'user': u'kvm'}], options=None) from=::1,47806, task_id=2d919607-796e-4528-9f54-4dc437eddada (api:46)
2018-12-09 15:56:40,345+0200 INFO  (jsonrpc/5) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 0, 'id': u'e29cf818-5ee5-46e1-85c1-8aeefa33e95d'}]} from=::1,47806, task_id=2d919607-796e-4528-9f54-4dc437eddada (api:52)
2018-12-09 15:56:40,345+0200 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 0.01 seconds (__init__:573)
2018-12-09 15:56:40,348+0200 INFO  (jsonrpc/2) [vdsm.api] START getStorageDomainStats(sdUUID=u'143d800a-06e1-48b5-aa7c-21cb9f3a89a7', options=None) from=::1,47806, task_id=47edd838-f7f5-448b-9b6a-a650460614ac (api:46)
2018-12-09 15:56:40,556+0200 INFO  (jsonrpc/2) [storage.StorageDomain] Removing remnants of deleted images [] (fileSD:734)
2018-12-09 15:56:40,557+0200 INFO  (jsonrpc/2) [vdsm.api] FINISH getStorageDomainStats return={'stats': {'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree': '103464173568', 'disktotal': '107313364992', 'mdafree': 0}} from=::1,47806, task_id=47edd838-f7f5-448b-9b6a-a650460614ac (api:52)
2018-12-09 15:56:40,558+0200 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call StorageDomain.getStats succeeded in 0.21 seconds (__init__:573)
2018-12-09 15:56:40,564+0200 INFO  (jsonrpc/3) [vdsm.api] START prepareImage(sdUUID=u'143d800a-06e1-48b5-aa7c-21cb9f3a89a7', spUUID=u'----', imgUUID=u'dabcff49-56c0-4557-9c82-3df9e6c11991', leafUUID=u'6a4441f0-641e-49c0-a117-7913110874c6', allowIllegal=False) from=::1,47806, task_id=edb0b0fa-d528-426e-8250-04c0f7864224 (api:46)
2018-12-09 15:56:40,584+0200 INFO  (jsonrpc/3) [vdsm.api] FINISH prepareImage error=Volume does not exist: (u'6a4441f0-641e-49c0-a117-7913110874c6',) from=::1,47806, task_id=edb0b0fa-d528-426e-8250-04c0f7864224 (api:50)
2018-12-09 15:56:40,584+0200 ERROR (jsonrpc/3) [storage.TaskManager.Task] (Task='edb0b0fa-d528-426e-8250-04c0f7864224') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **karg

[ovirt-users] Re: ovirt 4.2.7.1 fails to deploy hosted engine on GlusterFS

2018-11-27 Thread hunter86_bg
It seems that I have picked the wrong deploy method. Switching to 
"HyperConverged" -> "Use existing" fixes the error.


[ovirt-users] ovirt 4.2.7.1 fails to deploy hosted engine on GlusterFS

2018-11-27 Thread hunter86_bg
Hello Community,

I'm trying to deploy a hosted engine on GlusterFS which fails with the 
following error:
[ INFO ] TASK [Add glusterfs storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Failed 
to fetch Gluster Volume List]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "deprecations": 
[{"msg": "The 'ovirt_storage_domains' module is being renamed 
'ovirt_storage_domain'", "version": 2.8}], "msg": "Fault reason is \"Operation 
Failed\". Fault detail is \"[Failed to fetch Gluster Volume List]\". HTTP 
response code is 400."}

I have deployed GlusterFS via the HyperConverged  Option in Cockpit and the 
volumes are up and running.

[root@ovirt1 ~]# gluster volume status engine
Status of volume: engine
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick ovirt1:/gluster_bricks/engine/engine  49152 0  Y   26268
Brick ovirt2:/gluster_bricks/engine/engine  49152 0  Y   24116
Brick glarbiter:/gluster_bricks/engine/engine  49152 0  Y   23526
Self-heal Daemon on localhost   N/A   N/AY   31229
Self-heal Daemon on ovirt2  N/A   N/AY   27097
Self-heal Daemon on glarbiter   N/A   N/AY   25888

Task Status of Volume engine
--
There are no active volume tasks

I'm using the following guide: 
https://ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
And on step 4 - Storage - I have defined it as follows:
Storage Type: Gluster
Storage Connection:  ovirt1.localdomain:/gluster_bricks/engine/
Mount Options: backup-volfile-servers=ovirt2.localdomain:glarbiter.localdomain
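In case it helps with the diagnosis, these are the kinds of checks I can run on the host (assuming the engine fetches the volume list through vdsm, so the gluster bits of vdsm would need to be installed):

gluster volume list
gluster volume info engine
rpm -q vdsm-gluster glusterfs-server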

Can someone hint me where the problem is?


[ovirt-users] Any plans for oVirt and SBD fencing (a.k.a. poison pill)

2018-11-08 Thread hunter86_bg
Hello Community,

Do you have any idea if an SBD fencing mechanism will be implemented in the 
near future?
It would be nice to be able to reset a host just like in a corosync/pacemaker 
cluster implementation.


[ovirt-users] Re: oVirt deployment with GlusterFS with replica 3 arbiter 1

2018-11-02 Thread hunter86_bg
For anyone who stumbles upon this - I have managed to test this setup with VMs 
under KVM.
It seems that all 3 systems need to be oVirt Nodes, while my 3rd node has 1 CPU 
and 1GB of RAM with 20GB of local storage - so most probably this setup is 
doable.
I still need to test the HA once the host is down - the engine should be 
powered up on another host and GlusterFS should still be running.


[ovirt-users] oVirt deployment with GlusterFS with replica 3 arbiter 1

2018-11-01 Thread hunter86_bg
Hello community,

I am planning to deploy a lab with oVirt and I would like to know if the design 
in my mind has any flaws.
The current plan is to deploy oVirt on 2 workstations with 24GB of RAM and 8 
threads each, while the 3rd machine will be a VM in a public cloud and will be 
used as the GlusterFS arbiter.

Is it a problem if the arbiter is a vanilla CentOS 7 machine? I am afraid that 
oVirt will require all GlusterFS nodes to also be oVirt hosts, but the VM will 
lack the necessary resources and I would prefer not to use that 3rd system as a 
host.
Is this idea possible?
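For context, the volume layout I have in mind is a standard replica 3 arbiter 1 volume, something like this (hostnames and brick paths are just placeholders):

gluster volume create engine replica 3 arbiter 1 \
    ovirt1:/gluster_bricks/engine/engine \
    ovirt2:/gluster_bricks/engine/engine \
    arbiter-vm:/gluster_bricks/engine/engine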

Thanks for reading this confusing post and I hope I managed to explain my idea.

Best Regards,
Strahil 