[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread Strahil Nikolov via Users
You can't add the new volume as it contains the same data (UUID) as the old
one, thus you need to detach the old one before adding the new one - of course
this means downtime for all VMs on that storage.

As you can see, downgrading is simpler. For me v6.5 was working, while
anything above (6.6+) was causing complete lockdown. Also v7.0 was working,
but it's only supported in oVirt 4.4.
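
A rough sketch of that per-node downgrade on CentOS 7 (the package set and
the 6.5-1.el7 version string are assumptions - check what your gluster repo
actually provides before running anything):

systemctl stop glusterd
yum downgrade glusterfs-server-6.5-1.el7 glusterfs-6.5-1.el7 \
    glusterfs-fuse-6.5-1.el7 glusterfs-api-6.5-1.el7 \
    glusterfs-libs-6.5-1.el7 glusterfs-cli-6.5-1.el7 \
    glusterfs-client-xlators-6.5-1.el7
systemctl start glusterd
gluster peer status   # wait for peers/heals to settle before the next node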

Best Regards,
Strahil Nikolov

On 22 June 2020 at 7:21:15 GMT+03:00, C Williams
wrote:
>Another question
>
>What version could I downgrade to safely ? I am at 6.9 .
>
>Thank You For Your Help !!
>
>On Sun, Jun 21, 2020 at 11:38 PM Strahil Nikolov
>
>wrote:
>
>> You are definitely reading it wrong.
>> 1. I didn't create a new storage domain on top of this new volume.
>> 2. I used the CLI.
>>
>> Something like this  (in your case it should be 'replica 3'):
>> gluster volume create newvol replica 3 arbiter 1
>ovirt1:/new/brick/path
>> ovirt2:/new/brick/path ovirt3:/new/arbiter/brick/path
>> gluster volume start newvol
>>
>> #Detach oldvol from ovirt
>>
>> mount -t glusterfs  ovirt1:/oldvol /mnt/oldvol
>> mount -t glusterfs ovirt1:/newvol /mnt/newvol
>> cp -a /mnt/oldvol/* /mnt/newvol
>>
>> #Add only newvol as a storage domain in oVirt
>> #Import VMs
>>
>> I still think that you should downgrade your gluster packages!!!
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On 22 June 2020 at 0:43:46 GMT+03:00, C Williams
>> wrote:
>> >Strahil,
>> >
>> >It sounds like you used a "System Managed Volume" for the new
>> >storage domain, is that correct?
>> >
>> >Thank You For Your Help !
>> >
>> >On Sun, Jun 21, 2020 at 5:40 PM C Williams 
>> >wrote:
>> >
>> >> Strahil,
>> >>
>> >> So you made another oVirt Storage Domain -- then copied the data
>with
>> >cp
>> >> -a from the failed volume to the new volume.
>> >>
>> >> At the root of the volume there will be the old domain folder id, e.g.
>> >> 5fe3ad3f-2d21-404c-832e-4dc7318ca10d
>> >>  in my case. Did that cause issues with making the new domain
>since
>> >it is
>> >> the same folder id as the old one ?
>> >>
>> >> Thank You For Your Help !
>> >>
>> >> On Sun, Jun 21, 2020 at 5:18 PM Strahil Nikolov
>> >
>> >> wrote:
>> >>
>> >>> In my situation I had  only the ovirt nodes.
>> >>>
>> >>> On 21 June 2020 at 22:43:04 GMT+03:00, C Williams
>> >>> wrote:
>> >>> >Strahil,
>> >>> >
>> >>> >So should I make the target volume on 3 bricks which do not have
>> >ovirt
>> >>> >--
>> >>> >just gluster ? In other words (3) CentOS 7 hosts ?
>> >>> >
>> >>> >Thank You For Your Help !
>> >>> >
>> >>> >On Sun, Jun 21, 2020 at 3:08 PM Strahil Nikolov
>> >
>> >>> >wrote:
>> >>> >
>> >>> >> I created a fresh volume (which is not an ovirt storage domain),
>> >>> >> set the original storage domain in maintenance and detached it.
>> >>> >> Then I 'cp -a' the data from the old to the new volume. Next, I
>> >>> >> just added the new storage domain (the old one was a kind of a
>> >>> >> 'backup') - pointing to the new volume name.
>> >>> >>
>> >>> >> If you observe issues, I would recommend you to downgrade gluster
>> >>> >> packages one node at a time. Then you might be able to restore
>> >>> >> your oVirt operations.
>> >>> >>
>> >>> >> Best  Regards,
>> >>> >> Strahil  Nikolov
>> >>> >>
>> >>> >> On 21 June 2020 at 18:01:31 GMT+03:00, C Williams
>> >>> >> wrote:
>> >>> >> >Strahil,
>> >>> >> >
>> >>> >> >Thanks for the follow up !
>> >>> >> >
>> >>> >> >How did you copy the data to another volume ?
>> >>> >> >
>> >>> >> >I have set up another storage domain GLCLNEW1 with a new
>volume
>> >>> >imgnew1
>> >>> >> >.
>> >>> >> >How would you copy all of the data from the problematic
>domain
>> >GLCL3
>> >>> >> >with
>> >>> >> >volume images3 to GLCLNEW1 and volume imgnew1 and preserve
>all
>> >the
>> >>> >VMs,
>> >>> >> >VM
>> >>> >> >disks, settings, etc. ?
>> >>> >> >
>> >>> >> >Remember all of the regular ovirt disk copy, disk move, VM
>> >export
>> >>> >> >tools
>> >>> >> >are failing and my VMs and disks are trapped on domain GLCL3
>and
>> >>> >volume
>> >>> >> >images3 right now.
>> >>> >> >
>> >>> >> >Please let me know
>> >>> >> >
>> >>> >> >Thank You For Your Help !
>> >>> >> >
>> >>> >> >
>> >>> >> >
>> >>> >> >
>> >>> >> >
>> >>> >> >On Sun, Jun 21, 2020 at 8:27 AM Strahil Nikolov
>> >>> >
>> >>> >> >wrote:
>> >>> >> >
>> >>> >> >> Sorry to hear that.
>> >>> >> >> I can say that for me 6.5 was working, while 6.6 didn't and I
>> >>> >> >> upgraded to 7.0 .
>> >>> >> >> In the end, I ended up creating a new fresh volume and
>> >>> >> >> physically copying the data there, then I detached the storage
>> >>> >> >> domains and attached the new ones (which held the old data),
>> >>> >> >> but I could afford the downtime.
>> >>> >> >> Also, I can say that v7.0 (but not 7.1 or anything later) also
>> >>> >> >> worked without the 

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread C Williams
Another question

What version could I downgrade to safely ? I am at 6.9 .

Thank You For Your Help !!

On Sun, Jun 21, 2020 at 11:38 PM Strahil Nikolov 
wrote:

> You are definitely reading it wrong.
> 1. I didn't create a new storage domain on top of this new volume.
> 2. I used the CLI.
>
> Something like this  (in your case it should be 'replica 3'):
> gluster volume create newvol replica 3 arbiter 1 ovirt1:/new/brick/path
> ovirt2:/new/brick/path ovirt3:/new/arbiter/brick/path
> gluster volume start newvol
>
> #Detach oldvol from ovirt
>
> mount -t glusterfs  ovirt1:/oldvol /mnt/oldvol
> mount -t glusterfs ovirt1:/newvol /mnt/newvol
> cp -a /mnt/oldvol/* /mnt/newvol
>
> #Add only newvol as a storage domain in oVirt
> #Import VMs
>
> I still think that you should downgrade your gluster packages!!!
>
> Best Regards,
> Strahil Nikolov
>
> On 22 June 2020 at 0:43:46 GMT+03:00, C Williams
> wrote:
> >Strahil,
> >
> >It sounds like you used a "System Managed Volume" for the new storage
> >domain, is that correct?
> >
> >Thank You For Your Help !
> >
> >On Sun, Jun 21, 2020 at 5:40 PM C Williams 
> >wrote:
> >
> >> Strahil,
> >>
> >> So you made another oVirt Storage Domain -- then copied the data with
> >cp
> >> -a from the failed volume to the new volume.
> >>
> >> At the root of the volume there will be the old domain folder id, e.g.
> >> 5fe3ad3f-2d21-404c-832e-4dc7318ca10d
> >>  in my case. Did that cause issues with making the new domain since
> >it is
> >> the same folder id as the old one ?
> >>
> >> Thank You For Your Help !
> >>
> >> On Sun, Jun 21, 2020 at 5:18 PM Strahil Nikolov
> >
> >> wrote:
> >>
> >>> In my situation I had  only the ovirt nodes.
> >>>
> >>> On 21 June 2020 at 22:43:04 GMT+03:00, C Williams
> >>> wrote:
> >>> >Strahil,
> >>> >
> >>> >So should I make the target volume on 3 bricks which do not have
> >ovirt
> >>> >--
> >just gluster ? In other words (3) CentOS 7 hosts ?
> >>> >
> >>> >Thank You For Your Help !
> >>> >
> >>> >On Sun, Jun 21, 2020 at 3:08 PM Strahil Nikolov
> >
> >>> >wrote:
> >>> >
> >>> >> I created a fresh volume (which is not an ovirt storage domain),
> >>> >> set the original storage domain in maintenance and detached it.
> >>> >> Then I 'cp -a' the data from the old to the new volume. Next, I
> >>> >> just added the new storage domain (the old one was a kind of a
> >>> >> 'backup') - pointing to the new volume name.
> >>> >>
> >>> >> If you observe issues, I would recommend you to downgrade gluster
> >>> >> packages one node at a time. Then you might be able to restore
> >>> >> your oVirt operations.
> >>> >>
> >>> >> Best  Regards,
> >>> >> Strahil  Nikolov
> >>> >>
> >>> >> On 21 June 2020 at 18:01:31 GMT+03:00, C Williams
> >>> >> wrote:
> >>> >> >Strahil,
> >>> >> >
> >>> >> >Thanks for the follow up !
> >>> >> >
> >>> >> >How did you copy the data to another volume ?
> >>> >> >
> >>> >> >I have set up another storage domain GLCLNEW1 with a new volume
> >>> >imgnew1
> >>> >> >.
> >>> >> >How would you copy all of the data from the problematic domain
> >GLCL3
> >>> >> >with
> >>> >> >volume images3 to GLCLNEW1 and volume imgnew1 and preserve all
> >the
> >>> >VMs,
> >>> >> >VM
> >>> >> >disks, settings, etc. ?
> >>> >> >
> >>> >> >Remember all of the regular ovirt disk copy, disk move, VM
> >export
> >>> >> >tools
> >>> >> >are failing and my VMs and disks are trapped on domain GLCL3 and
> >>> >volume
> >>> >> >images3 right now.
> >>> >> >
> >>> >> >Please let me know
> >>> >> >
> >>> >> >Thank You For Your Help !
> >>> >> >
> >>> >> >
> >>> >> >
> >>> >> >
> >>> >> >
> >>> >> >On Sun, Jun 21, 2020 at 8:27 AM Strahil Nikolov
> >>> >
> >>> >> >wrote:
> >>> >> >
> >>> >> >> Sorry to hear that.
> >>> >> >> I can say that for me 6.5 was working, while 6.6 didn't and I
> >>> >> >> upgraded to 7.0 .
> >>> >> >> In the end, I ended up creating a new fresh volume and
> >>> >> >> physically copying the data there, then I detached the storage
> >>> >> >> domains and attached the new ones (which held the old data),
> >>> >> >> but I could afford the downtime.
> >>> >> >> Also, I can say that v7.0 (but not 7.1 or anything later) also
> >>> >> >> worked without the ACL issue, but it causes some trouble in
> >>> >> >> oVirt - so avoid that unless you have no other options.
> >>> >> >>
> >>> >> >> Best Regards,
> >>> >> >> Strahil  Nikolov
> >>> >> >>
> >>> >> >>
> >>> >> >>
> >>> >> >>
> >>> >> >> On 21 June 2020 at 4:39:46 GMT+03:00, C Williams
> >>> >> >> wrote:
> >>> >> >> >Hello,
> >>> >> >> >
> >>> >> >> >Upgrading didn't help
> >>> >> >> >
> >>> >> >> >Still acl errors trying to use a Virtual Disk from a VM
> >>> >> >> >
> >>> >> >> >[root@ov06 bricks]# tail bricks-brick04-images3.log | grep
> >acl
> >>> >> >> >[2020-06-21 01:33:45.665888] I [MSGID: 139001]
> >>> >> >> 

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread C Williams
Here is what I did to make my volume

 gluster volume create imgnew2a replica 3 transport tcp \
   ov12.strg.srcle.com:/bricks/brick10/imgnew2a \
   ov13.strg.srcle.com:/bricks/brick11/imgnew2a \
   ov14.strg.srcle.com:/bricks/brick12/imgnew2a

on a host with the old volume I did this

 mount -t glusterfs yy.yy.24.18:/images3/ /mnt/test/     # old defective volume
 ls /mnt/test
 mount
 mkdir /mnt/test1trg
 mount -t glusterfs yy.yy.24.24:/imgnew2a /mnt/test1trg  # new volume
 mount
 ls /mnt/test
 cp -a /mnt/test/* /mnt/test1trg/

When I tried to add the storage domain -- I got the errors described
previously about needing to clean out the old domain.
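
Side note: oVirt's "New Domain" flow expects an empty path; a volume that
already holds a domain UUID tree normally has to be brought in through
"Import Domain" instead. A quick way to see what oVirt found on the new
volume (the mount point is illustrative; the UUID directory and the
__DIRECT_IO_TEST__ marker were copied over from the old domain):

 mount -t glusterfs yy.yy.24.24:/imgnew2a /mnt/check
 ls /mnt/check   # e.g. 5fe3ad3f-2d21-404c-832e-4dc7318ca10d  __DIRECT_IO_TEST__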

Thank You For Your Help !

On Mon, Jun 22, 2020 at 12:01 AM C Williams  wrote:

> Thanks Strahil
>
> I made a new gluster volume using only gluster CLI. Mounted the old volume
> and the new volume. Copied my data from the old volume to the new domain.
> Set the volume options like the old domain via the CLI. Tried to make a new
> storage domain using the paths to the new servers. However, oVirt
> complained that there was already a domain there and that I needed to clean
> it first.
>
> What to do ?
>
> Thank You For Your Help !
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KR4ARQ5IN6LEZ2HCAPBEH5G6GA3LPRJ2/


[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread C Williams
Thanks Strahil

I made a new gluster volume using only gluster CLI. Mounted the old volume
and the new volume. Copied my data from the old volume to the new domain.
Set the volume options like the old domain via the CLI. Tried to make a new
storage domain using the paths to the new servers. However, oVirt
complained that there was already a domain there and that I needed to clean
it first.
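
One way to mirror those options with the gluster CLI, as a sketch (the
option shown is just an example - the real list is whatever 'Options
Reconfigured:' shows on the old volume):

 gluster volume info images3      # note the 'Options Reconfigured:' section
 gluster volume set imgnew2a performance.strict-o-direct on   # repeat per option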

What to do ?

Thank You For Your Help !
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YQJYO6CK4WQHQLM2FJ435MXH3H2BI6JL/


[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread Strahil Nikolov via Users
You are definitely reading it wrong.
1. I didn't create a new storage domain on top of this new volume.
2. I used the CLI.

Something like this  (in your case it should be 'replica 3'):
gluster volume create newvol replica 3 arbiter 1 ovirt1:/new/brick/path 
ovirt2:/new/brick/path ovirt3:/new/arbiter/brick/path
gluster volume start newvol

#Detach oldvol from ovirt

mount -t glusterfs  ovirt1:/oldvol /mnt/oldvol
mount -t glusterfs ovirt1:/newvol /mnt/newvol
cp -a /mnt/oldvol/* /mnt/newvol

#Add only newvol as a storage domain in oVirt
#Import VMs
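
A couple of sanity checks before adding newvol, as a sketch (uid/gid 36 is
the usual vdsm:kvm ownership on oVirt file domains; the recursive diff is
optional and slow on large images):

ls -ln /mnt/newvol                  # expect the old domain UUID directory, owned 36:36
diff -qr /mnt/oldvol /mnt/newvol    # spot-check the copy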

I still think that you should downgrade your gluster packages!!!

Best Regards,
Strahil Nikolov

On 22 June 2020 at 0:43:46 GMT+03:00, C Williams
wrote:
>Strahil,
>
>It sounds like you used a "System Managed Volume" for the new storage
>domain, is that correct?
>
>Thank You For Your Help !
>
>On Sun, Jun 21, 2020 at 5:40 PM C Williams 
>wrote:
>
>> Strahil,
>>
>> So you made another oVirt Storage Domain -- then copied the data with
>cp
>> -a from the failed volume to the new volume.
>>
>> At the root of the volume there will be the old domain folder id, e.g.
>> 5fe3ad3f-2d21-404c-832e-4dc7318ca10d
>>  in my case. Did that cause issues with making the new domain since
>it is
>> the same folder id as the old one ?
>>
>> Thank You For Your Help !
>>
>> On Sun, Jun 21, 2020 at 5:18 PM Strahil Nikolov
>
>> wrote:
>>
>>> In my situation I had  only the ovirt nodes.
>>>
>>> On 21 June 2020 at 22:43:04 GMT+03:00, C Williams
>>> wrote:
>>> >Strahil,
>>> >
>>> >So should I make the target volume on 3 bricks which do not have
>ovirt
>>> >--
>>> >just gluster ? In other words (3) CentOS 7 hosts ?
>>> >
>>> >Thank You For Your Help !
>>> >
>>> >On Sun, Jun 21, 2020 at 3:08 PM Strahil Nikolov
>
>>> >wrote:
>>> >
>>> >> I created a fresh volume (which is not an ovirt storage domain),
>>> >> set the original storage domain in maintenance and detached it.
>>> >> Then I 'cp -a' the data from the old to the new volume. Next, I
>>> >> just added the new storage domain (the old one was a kind of a
>>> >> 'backup') - pointing to the new volume name.
>>> >>
>>> >> If you observe issues, I would recommend you to downgrade gluster
>>> >> packages one node at a time. Then you might be able to restore
>>> >> your oVirt operations.
>>> >>
>>> >> Best  Regards,
>>> >> Strahil  Nikolov
>>> >>
>>> >> On 21 June 2020 at 18:01:31 GMT+03:00, C Williams
>>> >> wrote:
>>> >> >Strahil,
>>> >> >
>>> >> >Thanks for the follow up !
>>> >> >
>>> >> >How did you copy the data to another volume ?
>>> >> >
>>> >> >I have set up another storage domain GLCLNEW1 with a new volume
>>> >imgnew1
>>> >> >.
>>> >> >How would you copy all of the data from the problematic domain
>GLCL3
>>> >> >with
>>> >> >volume images3 to GLCLNEW1 and volume imgnew1 and preserve all
>the
>>> >VMs,
>>> >> >VM
>>> >> >disks, settings, etc. ?
>>> >> >
>>> >> >Remember all of the regular ovirt disk copy, disk move, VM
>export
>>> >> >tools
>>> >> >are failing and my VMs and disks are trapped on domain GLCL3 and
>>> >volume
>>> >> >images3 right now.
>>> >> >
>>> >> >Please let me know
>>> >> >
>>> >> >Thank You For Your Help !
>>> >> >
>>> >> >
>>> >> >
>>> >> >
>>> >> >
>>> >> >On Sun, Jun 21, 2020 at 8:27 AM Strahil Nikolov
>>> >
>>> >> >wrote:
>>> >> >
>>> >> >> Sorry to hear that.
>>> >> >> I can say that for me 6.5 was working, while 6.6 didn't and I
>>> >> >> upgraded to 7.0 .
>>> >> >> In the end, I ended up creating a new fresh volume and
>>> >> >> physically copying the data there, then I detached the storage
>>> >> >> domains and attached the new ones (which held the old data),
>>> >> >> but I could afford the downtime.
>>> >> >> Also, I can say that v7.0 (but not 7.1 or anything later) also
>>> >> >> worked without the ACL issue, but it causes some trouble in
>>> >> >> oVirt - so avoid that unless you have no other options.
>>> >> >>
>>> >> >> Best Regards,
>>> >> >> Strahil  Nikolov
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> On 21 June 2020 at 4:39:46 GMT+03:00, C Williams
>>> >> >> wrote:
>>> >> >> >Hello,
>>> >> >> >
>>> >> >> >Upgrading didn't help
>>> >> >> >
>>> >> >> >Still acl errors trying to use a Virtual Disk from a VM
>>> >> >> >
>>> >> >> >[root@ov06 bricks]# tail bricks-brick04-images3.log | grep
>acl
>>> >> >> >[2020-06-21 01:33:45.665888] I [MSGID: 139001]
>>> >> >> >[posix-acl.c:263:posix_acl_log_permit_denied]
>>> >> >0-images3-access-control:
>>> >> >> >client:
>>> >> >>
>>> >> >>
>>> >>
>>> >>
>>>
>>>
CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
>>> >> >> >gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>>> >> >> >req(uid:107,gid:107,perm:1,ngrps:3),
>>> >> >> >ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID,
>acl:-)
>>> >> >> >[Permission denied]
>>> >> >> 

[ovirt-users] Re: oVirt install questions

2020-06-21 Thread Edward Berger
While oVirt can do what you would like concerning a single user
interface, with what you listed you're probably better off with plain
KVM/qemu and virt-manager for the interface.

Those memory/cpu requirements you listed are really tiny and I wouldn't
recommend even trying ovirt on such challenged systems.
I would specify at least 3 hosts for a gluster hyperconverged system, and a
spare available that can take over if one of the hosts dies.

I think a hosted engine installation VM wants 16GB RAM configured though
I've built older versions with 8GB RAM.
For modern VMs, CentOS 8 x86_64 recommends at least 2GB per guest. CentOS 7
was OK with 1GB, CentOS 6 maybe 512MB.
The tendency is always increasing with updated OS versions.

My minimum ovirt systems were mostly 48GB 16core, but most are now 128GB
24core or more.

oVirt Node NG is a prepackaged installer for an oVirt hypervisor/gluster
host; with its cockpit interface you can create and install the
hosted-engine VM for the user and admin web interface. It's very good on
enterprise server hardware with lots of RAM, CPU, and disks.

On Sun, Jun 21, 2020 at 4:34 PM David White via Users 
wrote:

> I'm reading through all of the documentation at
> https://ovirt.org/documentation/, and am a bit overwhelmed with all of
> the different options for installing oVirt.
>
> My particular use case is that I'm looking for a way to manage VMs on
> multiple physical servers from 1 interface, and be able to deploy new VMs
> (or delete VMs) as necessary. Ideally, it would be great if I could move a
> VM from 1 host to a different host as well, particularly in the event that
> 1 host becomes degraded (bad HDD, bad processor, etc...)
>
> I'm trying to figure out what the difference is between an oVirt Node and
> the oVirt Engine, and how the engine differs from the Manager.
>
> I get the feeling that `Engine` = `Manager`. Same thing. I further think I
> understand the Engine to be essentially synonymous with a vCenter VM for
> ESXi hosts. Is this correct?
>
> If so, then what's the difference between the `self-hosted` vs the
> `stand-alone` engines?
>
> oVirt Engine requirements look to be a minimum of 4GB RAM and 2CPUs.
> oVirt Nodes, on the other hand, require only 2GB RAM.
> Is this a requirement just for the physical host, or is that how much RAM
> each oVirt node process requires? In other words, if I have a physical
> host with 12GB of physical RAM, will I only be able to allocate 10GB of
> that to guest VMs? How much of that should I dedicate to the oVirt node
> processes?
>
> Can you install the oVirt Engine as a VM onto an existing oVirt Node? And
> then connect that same node to the Engine, once the Engine is installed?
>
> Reading through the documentation, it also sounds like oVirt Engine and
> oVirt Node require different versions of RHEL or CentOS.
> I read that the Engine for oVirt 4.4.0 requires RHEL (or CentOS) 8.2,
> whereas each Node requires 7.x (although I'll plan to just use the oVirt
> Node ISO).
>
> I'm also wondering about storage.
> I don't really like the idea of using local storage, but a single NFS
> server would also be a single point of failure, and Gluster would be too
> expensive to deploy, so at this point, I'm leaning towards using local
> storage.
>
> Any advice or clarity would be greatly appreciated.
>
> Thanks,
> David
>
>
> Sent with ProtonMail Secure Email.
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RGHCN356DXJEDR5FJ7SXSBHBF5FYRWIN/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FOUZFFDRVNISJNU4MMWBYY3NBPIPLQAM/


[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread C Williams
Strahil,

It sounds like you used a "System Managed Volume" for the new storage
domain, is that correct?

Thank You For Your Help !

On Sun, Jun 21, 2020 at 5:40 PM C Williams  wrote:

> Strahil,
>
> So you made another oVirt Storage Domain -- then copied the data with cp
> -a from the failed volume to the new volume.
>
> At the root of the volume there will be the old domain folder id, e.g.
> 5fe3ad3f-2d21-404c-832e-4dc7318ca10d
>  in my case. Did that cause issues with making the new domain since it is
> the same folder id as the old one ?
>
> Thank You For Your Help !
>
> On Sun, Jun 21, 2020 at 5:18 PM Strahil Nikolov 
> wrote:
>
>> In my situation I had  only the ovirt nodes.
>>
>> On 21 June 2020 at 22:43:04 GMT+03:00, C Williams
>> wrote:
>> >Strahil,
>> >
>> >So should I make the target volume on 3 bricks which do not have ovirt
>> >--
>> >just gluster ? In other words (3) CentOS 7 hosts ?
>> >
>> >Thank You For Your Help !
>> >
>> >On Sun, Jun 21, 2020 at 3:08 PM Strahil Nikolov 
>> >wrote:
>> >
>> >> I created a fresh volume (which is not an ovirt storage domain),
>> >> set the original storage domain in maintenance and detached it.
>> >> Then I 'cp -a' the data from the old to the new volume. Next, I
>> >> just added the new storage domain (the old one was a kind of a
>> >> 'backup') - pointing to the new volume name.
>> >>
>> >> If you observe issues, I would recommend you to downgrade gluster
>> >> packages one node at a time. Then you might be able to restore
>> >> your oVirt operations.
>> >>
>> >> Best  Regards,
>> >> Strahil  Nikolov
>> >>
>> >> On 21 June 2020 at 18:01:31 GMT+03:00, C Williams
>> >> wrote:
>> >> >Strahil,
>> >> >
>> >> >Thanks for the follow up !
>> >> >
>> >> >How did you copy the data to another volume ?
>> >> >
>> >> >I have set up another storage domain GLCLNEW1 with a new volume
>> >imgnew1
>> >> >.
>> >> >How would you copy all of the data from the problematic domain GLCL3
>> >> >with
>> >> >volume images3 to GLCLNEW1 and volume imgnew1 and preserve all the
>> >VMs,
>> >> >VM
>> >> >disks, settings, etc. ?
>> >> >
>> >> >Remember all of the regular ovirt disk copy, disk move, VM export
>> >> >tools
>> >> >are failing and my VMs and disks are trapped on domain GLCL3 and
>> >volume
>> >> >images3 right now.
>> >> >
>> >> >Please let me know
>> >> >
>> >> >Thank You For Your Help !
>> >> >
>> >> >
>> >> >
>> >> >
>> >> >
>> >> >On Sun, Jun 21, 2020 at 8:27 AM Strahil Nikolov
>> >
>> >> >wrote:
>> >> >
>> >> >> Sorry to hear that.
>> >> >> I can say that for me 6.5 was working, while 6.6 didn't and I
>> >> >> upgraded to 7.0 .
>> >> >> In the end, I ended up creating a new fresh volume and
>> >> >> physically copying the data there, then I detached the storage
>> >> >> domains and attached the new ones (which held the old data),
>> >> >> but I could afford the downtime.
>> >> >> Also, I can say that v7.0 (but not 7.1 or anything later) also
>> >> >> worked without the ACL issue, but it causes some trouble in
>> >> >> oVirt - so avoid that unless you have no other options.
>> >> >>
>> >> >> Best Regards,
>> >> >> Strahil  Nikolov
>> >> >>
>> >> >>
>> >> >>
>> >> >>
>> >> >> On 21 June 2020 at 4:39:46 GMT+03:00, C Williams
>> >> >> wrote:
>> >> >> >Hello,
>> >> >> >
>> >> >> >Upgrading didn't help
>> >> >> >
>> >> >> >Still acl errors trying to use a Virtual Disk from a VM
>> >> >> >
>> >> >> >[root@ov06 bricks]# tail bricks-brick04-images3.log | grep acl
>> >> >> >[2020-06-21 01:33:45.665888] I [MSGID: 139001]
>> >> >> >[posix-acl.c:263:posix_acl_log_permit_denied]
>> >> >0-images3-access-control:
>> >> >> >client:
>> >> >>
>> >> >>
>> >>
>> >>
>>
>> >>>CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
>> >> >> >gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>> >> >> >req(uid:107,gid:107,perm:1,ngrps:3),
>> >> >> >ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>> >> >> >[Permission denied]
>> >> >> >The message "I [MSGID: 139001]
>> >> >> >[posix-acl.c:263:posix_acl_log_permit_denied]
>> >> >0-images3-access-control:
>> >> >> >client:
>> >> >>
>> >> >>
>> >>
>> >>
>>
>> >>>CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
>> >> >> >gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>> >> >> >req(uid:107,gid:107,perm:1,ngrps:3),
>> >> >> >ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>> >> >> >[Permission denied]" repeated 2 times between [2020-06-21
>> >> >> >01:33:45.665888]
>> >> >> >and [2020-06-21 01:33:45.806779]
>> >> >> >
>> >> >> >Thank You For Your Help !
>> >> >> >
>> >> >> >On Sat, Jun 20, 2020 at 8:59 PM C Williams
>> >
>> >> >> >wrote:
>> >> >> >
>> >> >> >> Hello,
>> >> >> >>
>> >> >> >> Based on the situation, I am planning to upgrade the 3 

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread C Williams
Strahil,

So you made another oVirt Storage Domain -- then copied the data with cp -a
from the failed volume to the new volume.

At the root of the volume there will be the old domain folder id, e.g.
5fe3ad3f-2d21-404c-832e-4dc7318ca10d
 in my case. Did that cause issues with making the new domain since it is
the same folder id as the old one ?

Thank You For Your Help !

On Sun, Jun 21, 2020 at 5:18 PM Strahil Nikolov 
wrote:

> In my situation I had  only the ovirt nodes.
>
> On 21 June 2020 at 22:43:04 GMT+03:00, C Williams
> wrote:
> >Strahil,
> >
> >So should I make the target volume on 3 bricks which do not have ovirt
> >--
> >just gluster ? In other words (3) CentOS 7 hosts ?
> >
> >Thank You For Your Help !
> >
> >On Sun, Jun 21, 2020 at 3:08 PM Strahil Nikolov 
> >wrote:
> >
> >> I created a fresh volume (which is not an ovirt storage domain),
> >> set the original storage domain in maintenance and detached it.
> >> Then I 'cp -a' the data from the old to the new volume. Next, I
> >> just added the new storage domain (the old one was a kind of a
> >> 'backup') - pointing to the new volume name.
> >>
> >> If you observe issues, I would recommend you to downgrade gluster
> >> packages one node at a time. Then you might be able to restore
> >> your oVirt operations.
> >>
> >> Best  Regards,
> >> Strahil  Nikolov
> >>
> >> On 21 June 2020 at 18:01:31 GMT+03:00, C Williams
> >> wrote:
> >> >Strahil,
> >> >
> >> >Thanks for the follow up !
> >> >
> >> >How did you copy the data to another volume ?
> >> >
> >> >I have set up another storage domain GLCLNEW1 with a new volume
> >imgnew1
> >> >.
> >> >How would you copy all of the data from the problematic domain GLCL3
> >> >with
> >> >volume images3 to GLCLNEW1 and volume imgnew1 and preserve all the
> >VMs,
> >> >VM
> >> >disks, settings, etc. ?
> >> >
> >> >Remember all of the regular ovirt disk copy, disk move, VM export
> >> >tools
> >> >are failing and my VMs and disks are trapped on domain GLCL3 and
> >volume
> >> >images3 right now.
> >> >
> >> >Please let me know
> >> >
> >> >Thank You For Your Help !
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >On Sun, Jun 21, 2020 at 8:27 AM Strahil Nikolov
> >
> >> >wrote:
> >> >
> >> >> Sorry to hear that.
> >> >> I can say that for me 6.5 was working, while 6.6 didn't and I
> >> >> upgraded to 7.0 .
> >> >> In the end, I ended up creating a new fresh volume and
> >> >> physically copying the data there, then I detached the storage
> >> >> domains and attached the new ones (which held the old data),
> >> >> but I could afford the downtime.
> >> >> Also, I can say that v7.0 (but not 7.1 or anything later) also
> >> >> worked without the ACL issue, but it causes some trouble in
> >> >> oVirt - so avoid that unless you have no other options.
> >> >>
> >> >> Best Regards,
> >> >> Strahil  Nikolov
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> On 21 June 2020 at 4:39:46 GMT+03:00, C Williams
> >> >> wrote:
> >> >> >Hello,
> >> >> >
> >> >> >Upgrading didn't help
> >> >> >
> >> >> >Still acl errors trying to use a Virtual Disk from a VM
> >> >> >
> >> >> >[root@ov06 bricks]# tail bricks-brick04-images3.log | grep acl
> >> >> >[2020-06-21 01:33:45.665888] I [MSGID: 139001]
> >> >> >[posix-acl.c:263:posix_acl_log_permit_denied]
> >> >0-images3-access-control:
> >> >> >client:
> >> >>
> >> >>
> >>
> >>
>
> >>>CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
> >> >> >gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >> >> >req(uid:107,gid:107,perm:1,ngrps:3),
> >> >> >ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> >> >> >[Permission denied]
> >> >> >The message "I [MSGID: 139001]
> >> >> >[posix-acl.c:263:posix_acl_log_permit_denied]
> >> >0-images3-access-control:
> >> >> >client:
> >> >>
> >> >>
> >>
> >>
>
> >>>CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
> >> >> >gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >> >> >req(uid:107,gid:107,perm:1,ngrps:3),
> >> >> >ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> >> >> >[Permission denied]" repeated 2 times between [2020-06-21
> >> >> >01:33:45.665888]
> >> >> >and [2020-06-21 01:33:45.806779]
> >> >> >
> >> >> >Thank You For Your Help !
> >> >> >
> >> >> >On Sat, Jun 20, 2020 at 8:59 PM C Williams
> >
> >> >> >wrote:
> >> >> >
> >> >> >> Hello,
> >> >> >>
> >> >> >> Based on the situation, I am planning to upgrade the 3 affected
> >> >> >hosts.
> >> >> >>
> >> >> >> My reasoning is that the hosts/bricks were attached to 6.9 at
> >one
> >> >> >time.
> >> >> >>
> >> >> >> Thanks For Your Help !
> >> >> >>
> >> >> >> On Sat, Jun 20, 2020 at 8:38 PM C Williams
> >> >
> >> >> >> wrote:
> >> >> >>
> >> >> >>> Strahil,
> >> >> >>>
> >> >> >>> The gluster version on the 

[ovirt-users] Re: oVirt install questions

2020-06-21 Thread Strahil Nikolov via Users


On 21 June 2020 at 23:26:32 GMT+03:00, David White via Users
wrote:
>I'm reading through all of the documentation at
>https://ovirt.org/documentation/, and am a bit overwhelmed with all of
>the different options for installing oVirt. 
>
>My particular use case is that I'm looking for a way to manage VMs on
>multiple physical servers from 1 interface, and be able to deploy new
>VMs (or delete VMs) as necessary. Ideally, it would be great if I could
>move a VM from 1 host to a different host as well, particularly in the
>event that 1 host becomes degraded (bad HDD, bad processor, etc...)
>
>I'm trying to figure out what the difference is between an oVirt Node
>and the oVirt Engine, and how the engine differs from the Manager.
>
>I get the feeling that `Engine` = `Manager`. Same thing. I further
>think I understand the Engine to be essentially synonymous with a
>vCenter VM for ESXi hosts. Is this correct?
Generally speaking they are interchangeable, but usually the engine is the
daemon that is running inside the manager.
Correct, just like in VMware - you can have your vCenter in a VM on the
ESXi host or you can host it on a separate physical server.

>If so, then what's the difference between the `self-hosted` vs the
>`stand-alone` engines?
Self-hosted -> the manager is managing the host that is hosting it, while
standalone is on a non-managed location - like a standalone KVM VM, VMware VM
or a physical server.
>oVirt Engine requirements look to be a minimum of 4GB RAM and 2CPUs.
>oVirt Nodes, on the other hand, require only 2GB RAM.
>Is this a requirement just for the physical host, or is that how much
>RAM each oVirt node process requires? In other words, if I have a
>physical host with 12GB of physical RAM, will I only be able to
>allocate 10GB of that to guest VMs? How much of that should I dedicate
>to the oVirt node processes?

oVirt Node -> a kind of ready-to-go appliance that has only one purpose -
hosting VMs. It has the advantage that you can easily roll back updates.
Drawback - hard to customize (for example, custom drivers).
It will be nice to have as much memory as possible, but it depends on the
amount and type of VMs you plan to host on it.

>Can you install the oVirt Engine as a VM onto an existing oVirt Node?
>And then connect that same node to the Engine, once the Engine is
>installed?
That's what the Hosted-Engine deployment is doing for you. The easiest way
is to use the cockpit method.
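
For reference, the CLI route is a single command on the host (a sketch;
assumes the ovirt-hosted-engine-setup package from the oVirt repos):

yum install -y ovirt-hosted-engine-setup
hosted-engine --deploy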

>Reading through the documentation, it also sounds like oVirt Engine and
>oVirt Node require different versions of RHEL or CentOS.
>I read that the Engine for oVirt 4.4.0 requires RHEL (or CentOS) 8.2,
>whereas each Node requires 7.x (although I'll plan to just use the
>oVirt Node ISO).
oVirt 4.3 (node, engine) is based on CentOS/EL 7.x.
oVirt 4.4 (node, engine) is based on CentOS/EL 8.
oVirt 4.4 still needs some polishing, but keep in mind that migration from
4.3 to 4.4 requires a redeploy (a real reinstall).

>I'm also wondering about storage.
>I don't really like the idea of using local storage, but a single NFS
>server would also be a single point of failure, and Gluster would be
>too expensive to deploy, so at this point, I'm leaning towards using
>local storage.

 It's up to you.

>Any advice or clarity would be greatly appreciated.
>
>Thanks,
>David
>
>Sent with ProtonMail Secure Email.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3MKFQLYYN4IJJ53XDMOJNV6JA4M3D6IL/


[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread Strahil Nikolov via Users
In my situation I had  only the ovirt nodes.

On 21 June 2020 at 22:43:04 GMT+03:00, C Williams
wrote:
>Strahil,
>
>So should I make the target volume on 3 bricks which do not have ovirt
>--
>just gluster ? In other words (3) CentOS 7 hosts ?
>
>Thank You For Your Help !
>
>On Sun, Jun 21, 2020 at 3:08 PM Strahil Nikolov 
>wrote:
>
>> I created a fresh volume (which is not an ovirt storage domain),
>> set the original storage domain in maintenance and detached it.
>> Then I 'cp -a' the data from the old to the new volume. Next, I
>> just added the new storage domain (the old one was a kind of a
>> 'backup') - pointing to the new volume name.
>>
>> If you observe issues, I would recommend you to downgrade gluster
>> packages one node at a time. Then you might be able to restore
>> your oVirt operations.
>>
>> Best  Regards,
>> Strahil  Nikolov
>>
>> On 21 June 2020 at 18:01:31 GMT+03:00, C Williams
>> wrote:
>> >Strahil,
>> >
>> >Thanks for the follow up !
>> >
>> >How did you copy the data to another volume ?
>> >
>> >I have set up another storage domain GLCLNEW1 with a new volume
>imgnew1
>> >.
>> >How would you copy all of the data from the problematic domain GLCL3
>> >with
>> >volume images3 to GLCLNEW1 and volume imgnew1 and preserve all the
>VMs,
>> >VM
>> >disks, settings, etc. ?
>> >
>> >Remember all of the regular ovirt disk copy, disk move, VM export
>> >tools
>> >are failing and my VMs and disks are trapped on domain GLCL3 and
>volume
>> >images3 right now.
>> >
>> >Please let me know
>> >
>> >Thank You For Your Help !
>> >
>> >
>> >
>> >
>> >
>> >On Sun, Jun 21, 2020 at 8:27 AM Strahil Nikolov
>
>> >wrote:
>> >
>> >> Sorry to hear that.
>> >> I can say that for me 6.5 was working, while 6.6 didn't and I
>> >> upgraded to 7.0 .
>> >> In the end, I ended up creating a new fresh volume and
>> >> physically copying the data there, then I detached the storage
>> >> domains and attached the new ones (which held the old data),
>> >> but I could afford the downtime.
>> >> Also, I can say that v7.0 (but not 7.1 or anything later) also
>> >> worked without the ACL issue, but it causes some trouble in
>> >> oVirt - so avoid that unless you have no other options.
>> >>
>> >> Best Regards,
>> >> Strahil  Nikolov
>> >>
>> >>
>> >>
>> >>
>> >> On 21 June 2020 at 4:39:46 GMT+03:00, C Williams
>> >> wrote:
>> >> >Hello,
>> >> >
>> >> >Upgrading didn't help
>> >> >
>> >> >Still acl errors trying to use a Virtual Disk from a VM
>> >> >
>> >> >[root@ov06 bricks]# tail bricks-brick04-images3.log | grep acl
>> >> >[2020-06-21 01:33:45.665888] I [MSGID: 139001]
>> >> >[posix-acl.c:263:posix_acl_log_permit_denied]
>> >0-images3-access-control:
>> >> >client:
>> >>
>> >>
>>
>>
>>>CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
>> >> >gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>> >> >req(uid:107,gid:107,perm:1,ngrps:3),
>> >> >ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>> >> >[Permission denied]
>> >> >The message "I [MSGID: 139001]
>> >> >[posix-acl.c:263:posix_acl_log_permit_denied]
>> >0-images3-access-control:
>> >> >client:
>> >>
>> >>
>>
>>
>>>CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
>> >> >gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>> >> >req(uid:107,gid:107,perm:1,ngrps:3),
>> >> >ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>> >> >[Permission denied]" repeated 2 times between [2020-06-21
>> >> >01:33:45.665888]
>> >> >and [2020-06-21 01:33:45.806779]
>> >> >
>> >> >Thank You For Your Help !
>> >> >
>> >> >On Sat, Jun 20, 2020 at 8:59 PM C Williams
>
>> >> >wrote:
>> >> >
>> >> >> Hello,
>> >> >>
>> >> >> Based on the situation, I am planning to upgrade the 3 affected
>> >> >hosts.
>> >> >>
>> >> >> My reasoning is that the hosts/bricks were attached to 6.9 at
>one
>> >> >time.
>> >> >>
>> >> >> Thanks For Your Help !
>> >> >>
>> >> >> On Sat, Jun 20, 2020 at 8:38 PM C Williams
>> >
>> >> >> wrote:
>> >> >>
>> >> >>> Strahil,
>> >> >>>
>> >> >>> The gluster version on the current 3 gluster hosts is  6.7
>(last
>> >> >update
>> >> >>> 2/26). These 3 hosts provide 1 brick each for the replica 3
>> >volume.
>> >> >>>
>> >> >>> Earlier I had tried to add 6 additional hosts to the cluster.
>> >Those
>> >> >new
>> >> >>> hosts were 6.9 gluster.
>> >> >>>
>> >> >>> I attempted to make a new separate volume with 3 bricks
>provided
>> >by
>> >> >the 3
>> >> >>> new gluster  6.9 hosts. After having many errors from the
>oVirt
>> >> >interface,
>> >> >>> I gave up and removed the 6 new hosts from the cluster. That
>is
>> >> >where the
>> >> >>> problems started. The intent was to expand the gluster cluster
>> >while
>> >> >making
>> >> >>> 2 new volumes for that cluster. The ovirt compute cluster
>would
>> >> 

[ovirt-users] oVirt install questions

2020-06-21 Thread David White via Users
I'm reading through all of the documentation at 
https://ovirt.org/documentation/, and am a bit overwhelmed with all of the 
different options for installing oVirt. 

My particular use case is that I'm looking for a way to manage VMs on multiple 
physical servers from 1 interface, and be able to deploy new VMs (or delete 
VMs) as necessary. Ideally, it would be great if I could move a VM from 1 host 
to a different host as well, particularly in the event that 1 host becomes 
degraded (bad HDD, bad processor, etc...)

I'm trying to figure out what the difference is between an oVirt Node and the 
oVirt Engine, and how the engine differs from the Manager.

I get the feeling that `Engine` = `Manager`. Same thing. I further think I 
understand the Engine to be essentially synonymous with a vCenter VM for ESXi 
hosts. Is this correct?

If so, then what's the difference between the `self-hosted` vs the 
`stand-alone` engines?

oVirt Engine requirements look to be a minimum of 4GB RAM and 2CPUs.
oVirt Nodes, on the other hand, require only 2GB RAM.
Is this a requirement just for the physical host, or is that how much RAM
each oVirt node process requires? In other words, if I have a physical host
with 12GB of physical RAM, will I only be able to allocate 10GB of that to
guest VMs? How much of that should I dedicate to the oVirt node processes?

Can you install the oVirt Engine as a VM onto an existing oVirt Node? And then 
connect that same node to the Engine, once the Engine is installed?

Reading through the documentation, it also sounds like oVirt Engine and oVirt 
Node require different versions of RHEL or CentOS.
I read that the Engine for oVirt 4.4.0 requires RHEL (or CentOS) 8.2, whereas 
each Node requires 7.x (although I'll plan to just use the oVirt Node ISO).

I'm also wondering about storage.
I don't really like the idea of using local storage, but a single NFS server 
would also be a single point of failure, and Gluster would be too expensive to 
deploy, so at this point, I'm leaning towards using local storage.

Any advice or clarity would be greatly appreciated.

Thanks,
David

Sent with ProtonMail Secure Email.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RGHCN356DXJEDR5FJ7SXSBHBF5FYRWIN/


[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread C Williams
Strahil,

So should I make the target volume on 3 bricks which do not have ovirt --
just gluster ? In other words (3) CentOS 7 hosts ?

Thank You For Your Help !

On Sun, Jun 21, 2020 at 3:08 PM Strahil Nikolov 
wrote:

> I created a fresh volume (which is not an ovirt storage domain), set
> the original storage domain in maintenance and detached it.
> Then I 'cp -a' the data from the old to the new volume. Next, I just
> added the new storage domain (the old one was a kind of a
> 'backup') - pointing to the new volume name.
>
> If you observe issues, I would recommend you to downgrade gluster
> packages one node at a time. Then you might be able to restore
> your oVirt operations.
>
> Best  Regards,
> Strahil  Nikolov
>
> On 21 June 2020 at 18:01:31 GMT+03:00, C Williams
> wrote:
> >Strahil,
> >
> >Thanks for the follow up !
> >
> >How did you copy the data to another volume ?
> >
> >I have set up another storage domain GLCLNEW1 with a new volume imgnew1
> >.
> >How would you copy all of the data from the problematic domain GLCL3
> >with
> >volume images3 to GLCLNEW1 and volume imgnew1 and preserve all the VMs,
> >VM
> >disks, settings, etc. ?
> >
> >Remember all of the regular ovirt disk copy, disk move, VM export
> >tools
> >are failing and my VMs and disks are trapped on domain GLCL3 and volume
> >images3 right now.
> >
> >Please let me know
> >
> >Thank You For Your Help !
> >
> >
> >
> >
> >
> >On Sun, Jun 21, 2020 at 8:27 AM Strahil Nikolov 
> >wrote:
> >
> >> Sorry to hear that.
> >> I can say that for me 6.5 was working, while 6.6 didn't and I
> >> upgraded to 7.0 .
> >> In the end, I ended up creating a new fresh volume and
> >> physically copying the data there, then I detached the storage
> >> domains and attached the new ones (which held the old data), but I
> >> could afford the downtime.
> >> Also, I can say that v7.0 (but not 7.1 or anything later) also
> >> worked without the ACL issue, but it causes some trouble in oVirt -
> >> so avoid that unless you have no other options.
> >>
> >> Best Regards,
> >> Strahil  Nikolov
> >>
> >>
> >>
> >>
> >> On 21 June 2020 at 4:39:46 GMT+03:00, C Williams
> >> wrote:
> >> >Hello,
> >> >
> >> >Upgrading didn't help
> >> >
> >> >Still acl errors trying to use a Virtual Disk from a VM
> >> >
> >> >[root@ov06 bricks]# tail bricks-brick04-images3.log | grep acl
> >> >[2020-06-21 01:33:45.665888] I [MSGID: 139001]
> >> >[posix-acl.c:263:posix_acl_log_permit_denied]
> >0-images3-access-control:
> >> >client:
> >>
> >>
>
> >>CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
> >> >gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >> >req(uid:107,gid:107,perm:1,ngrps:3),
> >> >ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> >> >[Permission denied]
> >> >The message "I [MSGID: 139001]
> >> >[posix-acl.c:263:posix_acl_log_permit_denied]
> >0-images3-access-control:
> >> >client:
> >>
> >>
>
> >>CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
> >> >gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >> >req(uid:107,gid:107,perm:1,ngrps:3),
> >> >ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> >> >[Permission denied]" repeated 2 times between [2020-06-21
> >> >01:33:45.665888]
> >> >and [2020-06-21 01:33:45.806779]
> >> >
> >> >Thank You For Your Help !
> >> >
> >> >On Sat, Jun 20, 2020 at 8:59 PM C Williams 
> >> >wrote:
> >> >
> >> >> Hello,
> >> >>
> >> >> Based on the situation, I am planning to upgrade the 3 affected
> >> >hosts.
> >> >>
> >> >> My reasoning is that the hosts/bricks were attached to 6.9 at one
> >> >time.
> >> >>
> >> >> Thanks For Your Help !
> >> >>
> >> >> On Sat, Jun 20, 2020 at 8:38 PM C Williams
> >
> >> >> wrote:
> >> >>
> >> >>> Strahil,
> >> >>>
> >> >>> The gluster version on the current 3 gluster hosts is  6.7 (last
> >> >update
> >> >>> 2/26). These 3 hosts provide 1 brick each for the replica 3
> >volume.
> >> >>>
> >> >>> Earlier I had tried to add 6 additional hosts to the cluster.
> >Those
> >> >new
> >> >>> hosts were 6.9 gluster.
> >> >>>
> >> >>> I attempted to make a new separate volume with 3 bricks provided
> >by
> >> >the 3
> >> >>> new gluster  6.9 hosts. After having many errors from the oVirt
> >> >interface,
> >> >>> I gave up and removed the 6 new hosts from the cluster. That is
> >> >where the
> >> >>> problems started. The intent was to expand the gluster cluster
> >while
> >> >making
> >> >>> 2 new volumes for that cluster. The ovirt compute cluster would
> >> >allow for
> >> >>> efficient VM migration between 9 hosts -- while having separate
> >> >gluster
> >> >>> volumes for safety purposes.
> >> >>>
> >> >>> Looking at the brick logs, I see where there are acl errors
> >starting
> >> >from
> >> >>> the time of the removal of the 6 new 

[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread Strahil Nikolov via Users
I created a fresh volume (which is not an ovirt storage domain), set the
original storage domain in maintenance and detached it.
Then I 'cp -a' the data from the old to the new volume. Next, I just added
the new storage domain (the old one was a kind of a 'backup') - pointing to
the new volume name.

If you observe issues, I would recommend you to downgrade gluster packages
one node at a time. Then you might be able to restore your oVirt operations.

Best  Regards,
Strahil  Nikolov

On 21 June 2020 at 18:01:31 GMT+03:00, C Williams
wrote:
>Strahil,
>
>Thanks for the follow up !
>
>How did you copy the data to another volume ?
>
>I have set up another storage domain GLCLNEW1 with a new volume imgnew1
>.
>How would you copy all of the data from the problematic domain GLCL3
>with
>volume images3 to GLCLNEW1 and volume imgnew1 and preserve all the VMs,
>VM
>disks, settings, etc. ?
>
>Remember all of the regular ovirt disk copy, disk move, VM export 
>tools
>are failing and my VMs and disks are trapped on domain GLCL3 and volume
>images3 right now.
>
>Please let me know
>
>Thank You For Your Help !
>
>
>
>
>
>On Sun, Jun 21, 2020 at 8:27 AM Strahil Nikolov 
>wrote:
>
>> Sorry to hear that.
>> I can say that  for  me 6.5 was  working,  while 6.6 didn't and I
>upgraded
>> to 7.0 .
>> In the ended ,  I have ended  with creating a  new fresh volume and
>> physically copying the data there,  then I detached the storage
>domains and
>> attached  to the  new ones  (which holded the old  data),  but I
>could
>> afford  the downtime.
>> Also,  I can say that v7.0  (  but not 7.1  or anything later)  also
>> worked  without the ACL  issue,  but it causes some trouble  in oVirt
>- so
>> avoid that unless  you have no other options.
>>
>> Best Regards,
>> Strahil  Nikolov
>>
>>
>>
>>
>> On 21 June 2020 at 4:39:46 GMT+03:00, C Williams
>> wrote:
>> >Hello,
>> >
>> >Upgrading didn't help
>> >
>> >Still acl errors trying to use a Virtual Disk from a VM
>> >
>> >[root@ov06 bricks]# tail bricks-brick04-images3.log | grep acl
>> >[2020-06-21 01:33:45.665888] I [MSGID: 139001]
>> >[posix-acl.c:263:posix_acl_log_permit_denied]
>0-images3-access-control:
>> >client:
>>
>>
>>CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
>> >gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>> >req(uid:107,gid:107,perm:1,ngrps:3),
>> >ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>> >[Permission denied]
>> >The message "I [MSGID: 139001]
>> >[posix-acl.c:263:posix_acl_log_permit_denied]
>0-images3-access-control:
>> >client:
>>
>>
>>CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
>> >gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>> >req(uid:107,gid:107,perm:1,ngrps:3),
>> >ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>> >[Permission denied]" repeated 2 times between [2020-06-21
>> >01:33:45.665888]
>> >and [2020-06-21 01:33:45.806779]
>> >
>> >Thank You For Your Help !
>> >
>> >On Sat, Jun 20, 2020 at 8:59 PM C Williams 
>> >wrote:
>> >
>> >> Hello,
>> >>
>> >> Based on the situation, I am planning to upgrade the 3 affected
>> >hosts.
>> >>
>> >> My reasoning is that the hosts/bricks were attached to 6.9 at one
>> >time.
>> >>
>> >> Thanks For Your Help !
>> >>
>> >> On Sat, Jun 20, 2020 at 8:38 PM C Williams
>
>> >> wrote:
>> >>
>> >>> Strahil,
>> >>>
>> >>> The gluster version on the current 3 gluster hosts is  6.7 (last
>> >update
>> >>> 2/26). These 3 hosts provide 1 brick each for the replica 3
>volume.
>> >>>
>> >>> Earlier I had tried to add 6 additional hosts to the cluster.
>Those
>> >new
>> >>> hosts were 6.9 gluster.
>> >>>
>> >>> I attempted to make a new separate volume with 3 bricks provided
>by
>> >the 3
>> >>> new gluster  6.9 hosts. After having many errors from the oVirt
>> >interface,
>> >>> I gave up and removed the 6 new hosts from the cluster. That is
>> >where the
>> >>> problems started. The intent was to expand the gluster cluster
>while
>> >making
>> >>> 2 new volumes for that cluster. The ovirt compute cluster would
>> >allow for
>> >>> efficient VM migration between 9 hosts -- while having separate
>> >gluster
>> >>> volumes for safety purposes.
>> >>>
>> >>> Looking at the brick logs, I see where there are acl errors
>starting
>> >from
>> >>> the time of the removal of the 6 new hosts.
>> >>>
>> >>> Please check out the attached brick log from 6/14-18. The events
>> >started
>> >>> on 6/17.
>> >>>
>> >>> I wish I had a downgrade path.
>> >>>
>> >>> Thank You For The Help !!
>> >>>
>> >>> On Sat, Jun 20, 2020 at 7:47 PM Strahil Nikolov
>> >
>> >>> wrote:
>> >>>
>>  Hi ,
>> 
>> 
>>  This one really looks like the ACL bug I was hit with when I
>> >updated
>>  from Gluster v6.5 to 6.6 and later from 7.0 to 7.2.
>> 
>>  Did you update your setup recently ? Did 

[ovirt-users] Hosted engine deployment doesn't add the host(s) to the /etc/hosts engine, even if hostname doesn't get resolved by DNS server

2020-06-21 Thread Gilboa Davara
Hello,

Following the previous email, I think I'm hitting an odd problem, not
sure if it's my mistake or an actual bug.
1. Newly deployed 4.4 self-hosted engine on localhost NFS storage on a
single node.
2. Installation failed during the final phase with a non-descriptive
error message [1].
3. Log attached.
4. Even though the installation seemed to have failed, I managed to
connect to the ovirt console, and noticed it failed to connect to the
host.
5. SSHed into the hosted engine, and noticed it cannot resolve the host hostname.
6. Added the missing /etc/hosts entry, restarted the ovirt-engine
service, and all is green (see the example below this list).
7. Looking at the deployment log, I'm seeing the following message:
"[WARNING] Failed to resolve gilboa-wx-ovirt.localdomain using DNS, it
can be resolved only locally", which means the ansible was aware that
my DNS server doesn't resolve the host hostname, but it neither added the
missing /etc/hosts entry nor errored out.
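
For step 6, the fix was just a hosts line on the engine VM plus a service
restart, e.g. (the address here is hypothetical):

echo "192.168.1.50  gilboa-wx-ovirt.localdomain" >> /etc/hosts
systemctl restart ovirt-engine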

A. Is it a bug, or is it PEBKAC?
B. What are the chances that I have a working ovirt (test) setup?

- Gilboa

[1] [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts":
{"ovirt_vms": [{"affinity_labels": [], "applications": [], "bios":
{"boot_menu": {"enabled": false}, "type": "cluster_default"},
"cdroms": [], "cluster": {"href":
"/ovirt-engine/api/clusters/1ac7525a-b3d1-11ea-9c7a-00163e57d088",
"id": "1ac7525a-b3d1-11ea-9c7a-00163e57d088"}, "comment": "", "cpu":
{"architecture": "x86_64", "topology": {"cores": 1, "sockets": 4,
"threads": 1}}, "cpu_profile": {"href":
"/ovirt-engine/api/cpuprofiles/58ca604e-01a7-003f-01de-0250",
"id": "58ca604e-01a7-003f-01de-0250"}, "cpu_shares": 0,
"creation_time": "2020-06-21 11:15:08.207000-04:00",
"delete_protected": false, "description": "", "disk_attachments": [],
"display": {"address": "127.0.0.1", "allow_override": false,
"certificate": {"content": "-BEGIN
CERTIFICATE-\nMIID3jCCAsagAwIBAgICEAAwDQYJKoZIhvcNAQELBQAwUTELMAkGA1UEBhMCVVMxFDASBgNVBAoM\nC2xvY2FsZG9tYWluMSwwKgYDVQQDDCNnaWxib2Etd3gtdm1vdmlydC5sb2NhbGRvbWFpbi40MTE5\nMTAeFw0yMDA2MjAxNTA3MTFaFw0zMDA2MTkxNTA3MTFaMFExCzAJBgNVBAYTAlVTMRQwEgYDVQQK\nDAtsb2NhbGRvbWFpbjEsMCoGA1UEAwwjZ2lsYm9hLXd4LXZtb3ZpcnQubG9jYWxkb21haW4uNDEx\nOTEwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCUNgcCn28BMlMcadFZPR9JAWjOWyh0\nWMQffOSKUlr7H+6K02IdjCR5K9bR9moAlMA4dNzF/NJa12BlCmDkwOSsgZl+NK/Ut3kqfPp4CqMl\nU3jkJzqRnh0rqOFnQ4Q1tsejziH1MSiH5/eb4A3g2s0awXF6K+JRMp2MB9wYQx//tZrvhTLprK+Y\n9jXdQFZby8j+/9pqIdN7uoYbuqESRNcfIJ0WigJ10/IOAwloT0MASwyVtCRTCCXNE4PRN+Lexlcc\nxXq2QZ0zG8u3leLT6/J87PCP/OEj976fZ19q83stWjygu4+UiWS+QStlrzc1U+aGVxa+sO+9mv3f\n6CwT0clvAgMBAAGjgb8wgbwwHQYDVR0OBBYEFOiEmL8+rz3I4j5rmL+ws47Jv5KiMHoGA1UdIwRz\nMHGAFOiEmL8+rz3I4j5rmL+ws47Jv5KioVWkUzBRMQswCQYDVQQGEwJVUzEUMBIGA1UECgwLbG9j\nYWxkb21haW4xLDAqBgNVBAMMI2dpbGJvYS13eC12bW92aXJ0LmxvY2FsZG9tYWluLjQxMTkxggIQ\nADAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEAStVI\nhHRrw5aa3YUNcwYh+kQfS47Es12nNRFeVVzbXj9CLS/TloYjyXEyZvFmYyyjNvuj4/3WcQDfeaG6\nTUGoFJ1sleOMT04WYWNJGyvsOfokT+I7yrBsVMg/7vip8UQV0ttmVoY/kMhZufwAUNlsZyh6F2o2\nNpAAcdLoguHo3UCGyaL8pF4G0NOAR/eV1rpl4VikqehUsXZ1sYzYZfK98xXrmepI42Lt3B2L6f9t\ngzYJ99jsrOGFhgvgV0H+PclviIdz79Jj3ZpPhezHkNQyrp0GOM0rqW+9xy50tlCQJ4rjdrRxnr21\nGpD3ZaQ2KSwGU79pnnRT6m7MSQ8irci3/A==\n-END
CERTIFICATE-\n", "organization": "localdomain", "subject":
"O=localdomain,CN=gilboa-wx-ovirt.localdomain"}, "copy_paste_enabled":
true, "disconnect_action": "LOCK_SCREEN", "file_transfer_enabled":
true, "monitors": 1, "port": 5900, "single_qxl_pci": false,
"smartcard_enabled": false, "type": "vnc"}, "fqdn":
"gilboa-wx-vmovirt.localdomain", "graphics_consoles": [],
"guest_operating_system": {"architecture": "x86_64", "codename": "",
"distribution": "CentOS Linux", "family": "Linux", "kernel":
{"version": {"build": 0, "full_version":
"4.18.0-147.8.1.el8_1.x86_64", "major": 4, "minor": 18, "revision":
147}}, "version": {"full_version": "8", "major": 8}},
"guest_time_zone": {"name": "EDT", "utc_offset": "-04:00"},
"high_availability": {"enabled": false, "priority": 0}, "host":
{"href": "/ovirt-engine/api/hosts/5ca55132-6d20-4a7f-81a8-717095ba8f78",
"id": "5ca55132-6d20-4a7f-81a8-717095ba8f78"}, "host_devices": [],
"href": "/ovirt-engine/api/vms/60ba9f1a-cdb1-406e-810d-187dbdd7775c",
"id": "60ba9f1a-cdb1-406e-810d-187dbdd7775c", "io": {"threads": 1},
"katello_errata": [], "large_icon": {"href":
"/ovirt-engine/api/icons/a753f77a-89a4-4b57-9c23-d23bd61ebdaf", "id":
"a753f77a-89a4-4b57-9c23-d23bd61ebdaf"}, "memory": 8589934592,
"memory_policy": {"guaranteed": 8589934592, "max": 8589934592},
"migration": {"auto_converge": "inherit", "compressed": "inherit",
"encrypted": "inherit"}, "migration_downtime": -1,
"multi_queues_enabled": true, "name": "external-HostedEngineLocal",
"next_run_configuration_exists": false, "nics": [], "numa_nodes": [],
"numa_tune_mode": "interleave", "origin": "external",
"original_template": {"href":
"/ovirt-engine/api/templates/----",
"id": 

[ovirt-users] Re: Hosted engine deployment fails consistently when trying to download files.

2020-06-21 Thread Gilboa Davara
On Thu, Jun 18, 2020 at 2:54 PM Yedidyah Bar David  wrote:
>
> On Thu, Jun 18, 2020 at 2:37 PM Gilboa Davara  wrote:
> >
> > On Wed, Jun 17, 2020 at 12:35 PM Yedidyah Bar David  wrote:
> > > > However, when trying to install 4.4 on the test CentOS 8.x (now 8.2
> > > > after yesterday release), either manually (via hosted-engine --deploy)
> > > > or by using cockpit, fails when trying to download packages (see
> > > > attached logs) during the hosted engine deployment phase.
> > >
> > > Right. Didn't check them - I guess it's the same, no?
> >
> > Most likely you are correct. That said, the console version is more verbose.
> >
> >
> > > > Just to be clear, it is the hosted engine VM (during the deployment
> > > > process) that fails to automatically download packages, _not_ the
> > > > host.
> > >
> > > Exactly. That's why I asked you (because the logs do not reveal that)
> > > to manually login there and try to install (update) the package, and
> > > see what happens, why it fails, etc. Can you please try that? Thanks.
> >
> > Sadly enough, the failure comes early in the hosted engine deployment
> > process, making the VM completely inaccessible.
> > While I see qemu-kvm briefly start, it usually dies before I have any
> > chance to access it.
> >
> > Can I somehow prevent hosted-engine --deploy from destroying the
> > hosted engine VM, when the deployment fails, giving me access to it?
>
> This is how it should behave normally; it does not kill the VM.
> Perhaps check the logs and try to find who/what killed it.
>
> Anyway: Earlier today I pushed this patch:
>
> https://gerrit.ovirt.org/109730
>
> Didn't yet get to try verifying it. Would you like to try? You can get
> an RPM from the CI build linked there, or download the patch and apply
> it manually (in the "gitweb" link [1]).
>
> Then, you can do:
>
> hosted-engine --deploy --ansible-extra-vars=he_offline_deployment=true
>
> If you try this, please share the results. Thanks!
>
> [1] 
> https://gerrit.ovirt.org/gitweb?p=ovirt-hosted-engine-setup.git;a=commitdiff_plain;h=f77fa8b84ed6d8a74cbe56b95accb1e8131afbb5
>
> Best regards,
> --
> Didi
>

Good news. I managed to connect to the VM and solve the problem.

For some odd reason our primary DNS server had upstream connection
issues and all the requests were silently handled by our secondary DNS
server.
Not sure I understand why, but while the oVirt host did manage to
silently fail over to the secondary DNS server, the hosted engine, at
least during the initial deployment phase (when it still uses the
host's dnsmasq), did not fail over to the secondary DNS server, and
the deployment failed.
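
(For anyone debugging the same thing: a quick check from inside the engine
VM, assuming its console is reachable and bind-utils is installed; the
secondary DNS address is a placeholder, and any external hostname will do:)

cat /etc/resolv.conf                      # should point at the host's dnsmasq
dig +short mirrorlist.centos.org          # resolution through the forwarder
dig +short mirrorlist.centos.org @192.168.1.2   # query the secondary directly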
Once I fixed our primary DNS upstream connection issues, the installer
managed to download packages successfully (but failed once I
provisioned the storage, more on that in a different mail).

Many thanks, again, for taking the time to assist me!
(And hope it helps anyone facing the same issue in the future)

- Gilboa



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TZ2OIGKWMFKL3SQTCNFKLAXPMK3ES65B/


[ovirt-users] Re: Multiple Gluster ACL issues with oVirt

2020-06-21 Thread C Williams
Hello,

I look forward to hearing back about a fix for this !

Thank You All For Your Help !

On Sun, Jun 21, 2020 at 9:47 AM Sahina Bose  wrote:

> Thanks Strahil.
>
> Adding Sas and Ravi for their inputs.
>
> On Sun, 21 Jun 2020 at 6:11 PM, Strahil Nikolov 
> wrote:
>
>> Hello Sahina, Sandro,
>>
>> I have noticed that the ACL issue with Gluster (
>> https://github.com/gluster/glusterfs/issues/876) is happening to
>> multiple oVirt users (so far at least 5) and I think that this issue
>> needs greater attention.
>> Has anyone from the RHHI team managed to reproduce the bug?
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/AGCTENWEXPTDCNZFZDYRXPRZ2QF422SL/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z5ATYKPCJ3B4K7MEDORL76HE5ASX6PKA/


[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread C Williams
Strahil,

Thanks for the follow up !

How did you copy the data to another volume ?

I have set up another storage domain GLCLNEW1 with a new volume imgnew1 .
How would you copy all of the data from the problematic domain GLCL3 with
volume images3 to GLCLNEW1 and volume imgnew1 and preserve all the VMs, VM
disks, settings, etc. ?

Remember, all of the regular oVirt disk copy, disk move, and VM export
tools are failing, and my VMs and disks are trapped on domain GLCL3 and
volume images3 right now.

Please let me know

Thank You For Your Help !
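
For reference, a minimal sketch of the copy procedure Strahil describes
below, using the volume names from this thread (the hostname is a
placeholder; it assumes the target volume is plain gluster storage that is
not yet an oVirt storage domain, the old domain is in maintenance and
detached, and all of its VMs are down):

mount -t glusterfs gluster1:/images3 /mnt/oldvol
mount -t glusterfs gluster1:/imgnew1 /mnt/newvol
cp -a /mnt/oldvol/* /mnt/newvol/   # -a preserves ownership, permissions, timestamps
# then add imgnew1 as a new storage domain in oVirt and import the VMs from it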





On Sun, Jun 21, 2020 at 8:27 AM Strahil Nikolov 
wrote:

> Sorry to hear that.
> I can say that for me 6.5 was working, while 6.6 didn't, and I upgraded
> to 7.0.
> In the end, I ended up creating a fresh new volume and physically
> copying the data there, then I detached the storage domains and
> attached the new ones (which held the old data), but I could
> afford the downtime.
> Also, I can say that v7.0 (but not 7.1 or anything later) also
> worked without the ACL issue, but it causes some trouble in oVirt - so
> avoid that unless you have no other options.
>
> Best Regards,
> Strahil  Nikolov
>
>
>
>
> On 21 June 2020, 4:39:46 GMT+03:00, C Williams wrote:
> >Hello,
> >
> >Upgrading didn't help
> >
> >Still getting acl errors when trying to use a Virtual Disk from a VM
> >
> >[root@ov06 bricks]# tail bricks-brick04-images3.log | grep acl
> >[2020-06-21 01:33:45.665888] I [MSGID: 139001]
> >[posix-acl.c:263:posix_acl_log_permit_denied] 0-images3-access-control:
> >client:
>
> >CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
> >gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >req(uid:107,gid:107,perm:1,ngrps:3),
> >ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> >[Permission denied]
> >The message "I [MSGID: 139001]
> >[posix-acl.c:263:posix_acl_log_permit_denied] 0-images3-access-control:
> >client:
>
> >CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
> >gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
> >req(uid:107,gid:107,perm:1,ngrps:3),
> >ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
> >[Permission denied]" repeated 2 times between [2020-06-21
> >01:33:45.665888]
> >and [2020-06-21 01:33:45.806779]
> >
> >Thank You For Your Help !
> >
> >On Sat, Jun 20, 2020 at 8:59 PM C Williams 
> >wrote:
> >
> >> Hello,
> >>
> >> Based on the situation, I am planning to upgrade the 3 affected
> >hosts.
> >>
> >> My reasoning is that the hosts/bricks were attached to 6.9 at one
> >time.
> >>
> >> Thanks For Your Help !
> >>
> >> On Sat, Jun 20, 2020 at 8:38 PM C Williams 
> >> wrote:
> >>
> >>> Strahil,
> >>>
> >>> The gluster version on the current 3 gluster hosts is  6.7 (last
> >update
> >>> 2/26). These 3 hosts provide 1 brick each for the replica 3 volume.
> >>>
> >>> Earlier I had tried to add 6 additional hosts to the cluster. Those
> >new
> >>> hosts were 6.9 gluster.
> >>>
> >>> I attempted to make a new separate volume with 3 bricks provided by
> >the 3
> >>> new gluster  6.9 hosts. After having many errors from the oVirt
> >interface,
> >>> I gave up and removed the 6 new hosts from the cluster. That is
> >where the
> >>> problems started. The intent was to expand the gluster cluster while
> >making
> >>> 2 new volumes for that cluster. The ovirt compute cluster would
> >allow for
> >>> efficient VM migration between 9 hosts -- while having separate
> >gluster
> >>> volumes for safety purposes.
> >>>
> >>> Looking at the brick logs, I see where there are acl errors starting
> >from
> >>> the time of the removal of the 6 new hosts.
> >>>
> >>> Please check out the attached brick log from 6/14-18. The events
> >started
> >>> on 6/17.
> >>>
> >>> I wish I had a downgrade path.
> >>>
> >>> Thank You For The Help !!
> >>>
> >>> On Sat, Jun 20, 2020 at 7:47 PM Strahil Nikolov
> >
> >>> wrote:
> >>>
>  Hi ,
> 
> 
>  This one really looks like the ACL bug I was hit with when I
> >updated
>  from Gluster v6.5 to 6.6 and later from 7.0 to 7.2.
> 
>  Did you update your setup recently ? Did you upgrade gluster also ?
> 
>  You have to check the gluster logs in order to verify that, so you
> >can
>  try:
> 
>  1. Set Gluster logs to trace level (for details check:
> 
> >
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/configuring_the_log_level
>  )
>  2. Power up a VM that was already off , or retry the procedure from
> >the
>  logs you sent.
>  3. Stop the trace level of the logs
>  4. Check libvirt logs on the host that was supposed to power up the
> >VM
>  (in case a VM was powered on)
>  5. Check the gluster brick logs on all nodes for ACL errors.
>  Here is a sample from my old logs:
> 
>  
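
A minimal sketch of steps 1, 3 and 5 above, assuming the affected volume is
named images3 as elsewhere in this thread (run the volume-set commands on
any one node):

gluster volume set images3 diagnostics.brick-log-level TRACE
gluster volume set images3 diagnostics.client-log-level TRACE
# reproduce the failure (e.g. power up the VM), then restore the defaults:
gluster volume set images3 diagnostics.brick-log-level INFO
gluster volume set images3 diagnostics.client-log-level INFO
# check the brick logs on every node for ACL denials:
grep posix_acl_log_permit_denied /var/log/glusterfs/bricks/*.log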

[ovirt-users] Re: Multiple Gluster ACL issues with oVirt

2020-06-21 Thread Sahina Bose
Thanks Strahil.

Adding Sas and Ravi for their inputs.

On Sun, 21 Jun 2020 at 6:11 PM, Strahil Nikolov 
wrote:

> Hello Sahina, Sandro,
>
> I have noticed that the ACL issue with Gluster (
> https://github.com/gluster/glusterfs/issues/876) is happening to
> multiple oVirt users (so far at least 5) and I think that this issue
> needs greater attention.
> Has anyone from the RHHI team managed to reproduce the bug?
>
> Best Regards,
> Strahil Nikolov
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AGCTENWEXPTDCNZFZDYRXPRZ2QF422SL/


[ovirt-users] Multiple Gluster ACL issues with oVirt

2020-06-21 Thread Strahil Nikolov via Users
Hello Sahina, Sandro,

I have noticed that the ACL issue with Gluster 
(https://github.com/gluster/glusterfs/issues/876) is happening to multiple
oVirt users (so far at least 5) and I think that this issue needs greater
attention.
Has anyone from the RHHI team managed to reproduce the bug?

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7OTGSH25GAB4HTSMFEME3UZ6VC65MU2E/


[ovirt-users] Re: Fwd: Fwd: Issues with Gluster Domain

2020-06-21 Thread Strahil Nikolov via Users
Sorry to hear that.
I can say that for me 6.5 was working, while 6.6 didn't, and I upgraded to
7.0.
In the end, I ended up creating a fresh new volume and physically copying
the data there, then I detached the storage domains and attached the new
ones (which held the old data), but I could afford the downtime.
Also, I can say that v7.0 (but not 7.1 or anything later) also worked
without the ACL issue, but it causes some trouble in oVirt - so avoid that
unless you have no other options.

Best Regards,
Strahil  Nikolov




On 21 June 2020, 4:39:46 GMT+03:00, C Williams wrote:
>Hello,
>
>Upgrading didn't help
>
>Still getting acl errors when trying to use a Virtual Disk from a VM
>
>[root@ov06 bricks]# tail bricks-brick04-images3.log | grep acl
>[2020-06-21 01:33:45.665888] I [MSGID: 139001]
>[posix-acl.c:263:posix_acl_log_permit_denied] 0-images3-access-control:
>client:
>CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
>gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>req(uid:107,gid:107,perm:1,ngrps:3),
>ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>[Permission denied]
>The message "I [MSGID: 139001]
>[posix-acl.c:263:posix_acl_log_permit_denied] 0-images3-access-control:
>client:
>CTX_ID:3697a7f1-44fb-4258-96b0-98cb4137d195-GRAPH_ID:0-PID:6706-HOST:ov06.ntc.srcle.com-PC_NAME:images3-client-0-RECON_NO:-0,
>gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
>req(uid:107,gid:107,perm:1,ngrps:3),
>ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
>[Permission denied]" repeated 2 times between [2020-06-21
>01:33:45.665888]
>and [2020-06-21 01:33:45.806779]
>
>Thank You For Your Help !
>
>On Sat, Jun 20, 2020 at 8:59 PM C Williams 
>wrote:
>
>> Hello,
>>
>> Based on the situation, I am planning to upgrade the 3 affected
>hosts.
>>
>> My reasoning is that the hosts/bricks were attached to 6.9 at one
>time.
>>
>> Thanks For Your Help !
>>
>> On Sat, Jun 20, 2020 at 8:38 PM C Williams 
>> wrote:
>>
>>> Strahil,
>>>
>>> The gluster version on the current 3 gluster hosts is  6.7 (last
>update
>>> 2/26). These 3 hosts provide 1 brick each for the replica 3 volume.
>>>
>>> Earlier I had tried to add 6 additional hosts to the cluster. Those
>new
>>> hosts were 6.9 gluster.
>>>
>>> I attempted to make a new separate volume with 3 bricks provided by
>the 3
>>> new gluster  6.9 hosts. After having many errors from the oVirt
>interface,
>>> I gave up and removed the 6 new hosts from the cluster. That is
>where the
>>> problems started. The intent was to expand the gluster cluster while
>making
>>> 2 new volumes for that cluster. The ovirt compute cluster would
>allow for
>>> efficient VM migration between 9 hosts -- while having separate
>gluster
>>> volumes for safety purposes.
>>>
>>> Looking at the brick logs, I see where there are acl errors starting
>from
>>> the time of the removal of the 6 new hosts.
>>>
>>> Please check out the attached brick log from 6/14-18. The events
>started
>>> on 6/17.
>>>
>>> I wish I had a downgrade path.
>>>
>>> Thank You For The Help !!
>>>
>>> On Sat, Jun 20, 2020 at 7:47 PM Strahil Nikolov
>
>>> wrote:
>>>
 Hi ,


 This one really looks like the ACL bug I was hit with when I
>updated
 from Gluster v6.5 to 6.6 and later from 7.0 to 7.2.

 Did you update your setup recently ? Did you upgrade gluster also ?

 You have to check the gluster logs in order to verify that, so you
>can
 try:

 1. Set Gluster logs to trace level (for details check:

>https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/configuring_the_log_level
 )
 2. Power up a VM that was already off , or retry the procedure from
>the
 logs you sent.
 3. Stop the trace level of the logs
 4. Check libvirt logs on the host that was supposed to power up the
>VM
 (in case a VM was powered on)
 5. Check the gluster brick logs on all nodes for ACL errors.
 Here is a sample from my old logs:

 gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
>13:19:41.489047] I
 [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-

>4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
 gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
 req(uid:36,gid:36,perm:1,ngrps:3), ctx
 (uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-)
 [Permission denied]
 gluster_bricks-data_fast4-data_fast4.log:[2020-03-18
>13:22:51.818796] I
 [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied]
 0-data_fast4-access-control: client: CTX_ID:4a654305-d2e4-

>4a10-bad9-58d670d99a97-GRAPH_ID:0-PID:32412-HOST:ovirt1.localdomain-PC_NAME:data_fast4-client-0-RECON_NO:-19,
 gfid: be318638-e8a0-4c6d-977d-7a937aa84806,
 

[ovirt-users] Re: Adding iSCSI Target IP Address to Existing Storage Data Domain

2020-06-21 Thread Eyal Shenitzky
Hi,

We have an open RFE for editing the IP address of an existing iSCSI storage
domain.

Currently, you should detach and re-add the domain.
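
(For the "simply logging in" experiment mentioned below, the manual steps on
a host would look roughly like this; the portal address and target IQN are
placeholders. Note that this only creates the extra session on the host - it
does not update the connection details the engine stores for the domain,
which is why detach/re-add is the suggested route.)

iscsiadm -m discovery -t sendtargets -p 10.0.0.2:3260
iscsiadm -m node -T iqn.2000-01.com.example:storage -p 10.0.0.2:3260 --login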

On Sun, 21 Jun 2020 at 09:14,  wrote:

> Hi!
>
> Do you have any suggestions on how we can add a new iSCSI target IP address
> to an existing Storage Data Domain?
>
> Earlier, we had an issue where the storage device unexpectedly rebooted.
> It has 3 IP addresses used for iSCSI connections.
> For oVirt, we're connected to that storage device using 1 iSCSI Target IP
> address. The problem is that the adapters are down for that IP.
>
> What we're trying to do is to add the other IP addresses to connect to the
> LUNs/devices.
> Do you think simply logging in to the target will help?
>
> I've checked previous threads, and it was suggested that the storage data
> domain should be detached first and then re-added using the new IP address.
>
> Thank you very much
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OK3GZJDDZ5G4DTLPG6QUA3AQELHNEGVR/
>


-- 
Regards,
Eyal Shenitzky
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ETRMKRZN3KTXAPI5NYC42SYAZWD3FE2J/


[ovirt-users] Adding iSCSI Target IP Address to Existing Storage Data Domain

2020-06-21 Thread jrbdeguzman05
Hi!

Do you have any suggestions on how we can add a new iSCSI target IP address
to an existing Storage Data Domain?

Earlier, we had an issue where the storage device unexpectedly rebooted. It has 
3 IP addresses used for iSCSI connections.
For oVirt, we're connected to that storage device using 1 iSCSI Target IP 
address. The problem is that the adapters are down for that IP.

What we're trying to do is to add the other IP addresses to connect to the 
LUNs/devices.
Do you think simply logging in to the target will help?

I've checked previous threads, and it was suggested that the storage data
domain should be detached first and then re-added using the new IP address.

Thank you very much
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OK3GZJDDZ5G4DTLPG6QUA3AQELHNEGVR/