[ovirt-users] Re: oVirt and the future

2021-08-11 Thread Yedidyah Bar David
On Wed, Aug 11, 2021 at 10:03 PM  wrote:
>
> Hi all,
>  I'm looking for some information about the future of oVirt. With CentOS 
> going away, have they talked about what they will be doing or moving to? I'd 
> like to see Ubuntu support.

I suggest searching the archives of this list - there have been multiple
relevant discussions here in recent months. See e.g. the recent thread
"[ovirt-users] upgrading to 4.4.6 with Rocky Linux 8" [0].

oVirt, as a project, moved development to CentOS Stream 8 starting
with (IIRC) version 4.4.6.

There are thoughts about moving to CentOS Stream 9, see e.g. [1].

See each version's release notes page for details about that
version, e.g. [2], which says:

"This release is available now for Red Hat Enterprise Linux 8.4 (or
similar) and CentOS Stream."

This means, in practice, that if you want to use e.g. AlmaLinux OS or
Rocky Linux, you have to wait until they rebuild the sources of RHEL
8.4 (which might already have happened by now - just explaining the process).

Some years ago there was some work on Debian support, which AFAIU
never matured and was eventually neglected.

I personally think that adding support for Debian is a great idea, but
am not aware of anyone working on this. Contributors are welcome!

Best regards,

[0] 
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/IBFLZG2TFIOSO2LWHAQYIVCC2Z6TWJG5/

[1] 
https://www.ovirt.org/develop/release-management/features/integration/centos-9-stream-support.html

[2] https://www.ovirt.org/release/4.4.7/





--
Didi


[ovirt-users] oVirt and the future

2021-08-11 Thread thilburn
Hi all,
 I'm looking for some information about the future of oVirt. With CentOS going 
away, have they talked about what they will be doing or moving to? I'd like to 
see Ubuntu support.


[ovirt-users] Automigration of VMs from other hypervisors

2021-08-11 Thread KK CHN
Hi list,

I am in the process of migrating 150+ VMs running on RHV-M 4.1 to a KVM-based
OpenStack installation (Ussuri, with KVM and Glance as image storage).

What I am doing now: manually shutting down each VM through the RHV-M GUI,
exporting it to the export domain, scp-ing the image files of each VM to our
OpenStack controller node, uploading them to Glance, and creating each VM
manually.
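
Roughly, per VM, the steps look like this (paths, image names and the flavor
below are just examples, assuming the exported image is qcow2 and the
openstack CLI is configured on the controller):

  # copy the exported disk image from the export domain to the controller
  scp /path/to/export-domain/images/vm01-disk.qcow2 root@controller:/var/tmp/

  # on the controller: convert to raw and upload to Glance
  qemu-img convert -p -O raw /var/tmp/vm01-disk.qcow2 /var/tmp/vm01-disk.raw
  openstack image create --disk-format raw --container-format bare \
      --file /var/tmp/vm01-disk.raw vm01-image

  # create the instance from the image
  openstack server create --image vm01-image --flavor m1.medium vm01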

Query 1:
Is there a better way to automate this migration with any utility or scripts?
Has anyone done this kind of automated migration before, and what was your
approach? Or what is a better approach instead of doing a manual migration?

Or do I have to repeat the process manually for all 150+ virtual
machines? (The guest VMs are CentOS 7 and Red Hat Linux 7 with LVM data
partitions attached.)

Kindly share your thoughts..

Query 2:

Other than these 150+ Red Hat Linux 7 and CentOS VMs on RHV-M 4.1, I
have to migrate 50+ VMs which are hosted on Hyper-V.

What is the method/approach for exporting from Hyper-V and importing into
OpenStack (Ussuri, with Glance and the KVM hypervisor)? (This is the first
time I am going to use Hyper-V; I don't have much idea about exporting from
Hyper-V and importing to KVM.)

Can the images exported from Hyper-V (VHDX disk images, from VMs with a
single disk or multiple disks (max 3)) be directly imported into KVM? Does
KVM support this, or do the VHDX disk images need to be converted to another
format? What should the best approach be for Hyper-V hosted VMs (Windows
2012 guest machines and Linux guest machines) to be imported into KVM-based
OpenStack (Ussuri, with Glance as image storage)?
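
From what I have read, qemu-img can read VHDX directly, so a conversion
sketch might look like this (file names are examples; please correct me if
this is wrong):

  # inspect the disk exported from Hyper-V
  qemu-img info vm-disk.vhdx

  # convert VHDX to qcow2 (or raw) for KVM/Glance; repeat per disk for
  # multi-disk VMs
  qemu-img convert -p -f vhdx -O qcow2 vm-disk.vhdx vm-disk.qcow2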

Thanks in advance

Kris


[ovirt-users] Re: Sparse VMs from Templates - Storage issues

2021-08-11 Thread Nir Soffer
On Wed, Aug 11, 2021 at 4:24 PM Arik Hadas  wrote:
>
>
>
> On Wed, Aug 11, 2021 at 2:56 PM Benny Zlotnik  wrote:
>>
>> > If your vm is temporary and you like to drop the data written while
>> > the vm is running, you
>> > could use a temporary disk based on the template. This is called a
>> > "transient disk" in vdsm.
>> >
>> > Arik, maybe you remember how transient disks are used in engine?
>> > Do we have an API to run a VM once, dropping the changes to the disk
>> > done while the VM was running?
>>
>> I think that's how stateless VMs work
>
>
> +1
> It doesn't work exactly like Nir wrote above - stateless VMs that are 
> thin-provisioned would have a qcow volume on top of each template's volume 
> and when they start, their active volume would be a qcow volume on top of 
> the aforementioned qcow volume, and that active volume will be removed when 
> the VM goes down.
> But yeah, stateless VMs are intended for such a use case.

I was referring to transient disks - created in vdsm:
https://github.com/oVirt/vdsm/blob/45903d01e142047093bf844628b5d90df12b6ffb/lib/vdsm/virt/vm.py#L3789

This creates a *local* temporary file using qcow2 format, using the
disk on shared
storage as a backing file.

Maybe this is not used by engine?
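
Conceptually it is similar to doing this manually (illustrative paths, not
the exact code vdsm runs; recent qemu-img needs -F for the backing format):

  # local temporary qcow2 overlay on top of the disk on shared storage
  qemu-img create -f qcow2 \
      -b /rhev/data-center/mnt/server:_path/sd-id/images/img-id/vol-id \
      -F qcow2 /var/lib/vdsm/transient/vm-disk.qcow2

All writes go to the local file, which is simply deleted when the VM
goes down.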


[ovirt-users] Re: Sparse VMs from Templates - Storage issues

2021-08-11 Thread Nir Soffer
On Wed, Aug 11, 2021 at 3:13 PM Shantur Rathore
 wrote:
>
>
>> Yes, on file based storage a snapshot is a file, and it grows as
>> needed.  On block based
>> storage, a snapshot is a logical volume, and oVirt needs to extend it
>> when needed.
>
>
> Forgive my ignorance - I come from a vSphere background, where a filesystem 
> was created on the iSCSI LUN.
> I take it that this isn't the case for an iSCSI Storage Domain in oVirt.

Yes, for block storage we create an LVM volume group (VG) from one or more
LUNs to create a storage domain. Disks are created as LVM logical volumes
on this VG.

When you create a VM from a template on block storage, we create a new 1g
logical volume for the VM disk, and create a qcow2 image on this logical
volume with the template's logical volume as the backing file.

The logical volume needs to be extended when free space is low. This is done
automatically on the host running the VM, but since oVirt is not in the data
path, the VM may write data too fast and pause when trying to write past the
end of the logical volume. In this case the VM will be resumed when oVirt
finishes extending the volume.
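
Schematically, it looks like this (illustrative names only, not the exact
commands vdsm runs):

  # storage domain = VG built from the LUN(s)
  vgcreate sd-uuid /dev/mapper/lun-id

  # vm disk = small logical volume, formatted as qcow2 with the template
  # volume as the backing file
  lvcreate -L 1g -n vol-uuid sd-uuid
  qemu-img create -f qcow2 -b /dev/sd-uuid/template-vol -F qcow2 \
      /dev/sd-uuid/vol-uuid

  # when free space runs low, the host running the VM extends the LV
  lvextend -L +1g /dev/sd-uuid/vol-uuid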

> On Wed, Aug 11, 2021 at 12:26 PM Nir Soffer  wrote:
>>
>> On Wed, Aug 11, 2021 at 12:43 AM Shantur Rathore
>>  wrote:
>> >
>> > Thanks for the detailed response Nir.
>> >
>> > In my use case, we keep creating VMs from templates and deleting them so 
>> > we need the VMs to be created quickly and cloning it will use a lot of 
>> > time and storage.
>>
>> That's a good reason to use a template.
>>
>> If your vm is temporary and you like to drop the data written while
>> the vm is running, you
>> could use a temporary disk based on the template. This is called a
>> "transient disk" in vdsm.
>>
>> Arik, maybe you remember how transient disks are used in engine?
>> Do we have an API to run a VM once, dropping the changes to the disk
>> done while the VM was running?
>>
>> > I will try to add the config and try again tomorrow. Also I like the 
>> > Managed Block storage idea, I had read about it in the past and used it 
>> > with Ceph.
>> >
>> > Just to understand it better, is this issue only on iSCSI based storage?
>>
>> Yes, on file based storage a snapshot is a file, and it grows as
>> needed.  On block based
>> storage, a snapshot is a logical volume, and oVirt needs to extend it
>> when needed.
>>
>> Nir
>>
>> > Thanks again.
>> >
>> > Regards
>> > Shantur
>> >
>> > On Tue, Aug 10, 2021 at 9:26 PM Nir Soffer  wrote:
>> >>
>> >> On Tue, Aug 10, 2021 at 4:24 PM Shantur Rathore
>> >>  wrote:
>> >> >
>> >> > Hi all,
>> >> >
>> >> > I have a setup as detailed below
>> >> >
>> >> > - iSCSI Storage Domain
>> >> > - Template with Thin QCOW2 disk
>> >> > - Multiple VMs from Template with Thin disk
>> >>
>> >> Note that a single template disk used by many vms can become a performance
>> >> bottleneck, and is a single point of failure. Cloning the template when 
>> >> creating
>> >> vms avoids such issues.
>> >>
>> >> > oVirt Node 4.4.4
>> >>
>> >> 4.4.4 is old, you should upgrade to 4.4.7.
>> >>
>> >> > When the VMs boots up it downloads some data to it and that leads to 
>> >> > increase in volume size.
>> >> > I see that every few seconds the VM gets paused with
>> >> >
>> >> > "VM X has been paused due to no Storage space error."
>> >> >
>> >> >  and then after few seconds
>> >> >
>> >> > "VM X has recovered from paused back to up"
>> >>
>> >> This is normal operation when a vm writes too quickly and oVirt cannot
>> >> extend the disk quick enough. To mitigate this, you can increase the
>> >> volume chunk size.
>> >>
>> >> Create this configuration drop-in file:
>> >>
>> >> # cat /etc/vdsm/vdsm.conf.d/99-local.conf
>> >> [irs]
>> >> volume_utilization_percent = 25
>> >> volume_utilization_chunk_mb = 2048
>> >>
>> >> And restart vdsm.
>> >>
>> >> With this setting, when free space in a disk is 1.5g, the disk will
>> >> be extended by 2g. With the default setting, when free space is
>> >> 0.5g the disk was extended by 1g.
>> >>
>> >> If this does not eliminate the pauses, try a larger chunk size
>> >> like 4096.
>> >>
>> >> > Sometimes after many pauses and recoveries the VM dies with
>> >> >
>> >> > "VM X is down with error. Exit message: Lost connection with qemu 
>> >> > process."
>> >>
>> >> This means qemu has crashed. You can find more info in the vm log at:
>> >> /var/log/libvirt/qemu/vm-name.log
>> >>
>> >> We know about bugs in qemu that cause such crashes when vm disk is
>> >> extended. I think the latest bug was fixed in 4.4.6, so upgrading to 4.4.7
>> >> will fix this issue.
>> >>
>> >> Even with these settings, if you have a very bursty io in the vm, it may
>> >> become paused. The only way to completely avoid these pauses is to
>> >> use a preallocated disk, or use file storage (e.g. NFS). Preallocated disk
>> >> can be thin provisioned on the server side so it does not mean you need
>> >> more storage, but you will not be able to use shared templates in the way
>> >> you use them now. You 

[ovirt-users] Re: Sparse VMs from Templates - Storage issues

2021-08-11 Thread Arik Hadas
On Wed, Aug 11, 2021 at 2:56 PM Benny Zlotnik  wrote:

> > If your vm is temporary and you like to drop the data written while
> > the vm is running, you
> > could use a temporary disk based on the template. This is called a
> > "transient disk" in vdsm.
> >
> > Arik, maybe you remember how transient disks are used in engine?
> > Do we have an API to run a VM once, dropping the changes to the disk
> > done while the VM was running?
>
> I think that's how stateless VMs work
>

+1
It doesn't work exactly like Nir wrote above - stateless VMs that are
thin-provisioned would have a qcow volume on top of each template's volume,
and when they start, their active volume would be a qcow volume on top of
the aforementioned qcow volume, and that active volume will be removed when
the VM goes down.
But yeah, stateless VMs are intended for such a use case.
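
Schematically, while such a VM is running (file names just for illustration):

  # template-vol  <-  vm-vol  <-  active-vol (discarded on shutdown)
  qemu-img create -f qcow2 -b template-vol -F qcow2 vm-vol
  qemu-img create -f qcow2 -b vm-vol -F qcow2 active-vol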


[ovirt-users] Re: Ubuntu 20.04 cloud-init

2021-08-11 Thread Pavel Šipoš

Thank you for your answer.

I dug further and figured out that cleaning the previous cloud-init
configuration files on the template VM and reinstalling the cloud-init
package helped. So it's working now.
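
In case it helps others, roughly what I ran on the template VM (from memory,
so treat it as a sketch):

  # remove previous per-instance cloud-init state and logs
  sudo cloud-init clean --logs

  # reinstall the package
  sudo apt-get install --reinstall cloud-init

After sealing the template again, cloud-init ran as expected on first boot.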


Pavel

On 10/08/2021 13:20, Florian Schmid via Users wrote:

Hello Pavel,

We are also using 4.3, and for us cloud-init is working on 20.04.
We are using the unmodified Ubuntu 20.04 cloud image, and cloud-init adds
ssh keys to the root user and sets the IP address.

The rest is then done via ansible on our side.

Br Florian

- Original Message -
From: "Pavel Šipoš" 
To: "users" 
Sent: Tuesday, 10 August 2021 10:06:30
Subject: [ovirt-users] Ubuntu 20.04 cloud-init

Hi.

We are using oVirt version 4.3.10.4-1.el7

I am trying to use the cloud-init function in oVirt to set a password and add
ssh keys for a VM on first boot.
It works perfectly on CentOS 7 and CentOS 8, but not with Ubuntu 20.04.2.

On the VM, the cloud-init package and qemu-guest-agent are installed and the
services are running - I checked that, and then used Run Once to test further,
but no luck.
It looks like it's not even trying to set anything.
I am open to suggestions on how to debug this. I see no errors if I look
into the /var/log/cloud-init logs on the VM.
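
Roughly, the checks I did on the VM (commands from memory):

  systemctl status cloud-init qemu-guest-agent
  cloud-init status --long      # reports done/running and any errors
  grep -i error /var/log/cloud-init.log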

Did anyone else have a similar problem?
Can anyone confirm that the cloud-init function in oVirt should work with
Ubuntu 20.04?

Packages used:
cloud-init 21.2-3-g899bfaa9-0ubuntu2~20.04.1
qemu-guest-agent 1:4.2-3ubuntu6.17

I made ovirt template myself using packer (qemu) that also uses
cloud-init for bootstrapping.

Thank you in advance!
Pavel


--
Pavel Sipos, Arnes 
ARNES, p.p. 7, SI-1001 Ljubljana, Slovenia
T: +386 1 479 88 00
W: www.arnes.si, aai.arnes.si






[ovirt-users] Re: Sparse VMs from Templates - Storage issues

2021-08-11 Thread Shantur Rathore
> Yes, on file based storage a snapshot is a file, and it grows as
> needed.  On block based
> storage, a snapshot is a logical volume, and oVirt needs to extend it
> when needed.


Forgive my ignorance - I come from a vSphere background, where a filesystem
was created on the iSCSI LUN.
I take it that this isn't the case for an iSCSI Storage Domain in oVirt.

On Wed, Aug 11, 2021 at 12:26 PM Nir Soffer  wrote:

> On Wed, Aug 11, 2021 at 12:43 AM Shantur Rathore
>  wrote:
> >
> > Thanks for the detailed response Nir.
> >
> > In my use case, we keep creating VMs from templates and deleting them so
> we need the VMs to be created quickly and cloning it will use a lot of time
> and storage.
>
> That's a good reason to use a template.
>
> If your vm is temporary and you like to drop the data written while
> the vm is running, you
> could use a temporary disk based on the template. This is called a
> "transient disk" in vdsm.
>
> Arik, maybe you remember how transient disks are used in engine?
> Do we have an API to run a VM once, dropping the changes to the disk
> done while the VM was running?
>
> > I will try to add the config and try again tomorrow. Also I like the
> Managed Block storage idea, I had read about it in the past and used it
> with Ceph.
> >
> > Just to understand it better, is this issue only on iSCSI based storage?
>
> Yes, on file based storage a snapshot is a file, and it grows as
> needed.  On block based
> storage, a snapshot is a logical volume, and oVirt needs to extend it
> when needed.
>
> Nir
>
> > Thanks again.
> >
> > Regards
> > Shantur
> >
> > On Tue, Aug 10, 2021 at 9:26 PM Nir Soffer  wrote:
> >>
> >> On Tue, Aug 10, 2021 at 4:24 PM Shantur Rathore
> >>  wrote:
> >> >
> >> > Hi all,
> >> >
> >> > I have a setup as detailed below
> >> >
> >> > - iSCSI Storage Domain
> >> > - Template with Thin QCOW2 disk
> >> > - Multiple VMs from Template with Thin disk
> >>
> >> Note that a single template disk used by many vms can become a
> performance
> >> bottleneck, and is a single point of failure. Cloning the template when
> creating
> >> vms avoids such issues.
> >>
> >> > oVirt Node 4.4.4
> >>
> >> 4.4.4 is old, you should upgrade to 4.4.7.
> >>
> >> > When the VMs boots up it downloads some data to it and that leads to
> increase in volume size.
> >> > I see that every few seconds the VM gets paused with
> >> >
> >> > "VM X has been paused due to no Storage space error."
> >> >
> >> >  and then after few seconds
> >> >
> >> > "VM X has recovered from paused back to up"
> >>
> >> This is normal operation when a vm writes too quickly and oVirt cannot
> >> extend the disk quick enough. To mitigate this, you can increase the
> >> volume chunk size.
> >>
> >> Create this configuration drop-in file:
> >>
> >> # cat /etc/vdsm/vdsm.conf.d/99-local.conf
> >> [irs]
> >> volume_utilization_percent = 25
> >> volume_utilization_chunk_mb = 2048
> >>
> >> And restart vdsm.
> >>
> >> With this setting, when free space in a disk is 1.5g, the disk will
> >> be extended by 2g. With the default setting, when free space is
> >> 0.5g the disk was extended by 1g.
> >>
> >> If this does not eliminate the pauses, try a larger chunk size
> >> like 4096.
> >>
> >> > Sometimes after many pauses and recoveries the VM dies with
> >> >
> >> > "VM X is down with error. Exit message: Lost connection with qemu
> process."
> >>
> >> This means qemu has crashed. You can find more info in the vm log at:
> >> /var/log/libvirt/qemu/vm-name.log
> >>
> >> We know about bugs in qemu that cause such crashes when vm disk is
> >> extended. I think the latest bug was fixed in 4.4.6, so upgrading to
> 4.4.7
> >> will fix this issue.
> >>
> >> Even with these settings, if you have a very bursty io in the vm, it may
> >> become paused. The only way to completely avoid these pauses is to
> >> use a preallocated disk, or use file storage (e.g. NFS). Preallocated
> disk
> >> can be thin provisioned on the server side so it does not mean you need
> >> more storage, but you will not be able to use shared templates in the
> way
> >> you use them now. You can create vm from template, but the template
> >> is cloned to the new vm.
> >>
> >> Another option with (still tech preview) is Managed Block Storage
> (Cinder
> >> based storage). If your storage server is supported by Cinder, we can
> >> managed it using cinderlib. In this setup every disk is a LUN, which may
> >> be thin provisioned on the storage server. This can also offload storage
> >> operations to the server, like cloning disks, which may be much faster
> and
> >> more efficient.
> >>
> >> Nir
> >>
>
>
[ovirt-users] Re: Sparse VMs from Templates - Storage issues

2021-08-11 Thread Benny Zlotnik
> If your vm is temporary and you like to drop the data written while
> the vm is running, you
> could use a temporary disk based on the template. This is called a
> "transient disk" in vdsm.
>
> Arik, maybe you remember how transient disks are used in engine?
> Do we have an API to run a VM once, dropping the changes to the disk
> done while the VM was running?

I think that's how stateless VMs work


[ovirt-users] Re: Sparse VMs from Templates - Storage issues

2021-08-11 Thread Nir Soffer
On Wed, Aug 11, 2021 at 12:43 AM Shantur Rathore
 wrote:
>
> Thanks for the detailed response Nir.
>
> In my use case, we keep creating VMs from templates and deleting them so we 
> need the VMs to be created quickly and cloning it will use a lot of time and 
> storage.

That's a good reason to use a template.

If your vm is temporary and you like to drop the data written while
the vm is running, you
could use a temporary disk based on the template. This is called a
"transient disk" in vdsm.

Arik, maybe you remember how transient disks are used in engine?
Do we have an API to run a VM once, dropping the changes to the disk
done while the VM was running?

> I will try to add the config and try again tomorrow. Also I like the Managed 
> Block storage idea, I had read about it in the past and used it with Ceph.
>
> Just to understand it better, is this issue only on iSCSI based storage?

Yes, on file based storage a snapshot is a file, and it grows as
needed.  On block based
storage, a snapshot is a logical volume, and oVirt needs to extend it
when needed.

Nir

> Thanks again.
>
> Regards
> Shantur
>
> On Tue, Aug 10, 2021 at 9:26 PM Nir Soffer  wrote:
>>
>> On Tue, Aug 10, 2021 at 4:24 PM Shantur Rathore
>>  wrote:
>> >
>> > Hi all,
>> >
>> > I have a setup as detailed below
>> >
>> > - iSCSI Storage Domain
>> > - Template with Thin QCOW2 disk
>> > - Multiple VMs from Template with Thin disk
>>
>> Note that a single template disk used by many vms can become a performance
>> bottleneck, and is a single point of failure. Cloning the template when 
>> creating
>> vms avoids such issues.
>>
>> > oVirt Node 4.4.4
>>
>> 4.4.4 is old, you should upgrade to 4.4.7.
>>
>> > When the VMs boots up it downloads some data to it and that leads to 
>> > increase in volume size.
>> > I see that every few seconds the VM gets paused with
>> >
>> > "VM X has been paused due to no Storage space error."
>> >
>> >  and then after few seconds
>> >
>> > "VM X has recovered from paused back to up"
>>
>> This is normal operation when a vm writes too quickly and oVirt cannot
>> extend the disk quick enough. To mitigate this, you can increase the
>> volume chunk size.
>>
>> Create this configuration drop-in file:
>>
>> # cat /etc/vdsm/vdsm.conf.d/99-local.conf
>> [irs]
>> volume_utilization_percent = 25
>> volume_utilization_chunk_mb = 2048
>>
>> And restart vdsm.
>>
>> With this setting, when free space in a disk is 1.5g, the disk will
>> be extended by 2g. With the default setting, when free space is
>> 0.5g the disk was extended by 1g.
>>
>> If this does not eliminate the pauses, try a larger chunk size
>> like 4096.
>>
>> > Sometimes after many pauses and recoveries the VM dies with
>> >
>> > "VM X is down with error. Exit message: Lost connection with qemu process."
>>
>> This means qemu has crashed. You can find more info in the vm log at:
>> /var/log/libvirt/qemu/vm-name.log
>>
>> We know about bugs in qemu that cause such crashes when vm disk is
>> extended. I think the latest bug was fixed in 4.4.6, so upgrading to 4.4.7
>> will fix this issue.
>>
>> Even with these settings, if you have a very bursty io in the vm, it may
>> become paused. The only way to completely avoid these pauses is to
>> use a preallocated disk, or use file storage (e.g. NFS). Preallocated disk
>> can be thin provisioned on the server side so it does not mean you need
>> more storage, but you will not be able to use shared templates in the way
>> you use them now. You can create vm from template, but the template
>> is cloned to the new vm.
>>
>> Another option with (still tech preview) is Managed Block Storage (Cinder
>> based storage). If your storage server is supported by Cinder, we can
>> managed it using cinderlib. In this setup every disk is a LUN, which may
>> be thin provisioned on the storage server. This can also offload storage
>> operations to the server, like cloning disks, which may be much faster and
>> more efficient.
>>
>> Nir
>>


[ovirt-users] Re: Cannot restart ovirt after massive failure.

2021-08-11 Thread Yedidyah Bar David
On Tue, Aug 10, 2021 at 9:20 PM Gilboa Davara  wrote:
>
> Hello,
>
> Many thanks again for taking the time to try and help me recover this machine 
> (even though it would have been far easier to simply redeploy it...)
>
>> >
>> >
>> > Sadly enough, it seems that --clean-metadata requires an active agent.
>> > E.g.
>> > $ hosted-engine --clean-metadata
>> > The hosted engine configuration has not been retrieved from shared 
>> > storage. Please ensure that ovirt-ha-agent
>> > is running and the storage server is reachable.
>>
>> Did you try to search the net/list archives?
>
>
> Yes. All of them seem to repeat the same clean-metadata command (which fails).

I suppose we need better documentation. Sorry. Perhaps open a
bug/issue about that.

>
>>
>>
>> >
>> > Can I manually delete the metadata state files?
>>
>> Yes, see e.g.:
>>
>> https://lists.ovirt.org/pipermail/users/2016-April/072676.html
>>
>> As an alternative to the 'find' command there, you can also find the IDs 
>> with:
>>
>> $ grep metadata /etc/ovirt-hosted-engine/hosted-engine.conf
>>
>> Best regards,
>> --
>> Didi
>
>
> Yippie! Success (At least it seems that way...)
>
> Following https://lists.ovirt.org/pipermail/users/2016-April/072676.html,
> I stopped the broker and agent services, archived the existing hosted 
> metadata files, created an empty 1GB metadata file using dd, (dd if=/dev/zero 
> of=/run/vdsm/storage// bs=1M count=1024), making double sure 
> permissions (0660 / 0644), owner (vdsm:kvm) and SELinux labels (restorecon, 
> just incase) stay the same.
> Let everything settle down.
> Restarted the services
> ... and everything is up again :)
>
> I plan to let the engine run overnight with zero VMs (making sure all backups 
> are fully up-to-date).
> Once done, I'll return to normal (until I replace this setup with a normal 
> multi-node setup).
>
> Many thanks again!

Glad to hear that, welcome, thanks for the report!

More tests you might want to do before starting your real VMs:

- Set and later clear global maintenance from each host, and see that this
propagates to the others (both 'hosted-engine --vm-status' and agent.log);
e.g. as sketched below

- Migrate the engine VM between the hosts and see that this propagates

- Shut down the engine VM without global maintenance and see that it's started
automatically.

But I do not think all of this is mandatory, if 'hosted-engine --vm-status'
looks ok on all hosts.
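
For the first test, something like this on each host in turn (a sketch; the
exact output varies):

  hosted-engine --set-maintenance --mode=global
  hosted-engine --vm-status   # all hosts should now report global maintenance
  hosted-engine --set-maintenance --mode=none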

I'd still be careful with other things that might have been corrupted,
though - obviously can't tell you what/where...

Best regards,
-- 
Didi