[ovirt-users] Re: Managed Block Storage and Templates

2021-09-28 Thread Shantur Rathore
Possibly due to https://bugzilla.redhat.com/show_bug.cgi?id=2008533

On Fri, Sep 24, 2021 at 10:35 AM Shantur Rathore
 wrote:
>
> I tried external Ceph with cinderlib and Synology iSCSI with cinderlib,
> both as Managed Block Storage.
>
> On Fri, 24 Sep 2021, 09:51 Gianluca Cecchi,  wrote:
>>
>> On Wed, Sep 22, 2021 at 2:30 PM Shantur Rathore  
>> wrote:
>>>
>>> Hi all,
>>>
>>> Anyone tried using Templates with Managed Block Storage?
>>> I created a VM on MBS and then took a snapshot.
>>> This worked, but as soon as I created a Template from the snapshot, the
>>> template was created with no disk attached to it.
>>>
>>> Anyone seeing something similar?
>>>
>>> Thanks
>>>
>>
>> Are you using an external ceph cluster? Or what other cinder volume driver 
>> have you configured for the MBS storage domain?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WX7AUAMMHMLSLETYESUJFLRWX4NMWXOH/


[ovirt-users] Re: Managed Block Storage issues

2021-09-28 Thread Shantur Rathore
For the 2nd issue, created: https://bugzilla.redhat.com/show_bug.cgi?id=2008533

I still need to test the rule.
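Roughly how I plan to check the generated rule by hand (device name taken
from the logs below; replace dm-X with the actual dm node):

# re-run udev for the device and check the resulting ownership
udevadm trigger --action=change /dev/mapper/360014054b727813d1bc4d4cefdade7db
ls -l /dev/mapper/360014054b727813d1bc4d4cefdade7db /dev/dm-*
# dry-run the rule processing for the underlying dm node
udevadm test /sys/class/block/dm-X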

On Wed, Sep 22, 2021 at 11:59 AM Benny Zlotnik  wrote:
>
> I see the rule is created in the logs:
>
> MainProcess|jsonrpc/5::DEBUG::2021-09-22
> 10:39:37,504::supervdsm_server::95::SuperVdsm.ServerCallback::(wrapper)
> call add_managed_udev_rule with
> ('ed1a0e9f-4d30-4896-b965-534861cc0c02',
> '/dev/mapper/360014054b727813d1bc4d4cefdade7db') {}
> MainProcess|jsonrpc/5::DEBUG::2021-09-22
> 10:39:37,505::udev::124::SuperVdsm.ServerCallback::(add_managed_udev_rule)
> Creating rule 
> /etc/udev/rules.d/99-vdsm-managed_ed1a0e9f-4d30-4896-b965-534861cc0c02.rules:
> 'SYMLINK=="mapper/360014054b727813d1bc4d4cefdade7db",
> RUN+="/usr/bin/chown vdsm:qemu $env{DEVNAME}"\n'
>
> While we no longer test backends other than Ceph, this used to work
> back when we started, and it worked for NetApp. Perhaps this rule is
> incorrect; can you check it manually?
>
> Regarding 2, can you please submit a bug?
>
> On Wed, Sep 22, 2021 at 1:03 PM Shantur Rathore
>  wrote:
> >
> > Hi all,
> >
> > I am trying to set up Managed block storage and have the following issues.
> >
> > My setup:
> > Latest oVirt Node NG : 4.4.8
> > Latest oVirt Engine : 4.4.8
> >
> > 1. Unable to copy to iSCSI based block storage
> >
> > I created an MBS domain with a Synology UC3200 as a backend (supported by
> > Cinderlib). It was created fine, but when I try to copy disks to it,
> > it fails.
> > Upon looking at the logs from SPM, I found "qemu-img" failed with an
> > error that it cannot open "/dev/mapper/xx" : Permission Error.
> > Looking through the code and digging further, I saw that
> > a. Sometimes the /dev/mapper/ symlink isn't created (log attached)
> > b. The ownership of /dev/mapper/xx and /dev/dm-xx for the new
> > device always stays at root:root
> >
> > I added a udev rule
> > ACTION=="add|change", ENV{DM_UUID}=="mpath-*", GROUP="qemu",
> > OWNER="vdsm", MODE="0660"
> >
> > and the disk copied correctly when /dev/mapper/x got created.
> >
> > 2. Copy progress finishes in the UI much earlier than the actual qemu-img process.
> > The UI shows the Copy process is completed successfully but it's
> > actually still copying the image.
> > This happens for both Ceph- and iSCSI-based MBS.
> >
> > Is there any known workaround to get iSCSI MBS working?
> >
> > Kind regards,
> > Shantur
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/G6TMTW23SUAKR4UOXVSZKXHJY3PVMIDD/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5MJDMKCIVMLTR3HWJT3634YDTWYQAVOY/


[ovirt-users] Re: Managed Block Storage and Templates

2021-09-24 Thread Shantur Rathore
I tried external Ceph with cinderlib and Synology iSCSI with cinderlib,
both as Managed Block Storage.

On Fri, 24 Sep 2021, 09:51 Gianluca Cecchi, 
wrote:

> On Wed, Sep 22, 2021 at 2:30 PM Shantur Rathore 
> wrote:
>
>> Hi all,
>>
>> Anyone tried using Templates with Managed Block Storage?
>> I created a VM on MBS and then took a snapshot.
>> This worked, but as soon as I created a Template from the snapshot, the
>> template was created with no disk attached to it.
>>
>> Anyone seeing something similar?
>>
>> Thanks
>>
>>
> Are you using an external ceph cluster? Or what other cinder volume driver
> have you configured for the MBS storage domain?
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GAGQFFIB2FWLC4AZJ2IXF7FZV3ATMQLY/


[ovirt-users] Re: VM hanging at sustained high throughput

2021-09-23 Thread Shantur Rathore
On Thu, Sep 23, 2021 at 8:20 PM Strahil Nikolov via Users
 wrote:
>
> What happens if you define a tmpfs and then create the qemu disk on top of
> that ramdisk?
> Does qemu hang again?

It works fine. I cannot reproduce the issue.
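The test looked roughly like this (sizes and paths are illustrative, not the
exact values I used):

# back the scratch qcow2 with RAM instead of real storage
mount -t tmpfs -o size=10G tmpfs /mnt/ramdisk
qemu-img create -f qcow2 /mnt/ramdisk/scratch.qcow2 8G
# attach scratch.qcow2 to the VM and repeat the heavy dd workload in the guest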

>
> Best Regards,
> Strahil Nikolov
>
> On Thu, Sep 23, 2021 at 18:25, Shantur Rathore
>  wrote:
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CR5THX22KMQ4NC6USG6OEN4FJ44BZJ3O/
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BOG66ASKJELWFVTWDVDTNJQCSMSDNODI/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DAWGCEUIWLZCSYNLGD2REZABDB4NWAUN/


[ovirt-users] Re: VM hanging at sustained high throughput

2021-09-23 Thread Shantur Rathore
I see the same issue on a local scratch disk created with the scratchpad hook.
I think qemu should block the I/O rather than pausing the VM.

On Thu, Sep 23, 2021 at 4:06 PM David Johnson 
wrote:

> I replaced the SSD intent log drive with an NVME drive, and the system is
> much more stable now.
>
> *David Johnson*
> *Director of Development, Maxis Technology*
> 844.696.2947 ext 702 (o) | 479.531.3590 (c)
>
>
> On Thu, May 27, 2021 at 5:57 PM David Johnson <
> djohn...@maxistechnology.com> wrote:
>
>> Hi ovirt gurus,
>>
>> This is an interesting issue, one I never expected to have.
>>
>> When I push high volumes of writes to my NAS, I will cause VMs to go
>> into a paused state. I'm looking at this from a number of angles, including
>> upgrades on the NAS appliance.
>>
>> I can reproduce this problem at will running a CentOS 7.9 VM on oVirt 4.5.
>>
>> *Questions:*
>>
>> 1. Is my analysis of the failure (below) reasonable/correct?
>>
>> 2. What am I looking for to validate this?
>>
>> 3. Is there a configuration that I can set to make it a little more
>> robust while I acquire the hardware to improve the NAS?
>>
>>
>> *Reproduction:*
>>
>> Standard test of file write speed:
>>
>> [root@cen-79-pgsql-01 ~]# dd if=/dev/zero of=./test bs=512k count=4096
>> oflag=direct
>> 4096+0 records in
>> 4096+0 records out
>> 2147483648 bytes (2.1 GB) copied, 1.68431 s, 1.3 GB/s
>>
>>
>> Give it more data
>>
>> [root@cen-79-pgsql-01 ~]# dd if=/dev/zero of=./test bs=512k count=12228
>> oflag=direct
>> 12228+0 records in
>> 12228+0 records out
>> 6410993664 bytes (6.4 GB) copied, 7.22078 s, 888 MB/s
>>
>>
>> The odds are about 50/50 that 6 GB will kill the VM, but 100% when I hit
>> 8 GB.
>>
>> *Analysis:*
>>
>> What I think is happening is that the intent cache on the NAS
>> is on an SSD, and my VMs are pushing data about three times as fast as the
>> SSD can handle. When the SSD gets queued up beyond a certain point, the NAS
>> (which places reliability over speed) says "Whoah Nellie!", and the VM
>> chokes.
>>
>>
>> *David Johnson*
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TL22ZDND4FSV7RFSKPAWYJKBEBYV6AWC/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CR5THX22KMQ4NC6USG6OEN4FJ44BZJ3O/


[ovirt-users] Re: Sparse VMs from Templates - Storage issues

2021-09-23 Thread Shantur Rathore
So,
I did more digging and now I know how to reproduce it.
I created a VM and added a disk on a local SSD using the scratchpad hook,
then formatted and mounted this scratch disk.
Now, when I do heavy IO on this scratch disk, like
dd if=/dev/zero of=/mnt/scratchdisk/test bs=1M count=1, qemu
pauses the VM.
The libvirt debug logs show:

2021-09-23 11:04:32.765+0000: 463319: debug : virThreadJobSet:94 :
Thread 463319 (rpc-worker) is now running job
remoteDispatchNodeGetFreePages
2021-09-23 11:04:32.765+0000: 463319: debug : virNodeGetFreePages:1614
: conn=0x7f8620018ba0, npages=3, pages=0x7f8670009960,
startCell=4294967295, cellCount=1, counts=0x7f8670007db0, flags=0x0
2021-09-23 11:04:32.765+0000: 463319: debug : virThreadJobClear:119 :
Thread 463319 (rpc-worker) finished job remoteDispatchNodeGetFreePages
with ret=0
2021-09-23 11:04:34.235+0000: 488774: debug :
qemuMonitorJSONIOProcessLine:220 : Line [{"timestamp": {"seconds":
1632395074, "microseconds": 235454}, "event": "BLOCK_IO_ERROR",
"data": {"device": "", "nospace": false, "node-name":
"libvirt-3-format", "reason": "Input/output error", "operation":
"write", "action": "stop"}}]
2021-09-23 11:04:34.235+0000: 488774: info :
qemuMonitorJSONIOProcessLine:235 : QEMU_MONITOR_RECV_EVENT:
mon=0x7f860c14b700 event={"timestamp": {"seconds": 1632395074,
"microseconds": 235454}, "event": "BLOCK_IO_ERROR", "data": {"device":
"", "nospace": false, "node-name": "libvirt-3-format", "reason":
"Input/output error", "operation": "write", "action": "stop"}}
2021-09-23 11:04:34.235+0000: 488774: debug :
qemuMonitorJSONIOProcessEvent:181 : mon=0x7f860c14b700
obj=0x7f860c0e7450
2021-09-23 11:04:34.235+0000: 488774: debug :
qemuMonitorEmitEvent:1166 : mon=0x7f860c14b700 event=BLOCK_IO_ERROR
2021-09-23 11:04:34.235+0000: 488774: debug :
qemuProcessHandleEvent:581 : vm=0x7f86201d6df0
2021-09-23 11:04:34.235+0000: 488774: debug : virObjectEventNew:624 :
obj=0x7f860c0d82f0
2021-09-23 11:04:34.235+0000: 488774: debug :
qemuMonitorJSONIOProcessEvent:206 : handle BLOCK_IO_ERROR
handler=0x7f8639c77a90 data=0x7f860c0661c0

To confirm: the local SSD is fine, there is enough space where the scratch
disk is located, and I could run the same dd on the host without any issues.

This happens on other storage types as well,
so this seems like an issue with qemu when heavy IO is happening on a disk.
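For what it's worth, the "action": "stop" in the BLOCK_IO_ERROR event above
matches the error policy libvirt configures on the disk, so the pause itself
is policy rather than a crash. An illustrative snippet (not necessarily the
exact XML oVirt generates):

<disk type='file' device='disk'>
  <!-- error_policy='stop' pauses the VM on I/O errors;
       'report' would return the error to the guest instead -->
  <driver name='qemu' type='qcow2' error_policy='stop'/>
</disk>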

On Thu, Sep 23, 2021 at 7:19 AM Tommy Sway  wrote:
>
> Another option (still tech preview) is Managed Block Storage (Cinder
> based storage).
>
> Is it still tech preview in 4.4?
>
>
>
>
>
>
>
> -Original Message-
> From: users-boun...@ovirt.org  On Behalf Of Nir 
> Soffer
> Sent: Wednesday, August 11, 2021 4:26 AM
> To: Shantur Rathore 
> Cc: users ; Roman Bednar 
> Subject: [ovirt-users] Re: Sparse VMs from Templates - Storage issues
>
> On Tue, Aug 10, 2021 at 4:24 PM Shantur Rathore  
> wrote:
> >
> > Hi all,
> >
> > I have a setup as detailed below
> >
> > - iSCSI Storage Domain
> > - Template with Thin QCOW2 disk
> > - Multiple VMs from Template with Thin disk
>
> Note that a single template disk used by many vms can become a performance 
> bottleneck, and is a single point of failure. Cloning the template when 
> creating vms avoids such issues.
>
> > oVirt Node 4.4.4
>
> 4.4.4 is old, you should upgrade to 4.4.7.
>
> > When the VMs boot up they download some data, and that leads to an
> > increase in volume size.
> > I see that every few seconds the VM gets paused with
> >
> > "VM X has been paused due to no Storage space error."
> >
> >  and then after few seconds
> >
> > "VM X has recovered from paused back to up"
>
> This is normal operation when a vm writes too quickly and oVirt cannot extend
> the disk quickly enough. To mitigate this, you can increase the volume chunk
> size.
>
> Create this configuration drop-in file:
>
> # cat /etc/vdsm/vdsm.conf.d/99-local.conf
> [irs]
> volume_utilization_percent = 25
> volume_utilization_chunk_mb = 2048
>
> And restart vdsm.
>
> With this setting, when free space in a disk is 1.5g, the disk will be 
> extended by 2g. With the default setting, when free space is 0.5g the disk 
> was extended by 1g.
>
> If this does not eliminate the pauses, try a larger chunk size like 4096.
>
> > Sometimes, after many pauses and recoveries, the VM dies with
> >
> > "VM X is down with error. Exit message: Lost connection with qemu process."

[ovirt-users] Re: Sparse VMs from Templates - Storage issues

2021-09-22 Thread Shantur Rathore
I have actually tried many types of storage now and all have this issue.

I am out of ideas on what to do.
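One thing worth noting from the thread quoted below: the "transient disk"
Nir mentions is essentially a local qcow2 overlay backed by the volume on
shared storage, roughly like this (paths are hypothetical, not the exact
vdsm invocation):

# writes land in the local overlay; deleting it discards everything written
qemu-img create -f qcow2 \
    -b /rhev/data-center/mnt/.../images/img-uuid/vol-uuid -F qcow2 \
    /var/lib/vdsm/transient/overlay.qcow2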

On Wed, Sep 22, 2021 at 4:39 PM Shantur Rathore
 wrote:
>
> Hi Nir,
>
> Just to report.
> As suggested, I created a POSIX-compliant storage domain with CephFS
> and copied my templates to CephFS.
> Now I created VMs from CephFS templates and the storage error happens again.
> As I understand, the storage growth issue is only on iSCSI.
>
> Am I doing something wrong?
>
> Kind regards,
> Shantur
>
> On Wed, Aug 11, 2021 at 2:42 PM Nir Soffer  wrote:
> >
> > On Wed, Aug 11, 2021 at 4:24 PM Arik Hadas  wrote:
> > >
> > >
> > >
> > > On Wed, Aug 11, 2021 at 2:56 PM Benny Zlotnik  wrote:
> > >>
> > >> > If your vm is temporary and you like to drop the data written while
> > >> > the vm is running, you
> > >> > could use a temporary disk based on the template. This is called a
> > >> > "transient disk" in vdsm.
> > >> >
> > >> > Arik, maybe you remember how transient disks are used in engine?
> > >> > Do we have an API to run a VM once, dropping the changes to the disk
> > >> > done while the VM was running?
> > >>
> > >> I think that's how stateless VMs work
> > >
> > >
> > > +1
> > > It doesn't work exactly like Nir wrote above - stateless VMs that are 
> > > thin-provisioned would have a qcow volume on top of each template's 
> > > volume and when they starts, their active volume would be a qcow volume 
> > > on top of the aforementioned qcow volume and that active volume will be 
> > > removed when the VM goes down
> > > But yeah, stateless VMs are intended for such use case
> >
> > I was referring to transient disks - created in vdsm:
> > https://github.com/oVirt/vdsm/blob/45903d01e142047093bf844628b5d90df12b6ffb/lib/vdsm/virt/vm.py#L3789
> >
> > This creates a *local* temporary file using qcow2 format, using the
> > disk on shared
> > storage as a backing file.
> >
> > Maybe this is not used by engine?
> >
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3UEXYH2IGNDWWYEHEHKLAREJS74LMXUI/


[ovirt-users] Re: Sparse VMs from Templates - Storage issues

2021-09-22 Thread Shantur Rathore
Hi Nir,

Just to report.
As suggested, I created a POSIX-compliant storage domain with CephFS
and copied my templates to CephFS.
Now I created VMs from CephFS templates and the storage error happens again.
As I understood it, the storage growth issue should only affect iSCSI.

Am I doing something wrong?

Kind regards,
Shantur

On Wed, Aug 11, 2021 at 2:42 PM Nir Soffer  wrote:
>
> On Wed, Aug 11, 2021 at 4:24 PM Arik Hadas  wrote:
> >
> >
> >
> > On Wed, Aug 11, 2021 at 2:56 PM Benny Zlotnik  wrote:
> >>
> >> > If your vm is temporary and you like to drop the data written while
> >> > the vm is running, you
> >> > could use a temporary disk based on the template. This is called a
> >> > "transient disk" in vdsm.
> >> >
> >> > Arik, maybe you remember how transient disks are used in engine?
> >> > Do we have an API to run a VM once, dropping the changes to the disk
> >> > done while the VM was running?
> >>
> >> I think that's how stateless VMs work
> >
> >
> > +1
> > It doesn't work exactly like Nir wrote above - stateless VMs that are 
> > thin-provisioned would have a qcow volume on top of each template's volume 
> > and when they start, their active volume would be a qcow volume on top of 
> > the aforementioned qcow volume and that active volume will be removed when 
> > the VM goes down
> > But yeah, stateless VMs are intended for such use case
>
> I was referring to transient disks - created in vdsm:
> https://github.com/oVirt/vdsm/blob/45903d01e142047093bf844628b5d90df12b6ffb/lib/vdsm/virt/vm.py#L3789
>
> This creates a *local* temporary file using qcow2 format, using the
> disk on shared
> storage as a backing file.
>
> Maybe this is not used by engine?
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZEMCITVILEFHZ2R4QIVUJ26TL6LYMDRY/


[ovirt-users] Managed Block Storage and Templates

2021-09-22 Thread Shantur Rathore
Hi all,

Anyone tried using Templates with Managed Block Storage?
I created a VM on MBS and then took a snapshot.
This worked, but as soon as I created a Template from the snapshot, the
template was created with no disk attached to it.

Anyone seeing something similar?

Thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z6SPHZ3XOSXRYE72SWRANTXZCA27RKDY/


[ovirt-users] Managed Block Storage issues

2021-09-22 Thread Shantur Rathore
Hi all,

I am trying to set up Managed block storage and have the following issues.

My setup:
Latest oVirt Node NG : 4.4.8
Latest oVirt Engine : 4.4.8

1. Unable to copy to iSCSI based block storage

I created an MBS domain with a Synology UC3200 as a backend (supported by
Cinderlib). It was created fine, but when I try to copy disks to it,
it fails.
Upon looking at the logs from SPM, I found "qemu-img" failed with an
error that it cannot open "/dev/mapper/xx" : Permission Error.
Looking through the code and digging further, I saw that
a. Sometimes the /dev/mapper/ symlink isn't created (log attached)
b. The ownership of /dev/mapper/xx and /dev/dm-xx for the new
device always stays at root:root

I added a udev rule
ACTION=="add|change", ENV{DM_UUID}=="mpath-*", GROUP="qemu",
OWNER="vdsm", MODE="0660"

and the disk copied correctly when /dev/mapper/x got created.
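For anyone wanting to reproduce the workaround, persisting and activating
that rule looks roughly like this (the rule file name is arbitrary):

cat > /etc/udev/rules.d/98-mbs-multipath-perms.rules <<'EOF'
ACTION=="add|change", ENV{DM_UUID}=="mpath-*", GROUP="qemu", OWNER="vdsm", MODE="0660"
EOF
udevadm control --reload-rules
udevadm trigger --subsystem-match=block --action=change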

2. Copy progress finishes in the UI much earlier than the actual qemu-img process.
The UI shows the Copy process is completed successfully but it's
actually still copying the image.
This happens for both Ceph- and iSCSI-based MBS.

Is there any known workaround to get iSCSI MBS working?

Kind regards,
Shantur
2021-09-22 10:39:13,880+0100 INFO  (jsonrpc/0) [vdsm.api] START 
repoStats(domains=['a7ed0992-91f6-4236-8e18-29c5def2c845']) 
from=127.0.0.1,40072, task_id=16cf6f3d-a87f-4165-a07f-d41f6f9b9175 (api:48)
2021-09-22 10:39:13,881+0100 INFO  (jsonrpc/0) [vdsm.api] FINISH repoStats 
return={'a7ed0992-91f6-4236-8e18-29c5def2c845': {'code': 0, 'lastCheck': '4.9', 
'delay': '0.000471624', 'valid': True, 'version': 5, 'acquired': True, 
'actual': True}} from=127.0.0.1,40072, 
task_id=16cf6f3d-a87f-4165-a07f-d41f6f9b9175 (api:54)
2021-09-22 10:39:14,507+0100 INFO  (Reactor thread) 
[ProtocolDetector.AcceptorImpl] Accepted connection from 127.0.0.1:33470 
(protocoldetector:61)
2021-09-22 10:39:14,514+0100 WARN  (Reactor thread) [vds.dispatcher] unhandled 
write event (betterAsyncore:184)
2021-09-22 10:39:14,514+0100 INFO  (Reactor thread) [ProtocolDetector.Detector] 
Detected protocol stomp from 127.0.0.1:33470 (protocoldetector:125)
2021-09-22 10:39:14,515+0100 INFO  (Reactor thread) [Broker.StompAdapter] 
Processing CONNECT request (stompserver:95)
2021-09-22 10:39:14,516+0100 INFO  (JsonRpc (StompReactor)) 
[Broker.StompAdapter] Subscribe command received (stompserver:124)
2021-09-22 10:39:14,993+0100 INFO  (jsonrpc/4) [vdsm.api] START 
getSpmStatus(spUUID='0eed07c4-782d-11eb-9ca3-00163e7a233b') 
from=10.187.21.239,38252, task_id=30be11cd-e901-4db7-8bd5-cb71e8ff39ac (api:48)
2021-09-22 10:39:14,996+0100 INFO  (jsonrpc/4) [vdsm.api] FINISH getSpmStatus 
return={'spm_st': {'spmStatus': 'SPM', 'spmLver': 28, 'spmId': 8}} 
from=10.187.21.239,38252, task_id=30be11cd-e901-4db7-8bd5-cb71e8ff39ac (api:54)
2021-09-22 10:39:15,001+0100 INFO  (jsonrpc/5) [vdsm.api] START 
getStoragePoolInfo(spUUID='0eed07c4-782d-11eb-9ca3-00163e7a233b') 
from=10.187.21.239,38276, task_id=83c9e302-b093-4016-b1bd-63076e6ac5ac (api:48)
2021-09-22 10:39:15,004+0100 INFO  (jsonrpc/5) [vdsm.api] FINISH 
getStoragePoolInfo return={'info': {'domains': 
'e2627376-2254-4e90-9478-0223ef873214:Active,176ed26c-5e20-4b2d-ab10-855f519e0b0f:Active,a7ed0992-91f6-4236-8e18-29c5def2c845:Active',
 'isoprefix': '', 'lver': 28, 'master_uuid': 
'a7ed0992-91f6-4236-8e18-29c5def2c845', 'master_ver': 1, 'name': 'No 
Description', 'pool_status': 'connected', 'spm_id': 8, 'type': 'ISCSI', 
'version': '5'}, 'dominfo': {'e2627376-2254-4e90-9478-0223ef873214': {'status': 
'Active', 'alerts': [], 'isoprefix': '', 'version': 5, 'disktotal': 
'778107871232', 'diskfree': '740668747776'}, 
'176ed26c-5e20-4b2d-ab10-855f519e0b0f': {'status': 'Active', 'alerts': [], 
'isoprefix': '', 'version': 5, 'disktotal': '1610210082816', 'diskfree': 
'528817848320'}, 'a7ed0992-91f6-4236-8e18-29c5def2c845': {'status': 'Active', 
'alerts': [], 'isoprefix': '', 'version': 5, 'disktotal': '106971529216', 
'diskfree': '18119393280'}}} from=10.187.21.239,38276, 
task_id=83c9e302-b093-4016-b1bd-63076e6ac5ac (api:54)

==> /var/log/vdsm/supervdsm.log <==
MainProcess|mpathhealth::DEBUG::2021-09-22 
10:39:15,525::supervdsm_server::95::SuperVdsm.ServerCallback::(wrapper) call 
dmsetup_run_status with ('multipath',) {}
MainProcess|mpathhealth::DEBUG::2021-09-22 
10:39:15,525::commands::153::common.commands::(start) /usr/bin/taskset 
--cpu-list 0-23 /usr/sbin/dmsetup status --target multipath (cwd None)
MainProcess|mpathhealth::DEBUG::2021-09-22 
10:39:15,539::commands::98::common.commands::(run) SUCCESS: <err> = b''; <rc> = 0
MainProcess|mpathhealth::DEBUG::2021-09-22 
10:39:15,540::supervdsm_server::102::SuperVdsm.ServerCallback::(wrapper) return 
dmsetup_run_status with b'36001405793765d9df7abd4849da87bdc: 0 4123000832 
multipath 2 0 1 0 2 1 A 0 1 2 8:32 A 0 0 1 E 0 1 2 8:16 A 0 0 1 
\n36001405534d4874d3b27d4d1cd86a0d0: 0 3145728000 multipath 2 0 1 0 2 1 A 0 1 2 
8:96 A 0 0 1 E 0 1 2 8:80 A 0 0 1 \n36001405534d48dbda440d4f9bd8c15df: 0 
2097

[ovirt-users] Re: Correct way to install VDSM hooks

2021-09-06 Thread Shantur Rathore
Awesome, thanks!

If someone else is looking, the commands that worked for me were

sudo yum install -y https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm

and

yum --disablerepo=* --enablerepo=ovirt-4.4 install vdsm-hook-scratchpad.noarch

as I am using 4.4

Regards,
Shantur

On Thu, Aug 26, 2021 at 9:01 AM Roman Bednar  wrote:
>
> Hello Shantur,
>
> it seems your yum repos might be just misconfigured. The easiest way to 
> configure it is to use one of the rpms provided:
>
> $sudo yum install -y 
> http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
>
> See this link for other releases (rpms) if master is not the right choice for 
> you:
> http://resources.ovirt.org/pub/yum-repo/
>
> This rpm provides yum repo configs you need for installing vdsm hooks:
>
> [root@host-vm yum.repos.d]# rpm -ql 
> ovirt-release-master-4.4.8-0.0.master.20210825011136.git59df936.el8.noarch
> /etc/yum.repos.d/ovirt-master-dependencies.repo
> /etc/yum.repos.d/ovirt-master-snapshot.repo
> /usr/share/ovirt-release-master
> /usr/share/ovirt-release-master/node-optional.repo
> /usr/share/ovirt-release-master/ovirt-el8-ppc64le-deps.repo
> /usr/share/ovirt-release-master/ovirt-el8-stream-ppc64le-deps.repo
> /usr/share/ovirt-release-master/ovirt-el8-stream-x86_64-deps.repo
> /usr/share/ovirt-release-master/ovirt-el8-x86_64-deps.repo
> /usr/share/ovirt-release-master/ovirt-el9-stream-x86_64-deps.repo
> /usr/share/ovirt-release-master/ovirt-snapshot.repo
> /usr/share/ovirt-release-master/ovirt-tested.repo
>
> [root@host-vm yum.repos.d]# cat /etc/yum.repos.d/ovirt-master-snapshot.repo
> [ovirt-master-snapshot]
> name=Latest oVirt master nightly snapshot
> #baseurl=https://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/el$releasever/
> mirrorlist=https://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-master-snapshot-el$releasever
> enabled=1
> gpgcheck=0
> countme=1
> fastestmirror=1
>
> [ovirt-master-snapshot-static]
> name=Latest oVirt master additional nightly snapshot
> #baseurl=https://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/el$releasever/
> mirrorlist=https://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-master-snapshot-static-el$releasever
> enabled=1
> gpgcheck=0
> countme=1
> fastestmirror=1
>
>
> Now the hook installation should work:
>
>
> [root@host-vm yum.repos.d]# yum repolist ovirt-master-snapshot -v
> Loaded plugins: builddep, changelog, config-manager, copr, debug, 
> debuginfo-install, download, generate_completion_cache, groups-manager, 
> needs-restarting, playground, repoclosure, repodiff, repograph, repomanage, 
> reposync, uploadprofile, vdsmupgrade
> YUM version: 4.7.0
> cachedir: /var/cache/dnf
> Last metadata expiration check: 0:04:54 ago on Wed 25 Aug 2021 05:06:23 PM 
> CEST.
> Repo-id: ovirt-master-snapshot
> Repo-name  : Latest oVirt master nightly snapshot
> Repo-status: enabled
> Repo-revision  : 1629947320
> Repo-updated   : Thu 26 Aug 2021 05:08:40 AM CEST
> Repo-pkgs  : 256
> Repo-available-pkgs: 256
> Repo-size  : 12 G
> Repo-mirrors   : 
> https://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-master-snapshot-el8
> Repo-baseurl   : 
> https://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/el8/ (14 more)
> Repo-expire: 172,800 second(s) (last: Wed 25 Aug 2021 05:06:20 PM 
> CEST)
> Repo-filename  : /etc/yum.repos.d/ovirt-master-snapshot.repo
> Total packages: 256
>
> [root@host-vm yum.repos.d]# yum --disablerepo=* 
> --enablerepo=ovirt-master-snapshot install vdsm-hook-scratchpad.noarch
> Last metadata expiration check: 0:05:19 ago on Wed 25 Aug 2021 05:06:20 PM 
> CEST.
> Dependencies resolved.
> =
>  Package Architecture  Version
> RepositorySize
> =
> Installing:
>  vdsm-hook-scratchpadnoarch
> 4.40.90-1.el8  ovirt-master-snapshot
> 9.0 k
>
> Transaction Summary
> =============
> Install  1 Package
>
> Total download size: 9.0 k
> Installed size: 4.9 k
> Is this ok [y/N]:
>
>
> Let me know if you need further assistance and have a great day.
>
> On Wed, Aug 

[ovirt-users] Re: Correct way to install VDSM hooks

2021-08-25 Thread Shantur Rathore
Hi all,

just bumping if anyone missed this

Thanks

On Tue, Aug 24, 2021 at 9:29 AM Shantur Rathore 
wrote:

> Hi all,
>
> I am trying to install vdsm hooks (scratchpad) specifically.
> I can see that there are rpms available in
> https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/noarch/ but when I try
> to install using yum it says it cannot find it.
> Do I have to manually select the hook rpm and install it or am I missing
> something?
>
> Kind regards,
> Shantur
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D3CQ6RNLP3BYL4QQVHW5VPSASXWUAL2Y/


[ovirt-users] Re: Resize iSCSI LUN and Storage Domain

2021-08-24 Thread Shantur Rathore
@Dhanraj.ramesh

> what version of oVirt are you using? what is the storage model? try to
> perform a scan by selecting the storage domain that you have resized at the
> storage side. if that does not help, add a new storage domain and perform
> discovery one more time and see whether it can be discovered

I am using 4.4.5 and I tried all that but couldn't get it to work. In the
end I created another storage domain and moved all VMs to it.

@shani : Thanks for that.




On Tue, Aug 24, 2021 at 2:07 PM Shani Leviim  wrote:

> Hi Shantur,
> The "Additional Size" column mentioned was merged with the 'add' column,
> so the new column is called 'Actions' [1].
> (manage domain while the sd is active).
>
> There's also an option to refresh the LUNs size while the VM is up and
> running (available since version 4.4.5) [2].
> It also has a short demo for showing the refresh [3]
>
> [1] https://gerrit.ovirt.org/c/ovirt-engine/+/84366
> 
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1155275
> [3] https://imgur.com/a/Rjid65i
>
>
>
>
> *Regards,*
>
> *Shani Leviim*
>
>
> On Tue, Aug 24, 2021 at 12:59 PM dhanaraj.ramesh--- via Users <
> users@ovirt.org> wrote:
>
>> Hi Shantur
>>
>> what version of oVirt are you using? what is the storage model? try to
>> perform a scan by selecting the storage domain that you have resized at the
>> storage side. if that does not help, add a new storage domain and perform
>> discovery one more time and see whether it can be discovered
>>
>> extending the storage domain size can be done on the fly, no maintenance
>> required.
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NRY4C66C6PD4TPF3TA3T2UC6LSW67HVY/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NDEESZFB5GS2WSLHTKU6PIADNQSQJROH/


[ovirt-users] Correct way to install VDSM hooks

2021-08-24 Thread Shantur Rathore
Hi all,

I am trying to install vdsm hooks (scratchpad) specifically.
I can see that there are rpms available in
https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/noarch/ but when I try to
install using yum it says it cannot find it.
Do I have to manually select the hook rpm and install it or am I missing
something?

Kind regards,
Shantur
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CMHNGAFGANDW5UVY5K7RIOWHGLEZZ56W/


[ovirt-users] Faster local disk for VM

2021-08-20 Thread Shantur Rathore
Hi Users,

Does oVirt support node local disk cache or scratch / transient disk for
faster disk access?
The idea is similar to options we get on cloud where one can attach fast
node local scratch disk/ssd to the VM which gets deleted when VM dies.

Thanks in advance.

Kind Regards,
Shantur
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I7HBUUJES5P46O6CLHPM3ORAPTDGU5FF/


[ovirt-users] Re: Sparse VMs from Templates - Storage issues

2021-08-11 Thread Shantur Rathore
> Yes, on file based storage a snapshot is a file, and it grows as
> needed.  On block based
> storage, a snapshot is a logical volume, and oVirt needs to extend it
> when needed.


Forgive my ignorance; I come from a vSphere background, where a filesystem is
created on the iSCSI LUN.
I take it this isn't the case for an iSCSI Storage Domain in oVirt.
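From what I gather, on a block storage domain oVirt turns the LUNs into an
LVM volume group named after the storage domain UUID, and each disk volume or
snapshot is a logical volume that vdsm extends on demand. Something like this
on a host should show it (the UUID is a placeholder):

vgs                                        # one VG per block storage domain
lvs -o lv_name,lv_size,lv_tags <sd-uuid>   # one LV per volume/snapshot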

On Wed, Aug 11, 2021 at 12:26 PM Nir Soffer  wrote:

> On Wed, Aug 11, 2021 at 12:43 AM Shantur Rathore
>  wrote:
> >
> > Thanks for the detailed response Nir.
> >
> > In my use case, we keep creating VMs from templates and deleting them so
> we need the VMs to be created quickly and cloning it will use a lot of time
> and storage.
>
> That's a good reason to use a template.
>
> If your vm is temporary and you like to drop the data written while
> the vm is running, you
> could use a temporary disk based on the template. This is called a
> "transient disk" in vdsm.
>
> Arik, maybe you remember how transient disks are used in engine?
> Do we have an API to run a VM once, dropping the changes to the disk
> done while the VM was running?
>
> > I will try to add the config and try again tomorrow. Also I like the
> Managed Block storage idea, I had read about it in the past and used it
> with Ceph.
> >
> > Just to understand it better, is this issue only on iSCSI based storage?
>
> Yes, on file based storage a snapshot is a file, and it grows as
> needed.  On block based
> storage, a snapshot is a logical volume, and oVirt needs to extend it
> when needed.
>
> Nir
>
> > Thanks again.
> >
> > Regards
> > Shantur
> >
> > On Tue, Aug 10, 2021 at 9:26 PM Nir Soffer  wrote:
> >>
> >> On Tue, Aug 10, 2021 at 4:24 PM Shantur Rathore
> >>  wrote:
> >> >
> >> > Hi all,
> >> >
> >> > I have a setup as detailed below
> >> >
> >> > - iSCSI Storage Domain
> >> > - Template with Thin QCOW2 disk
> >> > - Multiple VMs from Template with Thin disk
> >>
> >> Note that a single template disk used by many vms can become a
> performance
> >> bottleneck, and is a single point of failure. Cloning the template when
> creating
> >> vms avoids such issues.
> >>
> >> > oVirt Node 4.4.4
> >>
> >> 4.4.4 is old, you should upgrade to 4.4.7.
> >>
> > When the VMs boot up they download some data, and that leads to an
> increase in volume size.
> >> > I see that every few seconds the VM gets paused with
> >> >
> >> > "VM X has been paused due to no Storage space error."
> >> >
> >> >  and then after few seconds
> >> >
> >> > "VM X has recovered from paused back to up"
> >>
> >> This is normal operation when a vm writes too quickly and oVirt cannot
> >> extend the disk quickly enough. To mitigate this, you can increase the
> >> volume chunk size.
> >>
> >> Create this configuration drop-in file:
> >>
> >> # cat /etc/vdsm/vdsm.conf.d/99-local.conf
> >> [irs]
> >> volume_utilization_percent = 25
> >> volume_utilization_chunk_mb = 2048
> >>
> >> And restart vdsm.
> >>
> >> With this setting, when free space in a disk is 1.5g, the disk will
> >> be extended by 2g. With the default setting, when free space is
> >> 0.5g the disk was extended by 1g.
> >>
> >> If this does not eliminate the pauses, try a larger chunk size
> >> like 4096.
> >>
> >> > Sometimes, after many pauses and recoveries, the VM dies with
> >> >
> >> > "VM X is down with error. Exit message: Lost connection with qemu
> process."
> >>
> >> This means qemu has crashed. You can find more info in the vm log at:
> >> /var/log/libvirt/qemu/vm-name.log
> >>
> >> We know about bugs in qemu that cause such crashes when vm disk is
> >> extended. I think the latest bug was fixed in 4.4.6, so upgrading to
> 4.4.7
> >> will fix this issue.
> >>
> >> Even with these settings, if you have a very bursty io in the vm, it may
> >> become paused. The only way to completely avoid these pauses is to
> >> use a preallocated disk, or use file storage (e.g. NFS). Preallocated
> disk
> >> can be thin provisioned on the server side so it does not mean you need
> >> more storage, but you will not be able to use shared templates in the
> way
> >> you use th

[ovirt-users] Re: Sparse VMs from Templates - Storage issues

2021-08-10 Thread Shantur Rathore
Thanks for the detailed response Nir.

In my use case, we keep creating VMs from templates and deleting them so
we need the VMs to be created quickly and cloning it will use a lot of time
and storage.
I will try to add the config and try again tomorrow. Also I like the
Managed Block storage idea, I had read about it in the past and used it
with Ceph.
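Roughly what I plan to apply, per the instructions quoted below (assuming the
standard vdsmd service; with these values the disk is extended once its free
space drops below 1.5g):

mkdir -p /etc/vdsm/vdsm.conf.d
cat > /etc/vdsm/vdsm.conf.d/99-local.conf <<'EOF'
[irs]
volume_utilization_percent = 25
volume_utilization_chunk_mb = 2048
EOF
systemctl restart vdsmd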

Just to understand it better, is this issue only on iSCSI based storage?

Thanks again.

Regards
Shantur

On Tue, Aug 10, 2021 at 9:26 PM Nir Soffer  wrote:

> On Tue, Aug 10, 2021 at 4:24 PM Shantur Rathore
>  wrote:
> >
> > Hi all,
> >
> > I have a setup as detailed below
> >
> > - iSCSI Storage Domain
> > - Template with Thin QCOW2 disk
> > - Multiple VMs from Template with Thin disk
>
> Note that a single template disk used by many vms can become a performance
> bottleneck, and is a single point of failure. Cloning the template when
> creating
> vms avoids such issues.
>
> > oVirt Node 4.4.4
>
> 4.4.4 is old, you should upgrade to 4.4.7.
>
> > When the VMs boot up they download some data, and that leads to an
> increase in volume size.
> > I see that every few seconds the VM gets paused with
> >
> > "VM X has been paused due to no Storage space error."
> >
> >  and then after few seconds
> >
> > "VM X has recovered from paused back to up"
>
> This is normal operation when a vm writes too quickly and oVirt cannot
> extend the disk quickly enough. To mitigate this, you can increase the
> volume chunk size.
>
> Create this configuration drop-in file:
>
> # cat /etc/vdsm/vdsm.conf.d/99-local.conf
> [irs]
> volume_utilization_percent = 25
> volume_utilization_chunk_mb = 2048
>
> And restart vdsm.
>
> With this setting, when free space in a disk is 1.5g, the disk will
> be extended by 2g. With the default setting, when free space is
> 0.5g the disk was extended by 1g.
>
> If this does not eliminate the pauses, try a larger chunk size
> like 4096.
>
> > Sometimes, after many pauses and recoveries, the VM dies with
> >
> > "VM X is down with error. Exit message: Lost connection with qemu
> process."
>
> This means qemu has crashed. You can find more info in the vm log at:
> /var/log/libvirt/qemu/vm-name.log
>
> We know about bugs in qemu that cause such crashes when vm disk is
> extended. I think the latest bug was fixed in 4.4.6, so upgrading to 4.4.7
> will fix this issue.
>
> Even with these settings, if you have a very bursty io in the vm, it may
> become paused. The only way to completely avoid these pauses is to
> use a preallocated disk, or use file storage (e.g. NFS). Preallocated disk
> can be thin provisioned on the server side so it does not mean you need
> more storage, but you will not be able to use shared templates in the way
> you use them now. You can create vm from template, but the template
> is cloned to the new vm.
>
> Another option (still tech preview) is Managed Block Storage (Cinder
> based storage). If your storage server is supported by Cinder, we can
> managed it using cinderlib. In this setup every disk is a LUN, which may
> be thin provisioned on the storage server. This can also offload storage
> operations to the server, like cloning disks, which may be much faster and
> more efficient.
>
> Nir
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4RRRKKOFSNWYMQWAMVR5VJ2WA2BBG2F5/


[ovirt-users] Sparse VMs from Templates - Storage issues

2021-08-10 Thread Shantur Rathore
Hi all,

I have a setup as detailed below

- iSCSI Storage Domain
- Template with Thin QCOW2 disk
- Multiple VMs from Template with Thin disk
- oVirt Node 4.4.4

When the VMs boot up they download some data, and that leads to an
increase in volume size.
I see that every few seconds the VM gets paused with

"VM X has been paused due to no Storage space error."

 and then after few seconds

"VM X has recovered from paused back to up"

Sometimes, after many pauses and recoveries, the VM dies with

"VM X is down with error. Exit message: Lost connection with qemu process."

and I have to restart the VMs.

My questions:

1. How to work around this dying VM?
2. Is there a way to use sparse disks without the VM being paused again and
again?


Thanks in advance.

Shantur
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RDDDFIJ6L2OEVBGUIDQLVCBJFLFRTEYO/


[ovirt-users] Re: Resize iSCSI LUN and Storage Domain

2021-08-10 Thread Shantur Rathore
Thanks guys,

@Nir
I don't get the option to resize the PV. I am not sure where to look to see
why it can't see the size change.
I have tried to manage the domain in and out of maintenance mode, and added
another LUN and removed it, but it didn't help.

Any ideas?
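In case it helps, the manual equivalent of what the dialog automates (per
Nir's explanation quoted below) would be roughly the following; the map name
is a placeholder, and doing this by hand bypasses oVirt's checks:

iscsiadm -m session --rescan                 # pick up the new LUN size
multipathd resize map 36001405XXXXXXXXXXXX   # refresh the multipath map
pvresize /dev/mapper/36001405XXXXXXXXXXXX    # grow the PV, and with it the VG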

On Tue, Aug 10, 2021 at 1:53 PM Jason Alexander Hazen Valliant-Saunders <
haze...@altignus.com> wrote:

> When increasing the f.s. size on any LUN, you need to also extend the file
> system.
>
> You will not see the extra space until you resize your file system and
> extend it to accommodate the new space.
>
> On Mon., Aug. 9, 2021, 2:36 p.m. Nir Soffer,  wrote:
>
>> On Mon, Aug 9, 2021 at 7:32 PM Shantur Rathore
>>  wrote:
>> >
>> > Hi all,
>> >
>> > I have an iSCSI Storage Domain for VMs and need to increase storage.
>> > To do this, I increased the size of LUN on iscsi storage server and
>> tried to refresh the size of Storage Domain as per
>> >
>> >
>> https://www.ovirt.org/documentation/administration_guide/index.html#Increasing_iSCSI_or_FCP_Storage
>> >
>> > "Refreshing the LUN Size" section
>> >
>> > I cannot see the "Additional Storage Size" column or any other option
>> to refresh the LUN size.
>>
>> Since you have a storage domain using this LUN, you need to follow the
>> instructions
>> for "Increasing an Existing iSCSI or FCP Storage Domain".
>>
>> The "Refreshing the LUN Size" section is for increating a LUN which is
>> not part of
>> the storage domain.
>>
>> When you open the "Manage domain" dialog, oVirt performs SCSI rescan
>> which should discover the new size of the LUN. Then oVirt check if the
>> multipath map matches the size of the LUN, and if not it will resize the
>> map
>> to fit the LUN.
>>
>> Finally it shows a button to resize the PV on top of the multipath device.
>> When you select this button and confirm, oVirt will resize the PV,
>> which will resize
>> the VG.
>>
>> Nir
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JGY3QU6EBRMORU6VQ6WPRSCKMM4QPQEJ/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MZRXUFCEAMXKTU4FLRNMGEBYZP2XQWW5/


[ovirt-users] Resize iSCSI LUN and Storage Domain

2021-08-09 Thread Shantur Rathore
Hi all,

I have an iSCSI Storage Domain for VMs and need to increase storage.
To do this, I increased the size of LUN on iscsi storage server and tried
to refresh the size of Storage Domain as per

https://www.ovirt.org/documentation/administration_guide/index.html#Increasing_iSCSI_or_FCP_Storage

"*Refreshing the LUN Size" section*

I cannot see the "Additional Storage Size" column or any other option to
refresh the LUN size.

What am I missing here?

Kind regards,
Shantur
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F6QTBHDXONSKNVW4PXDBYJ4FUQRYM6PM/


[ovirt-users] Re: Storage Domains for a VM from Template

2021-08-09 Thread Shantur Rathore
Thanks Arik,

I am interested in the Thin approach.
Is it not possible to create it on another storage domain? Even with the API?

I read here http://ovirt.github.io/ovirt-engine-api-model/4.4/#services/vms

*When creating a virtual machine from a template or from a snapshot it is
usually useful to explicitly indicate in what storage domain to create the
disks for the virtual machine. If the virtual machine is created from a
template then this is achieved passing a set of disk_attachment elements
that indicate the mapping:*
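Something like the following is what I had in mind, based on that document
(IDs are placeholders; per Arik's reply quoted below, placing the disk on
another domain effectively means the Clone flow):

POST /ovirt-engine/api/vms?clone=true
<vm>
  <name>myvm</name>
  <cluster><name>mycluster</name></cluster>
  <template><name>mytemplate</name></template>
  <disk_attachments>
    <disk_attachment>
      <disk id="TEMPLATE-DISK-UUID">
        <storage_domains>
          <storage_domain id="TARGET-SD-UUID"/>
        </storage_domains>
      </disk>
    </disk_attachment>
  </disk_attachments>
</vm>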

Thanks,
Shantur

On Mon, Aug 9, 2021 at 11:19 AM Arik Hadas  wrote:

>
>
> On Mon, Aug 9, 2021 at 12:55 PM Shantur Rathore 
> wrote:
>
>> Hi all,
>>
>> I have multiple storage domains in my DC. One, which is backed up, holds the
>> templates; another, which isn't backed up, is the one I want to use for VM
>> instances of the templates.
>>
>> I cannot find an option to create the VM from a template and use a
>> storage domain other than where the template is stored.
>>
>> Is it even possible?
>>
>
> Assuming you're asking about the administration portal, it depends on the
> selected allocation policy (which you can find in the Resource Allocation
> tab of the add-VM dialog):
> - If it is set to 'Clone' then you can select any other storage domain
> that is available in the DC to clone the template's disk to
> - If it is set to 'Thin' then a VM disk must reside on a storage domain
> that contains the template's disk it is based on. You can copy the
> template's disk(s) to another storage domain if you want a VM disk to be
> created there
>
>
>>
>> Thanks,
>> Shantur
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/SKR6URQN7IH5OYI4MEIVEY2QZZSX2XTY/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LQNL5AIQ2QMCLERXSGPQBMFTAMBYEHVT/


[ovirt-users] Storage Domains for a VM from Template

2021-08-09 Thread Shantur Rathore
Hi all,

I have multiple storage domains in my DC. One, which is backed up, holds the
templates; another, which isn't backed up, is the one I want to use for VM
instances of the templates.

I cannot find an option to create the VM from a template and use a storage
domain other than where the template is stored.

Is it even possible?

Thanks,
Shantur
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SKR6URQN7IH5OYI4MEIVEY2QZZSX2XTY/


[ovirt-users] Re: Reset NVRAM

2021-03-15 Thread Shantur Rathore
Thanks Tomáš,

> Actually I hit "Send" too quickly. Are you experiencing the issue after
> VM reboot or even after shutdown? Because NVRAM is kept between reboots
> and is removed only after VM shutdown.


It happens after shutdown and reboot. The VM just goes into a weird state
with a "Guest hasn't initialised the display (yet)" error.
If I create another VM with the same configuration and use the same boot
disk it works fine.

So, if it's not NVRAM, I am not sure what it could be that stays with the VM
but is not a disk.

Thanks,
shantur

On Mon, Mar 15, 2021 at 3:47 PM Tomáš Golembiovský 
wrote:

> On Mon, Mar 15, 2021 at 04:43:20PM +0100, Tomáš Golembiovský wrote:
> > On Mon, Mar 15, 2021 at 01:13:10PM +, Shantur Rathore wrote:
> > > Hi all,
> > >
> > > I am doing some testing with GPU passthrough to VMs and in some cases
> the
> > > OVMF stops booting after a GPU passthrough.
> > > I have figured out that it's related to some state stored in NVRAM by
> OVMF
> > > and as soon as I create another VM with the same disks it starts
> booting up.
> > >
> > > I believe oVirt saves the NVRAM state.
> >
> > No it isn't. Or, better said, it wasn't. Storing of NVRAM is a new
> > feature in 4.4.5 and it will be enabled only for VMs with Secure Boot.
> >
> > So whatever you're experiencing is probably caused by something else.
>
> Actually I hit "Send" too quickly. Are you experiencing the issue after
> VM reboot or even after shutdown? Because NVRAM is kept between reboots
> and is removed only after VM shutdown.
>
> Tomas
>
> >
> > >  Is there a way to clear NVRAM for a VM without deleting it?
> >
> > This is not part of the feature introduced in 4.4.5. But it would make
> > sense to have it if we later extend the NVRAM storing to all VMs with
> > UEFI. So despite what I wrote above it would be worth opening a feature
> > request bug.
> >
> > Tomas
> >
> > >
> > > Regards,
> > > Shantur
> >
> > > ___
> > > Users mailing list -- users@ovirt.org
> > > To unsubscribe send an email to users-le...@ovirt.org
> > > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GJ4AQHJ6SIAT7ML526SSO7IKRTY2BHCQ/
> >
> >
> > --
> > Tomáš Golembiovský 
>
> --
> Tomáš Golembiovský 
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ODXSQZ3EUAK75MA2OPHDIMTOMKNJJKGJ/


[ovirt-users] Reset NVRAM

2021-03-15 Thread Shantur Rathore
Hi all,

I am doing some testing with GPU passthrough to VMs and in some cases the
OVMF stops booting after a GPU passthrough.
I have figured out that it's related to some state stored in NVRAM by OVMF
and as soon as I create another VM with the same disks it starts booting up.

I believe oVirt saves the NVRAM state.
 Is there a way to clear NVRAM for a VM without deleting it?

Regards,
Shantur
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GJ4AQHJ6SIAT7ML526SSO7IKRTY2BHCQ/


[ovirt-users] Re: Cloud-init confusion

2021-03-04 Thread Shantur Rathore
Thanks Liran,

It clears up the confusion.
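For anyone else landing here, the Run Once behaviour can also be driven via
the REST API's start action; a sketch with placeholder values:

POST /ovirt-engine/api/vms/VM-UUID/start
<action>
  <use_cloud_init>true</use_cloud_init>
  <vm>
    <initialization>
      <host_name>myvm.example.com</host_name>
      <user_name>root</user_name>
      <root_password>changeme</root_password>
    </initialization>
  </vm>
</action>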

Regards,
Shantur

On Wed, Mar 3, 2021 at 1:04 PM Liran Rotenberg  wrote:

> Hi,
> Cloud-init is usually used only once to initialize the guest OS
> configuration.
> When using the normal Run, if it's the first time this VM goes up, the
> cloud-init will be included, otherwise it won't as it isn't supposed to be
> re-configured or being configured after the VM already ran.
> However, Run-Once bypasses this safety check and allows it, making the VM
> as a "fresh one".
> I hope it clears things up a bit.
> Liran.
>
> On Wed, Mar 3, 2021 at 12:46 PM Shantur Rathore 
> wrote:
>
>> Hi Users,
>>
>> I am new to oVirt and having some confusion with cloud-init.
>> I have a VM which supports Cloud Init and when I enable Cloud Init under
>> Initial boot settings of VM, the cloud-init config disk isn't attached to
>> the VM.
>> If I use Run-Once and enable Cloud-Init the config disk is attached to
>> the VM and works as expected.
>>
>> Is cloud-init only supposed to work with Run-Once or am I missing
>> something?
>>
>> Thanks in advance.
>> Shantur
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YTYS7GH4HNIGB5RXGR6RJXBJPQHLFBB6/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5NUYJLPPQUPVHFURHJCFNSC5P2QFBFBL/


[ovirt-users] Template backing disks on different storage

2021-03-04 Thread Shantur Rathore
Hi Users,

I am evaluating oVirt coming from a VMware background.
I have a test cluster with 6 machines running oVirt Node providing
compute + gluster services.
There are 2 storage domains

1. Gluster backed
2. iSCSI SAN

I use iSCSI SAN for the templates (backing disks) and would like to use
Gluster storage domain for template instances.
In VMware I could create a linked clone on one storage linked to a template
on another.

Is this possible with oVirt? via GUI or API?

Thanks in advance,
Shantur
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CNYHNSLUJXJQVI4L52PNKJMRY2M6O3TX/


[ovirt-users] Cloud-init confusion

2021-03-03 Thread Shantur Rathore
Hi Users,

I am new to oVirt and having some confusion with cloud-init.
I have a VM which supports Cloud Init and when I enable Cloud Init under
Initial boot settings of VM, the cloud-init config disk isn't attached to
the VM.
If I use Run-Once and enable Cloud-Init the config disk is attached to the
VM and works as expected.

Is cloud-init only supposed to work with Run-Once or am I missing something?

Thanks in advance.
Shantur
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YTYS7GH4HNIGB5RXGR6RJXBJPQHLFBB6/


[ovirt-users] Re: Newer kernel for oVirt Node NG

2021-02-10 Thread Shantur Rathore
Tried it again, it just breaks the boot.

After a lot of firefighting, I found that the elrepo kernel-lt initramfs
doesn't include the `xfs` module, so it fails to mount the LVM volumes, which
use XFS. For comparison, the stock kernel's initramfs does include it:

# lsinitrd /boot/initramfs-4.18.0-240.1.1.el8_3.x86_64.img | grep xfs.ko
-rw-r--r--   1 root root   442760 Aug 11  2020
usr/lib/modules/4.18.0-240.1.1.el8_3.x86_64/kernel/fs/xfs/xfs.ko.xz

I tested kernel-ml and it has the needed xfs module.

After installing the new kernel, I needed to copy over the "options" line
from /boot/loader/entries/ovirt*.conf to the new entry.

Hope this saves a lot of hair pulling for someone.
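For anyone hitting the same thing, this is roughly the check/fix sequence I
would suggest (an untested sketch; the kernel version string is illustrative):

# check whether the new kernel's initramfs actually contains the xfs module
lsinitrd /boot/initramfs-5.4.94-1.el8.elrepo.x86_64.img | grep xfs.ko

# if it is missing (the kernel-lt case), rebuild the initramfs with xfs forced in
dracut --force --add-drivers xfs \
    /boot/initramfs-5.4.94-1.el8.elrepo.x86_64.img 5.4.94-1.el8.elrepo.x86_64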

On Tue, Feb 9, 2021 at 7:23 PM matthew.st...@fujitsu.com <
matthew.st...@fujitsu.com> wrote:

> ‘Node’ images are just minimal CentOS plus all the packages for this
> release, in a sub-DVD sized ISO.  Once it is installed, the operating
> system is still CentOS, and can be patched/modified, including installing
> alternative kernels.
>
>
>
> If it fails, boot back to the original kernel, and remove the package from
> Elrepo, or at worse, re-install.
>
>
>
> *From:* Shantur Rathore 
> *Sent:* Tuesday, February 9, 2021 11:11 AM
> *To:* Stier, Matthew ; users 
> *Subject:* Re: [ovirt-users] Newer kernel for oVirt Node NG
>
>
>
> Thanks Matthew,
>
>
>
> Are you sure about this?
>
> I thought oVirt Node NG images are immutable
>
>
>
> On Tue, Feb 9, 2021 at 3:42 PM matthew.st...@fujitsu.com <
> matthew.st...@fujitsu.com> wrote:
>
> Elrepo.org
>
>
>
> *From:* Shantur Rathore 
> *Sent:* Tuesday, February 9, 2021 9:28 AM
> *To:* users 
> *Subject:* [ovirt-users] Newer kernel for oVirt Node NG
>
>
>
> Hi oVirt Users,
>
>
>
> I am trying to test some vfio related stuff on oVirt Node NG 4.4.4 based
> host.
>
> What would be the easiest way to have a 5.x kernel on this node?
>
> I don't mind compiling if it needs to.
>
>
>
> Cheers,
>
> Shantur
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/42RWK45VHFNOQY3JQJRNMJKMAERZNYEP/


[ovirt-users] Re: Newer kernel for oVirt Node NG

2021-02-09 Thread Shantur Rathore
Just gave it a try and it breaks the boot, as it's unable to find the root filesystem.

On Tue, Feb 9, 2021 at 5:11 PM Shantur Rathore  wrote:

> Thanks Matthew,
>
> Are you sure about this?
> I thought oVirt Node NG images are immutable
>
> On Tue, Feb 9, 2021 at 3:42 PM matthew.st...@fujitsu.com <
> matthew.st...@fujitsu.com> wrote:
>
>> Elrepo.org
>>
>>
>>
>> *From:* Shantur Rathore 
>> *Sent:* Tuesday, February 9, 2021 9:28 AM
>> *To:* users 
>> *Subject:* [ovirt-users] Newer kernel for oVirt Node NG
>>
>>
>>
>> Hi oVirt Users,
>>
>>
>>
>> I am trying to test some vfio related stuff on oVirt Node NG 4.4.4 based
>> host.
>>
>> What would be the easiest way to have a 5.x kernel on this node?
>>
>> I don't mind compiling if it needs to.
>>
>>
>>
>> Cheers,
>>
>> Shantur
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DAAFCC2XDA5UGPLSVALLGRQ2RXNCXXSS/


[ovirt-users] Re: Newer kernel for oVirt Node NG

2021-02-09 Thread Shantur Rathore
Thanks Matthew,

Are you sure about this?
I thought oVirt Node NG images are immutable

On Tue, Feb 9, 2021 at 3:42 PM matthew.st...@fujitsu.com <
matthew.st...@fujitsu.com> wrote:

> Elrepo.org
>
>
>
> *From:* Shantur Rathore 
> *Sent:* Tuesday, February 9, 2021 9:28 AM
> *To:* users 
> *Subject:* [ovirt-users] Newer kernel for oVirt Node NG
>
>
>
> Hi oVirt Users,
>
>
>
> I am trying to test some vfio related stuff on oVirt Node NG 4.4.4 based
> host.
>
> What would be the easiest way to have a 5.x kernel on this node?
>
> I don't mind compiling if it needs to.
>
>
>
> Cheers,
>
> Shantur
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PD6GPBELZ23UTYEUOVCPAOTH4TPT3WZB/


[ovirt-users] Newer kernel for oVirt Node NG

2021-02-09 Thread Shantur Rathore
Hi oVirt Users,

I am trying to test some vfio related stuff on oVirt Node NG 4.4.4 based
host.
What would be the easiest way to have a 5.x kernel on this node?
I don't mind compiling if it needs to.

Cheers,
Shantur
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WGWGRMECEV7X2QIK2TRE5RZLRLPFQCNX/


[ovirt-users] Re: Changes between Node NG 4.4.3 and 4.4.4

2021-02-03 Thread Shantur Rathore
A bit more on this.

I figured out that 4.4.3 is based on CentOS 8.2 and 4.4.4 is based on
CentOS 8.3.
Looks like the newer kernel has the
https://bugzilla.kernel.org/show_bug.cgi?id=207489 issue.

Not sure if it's updated in the latest 4.4.5-Pre release.

Regards
Shantur

On Wed, Feb 3, 2021 at 12:17 PM Shantur Rathore  wrote:

> Hi,
>
> I have a node 4.4.3 install and it works perfectly for PCI passthrough.
> After I upgraded to Node NG 4.4.4 the PCI passthrough leads to server
> resets. There are no logs in the kernel messages just before reset apart
> from vfio-pci enabling the device.
> So, I assume there is some change related to the kernel or vfio-pci driver
> in the newer 4.4.4 version.
>
> How / Where can I see the differences between Node NG releases?
>
> Thanks,
> Shantur
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZSV3T4ONNKG5LTGY34XB65JKN4OQK56R/


[ovirt-users] Changes between Node NG 4.4.3 and 4.4.4

2021-02-03 Thread Shantur Rathore
Hi,

I have a Node 4.4.3 install and it works perfectly for PCI passthrough.
After I upgraded to Node NG 4.4.4, PCI passthrough leads to server resets.
There are no kernel messages just before the reset, apart from vfio-pci
enabling the device.
So I assume there is some change related to the kernel or the vfio-pci driver
in the newer 4.4.4 version.

How / Where can I see the differences between Node NG releases?

Thanks,
Shantur
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XNOQPOWUBAYTXB3L6QOSGBG43BZ5PCR5/


[ovirt-users] NodeNG persistence for custom vdsm hooks and firmware

2021-02-03 Thread Shantur Rathore
Hi all,

I have Node NG 4.4.4 installed and want to know the best way to persist
custom VDSM hooks and some firmware binaries across updates.
I tried to update to 4.4.5-pre and lost my hooks and firmware.

Thanks,
Shantur
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WPY5JP4BIUWH5QAR7M7L7MLY4XI2VMWH/


[ovirt-users] Re: vGPU on ovirt 4.3

2021-01-26 Thread Shantur Rathore
>
> I've been able to trick consumer GPUs into working with KVM on machines
> that I also use for oVirt, but oVirt itself is constructing the XML files
> for KVM on-the-fly, so you'd have to fiddle with the code that builds them:
> I haven't even found that yet.


You could use the qemu_cmdline VDSM hook to pass custom qemu command-line
params to your VM, or you could write your own hook to change the libvirt XML
before the VM is powered on.
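
As a rough illustration (an untested sketch; the file name and the XML edit
are placeholders, while read_domxml/write_domxml are the VDSM hooking-module
API), a custom before_vm_start hook is just an executable dropped into
/usr/libexec/vdsm/hooks/before_vm_start/ on the host:

#!/usr/bin/python3
# Sketch of a VDSM before_vm_start hook. VDSM hands the hook the parsed
# libvirt domain XML and reads back whatever the hook writes out.
import hooking

domxml = hooking.read_domxml()                       # DOM of the domain XML
devices = domxml.getElementsByTagName('devices')[0]
# ...create or modify elements under <devices> here (illustrative step)...
hooking.write_domxml(domxml)                         # return the XML to VDSM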

On Mon, Jan 25, 2021 at 7:03 PM Alex McWhirter  wrote:

> IIRC oVirt 4.3 should have the basic hooks in place for mdev
> passthrough. For nvidia this means you need the vgpu drivers and a
> license server. These licenses have a recurring cost.
>
> AMD's solution uses SR-IOV, and requires a custom kernel module that is
> not well tested, YMMV.
>
> You can also passthrough entire cards in ovirt without any drivers,
> granted since the amount of GPU's you can stuff into a server is limited
> this is probably not ideal. Nvidia blocks this on consumer level cards.
>
> Intel also has an mdev solution that is built into mesa / the kernel
> already, we have used it with VCA cards in the past, but perhaps the new
> Intel GPU's support it as well? Intel is the only option that will allow
> you to feed the framebuffer data back into spice so you can use the
> ovirt console. All other options require you to create a remote session
> to the guest directly.
>
> On 2021-01-25 07:49, kim.karga...@noroff.no wrote:
> > Hi,
> >
> > We are looking at getting some GPU's for our servers and to use vGPU
> > passthrough so that our students can do some video renders on the
> > VM's. Does anyone have good experience with the Nvidia Quadro RTX6000
> > or RTX8000 and ovirt 4.3?
> >
> > Thanks.
> >
> > Kim
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XF4LZOEAC2YTRH5LTL55YKA4BPRZLVZH/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PY4ITOIWACUWHYLAVGOMMWIXUZYNDH5L/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HLC5IMZUXQAKMKVZHGJGTL7TPU5EBLOM/


[ovirt-users] Re: oVirt initramfs regeneration

2021-01-26 Thread Shantur Rathore
Have a look at this kickstart file for installing oVirt Node
https://gist.github.com/jonathanelbailey/b2ba868d39491f4e6a7497010b0f6b18
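
If you just need to force a rebuild on an already-installed node, the stock
dracut invocation should do it (a sketch; adjust the version string if you
are not rebuilding for the running kernel):

# regenerate the initramfs for the currently running kernel
dracut --force /boot/initramfs-$(uname -r).img $(uname -r)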

On Mon, Jan 25, 2021 at 11:49 PM Robert Tongue 
wrote:

> Is it possible to force an initramfs regeneration in oVirt Node 4.4.4? I
> am doing some advanced partitioning and cannot seem to figure out how to
> properly do that if it's even possible. Thanks.
>
> -phunyguy
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7HE7Z6XYJP566PV3BSW225U4UCYE5FW3/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DYFDSGXU5LEVDVTHMCVADLQLQZIAZWPQ/


[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Shantur Rathore
Thanks Matthias,

Ceph iSCSI is indeed supported, but it introduces the overhead of running
LIO gateways for iSCSI.
CephFS works as a POSIX domain; if we could get a POSIX domain to work as a
master domain, we could run a self-hosted engine on it.
Ceph RBD (rbd-nbd, hopefully, in future) could be used with cinderlib, and we
would have a self-hosted infrastructure on Ceph.

I am hopeful that when cinderlib integration is mature enough to be out of
Tech preview, there will be a way to migrate old cinder disks to new
cinderlib.

PS: About your large deployment, go OpenStack or OpenNebula if you like.
Proxmox clustering isn't great: it doesn't have a single controller and
relies on corosync-based clustering.

Cheers,
Shantur

On Fri, Jan 22, 2021 at 10:36 AM Matthias Leopold <
matthias.leop...@meduniwien.ac.at> wrote:

> I can confirm that Ceph iSCSI can be used for master domain, we are
> using it together with VM disks on Ceph via Cinder ("old style"). Recent
> developments concerning Ceph in oVirt are disappointing for me, I think
> I will have to look elsewhere (OpenStack, Proxmox) for our rather big
> deployment. At least Nir Soffer's explanation for the move to cinderlib
> in another thread (dated 20210121) shed some light on the background of
> this decision.
>
> Matthias
>
> Am 19.01.21 um 12:57 schrieb Gianluca Cecchi:
> > On Tue, Jan 19, 2021 at 12:20 PM Benny Zlotnik  > > wrote:
> >
> >  >Thanks for pointing out the requirement for Master domain. In
> > theory, will I be able to satisfy the requirement with another iSCSI
> > or >maybe Ceph iSCSI as master domain?
> > It should work as ovirt sees it as a regular domain, cephFS will
> > probably work too
> >
> >
> > Ceph iSCSI gateway should be supported since 4.1, so I think I can use
> > it for configuring the master domain and still leveraging the same
> > overall storage environment provided by Ceph, correct?
> >
> > https://bugzilla.redhat.com/show_bug.cgi?id=1527061
> >
> > Gianluca
> >
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3ASTNEGXSV7I4NIOG5RVZKDWIPQCEPMU/
> >
>
> --
> Matthias Leopold
> IT Systems & Communications
> Medizinische Universität Wien
> Spitalgasse 23 / BT 88 / Ebene 00
> A-1090 Wien
> Tel: +43 1 40160-21241
> Fax: +43 1 40160-921200
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JNLF5QBPVMFNQXCD5J7RCPJKIZ4WOJ76/


[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Shantur Rathore
Thanks Konstantin.

I do get that oVirt needs a master domain.
I just want to use a POSIX domain as the master domain. I can see there is no
option in the UI for that, but I don't understand whether it is incompatible
or just not implemented.
If it is not implemented, then it might be possible to create one with manual
steps.

Thanks

On Fri, Jan 22, 2021 at 10:21 AM Konstantin Shalygin  wrote:

> Shantur, this is oVirt. You always need a master domain. Even some 1GB NFS
> export on the manager side is enough.
>
>
> k
>
> On 22 Jan 2021, at 12:02, Shantur Rathore  wrote:
>
> Just a bump. Any ideas anyone?
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2S5MPA3CWH6YTPAIWZE5GLCBIP7ZQLJ5/


[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Shantur Rathore
Just a bump. Any ideas anyone?

On Wed, Jan 20, 2021 at 4:13 PM Shantur Rathore  wrote:

> So,
> after a quick dive into source code, I cannot see any mention of posix
> storage in hosted-engine code.
> I am not sure if there is a manual way of moving the locally created
> hosted-engine vm to POSIX storage and create a storage domain using API as
> it does for other types of domains while installing self-hosted engine.
>
> Regards,
> Shantur
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LZNM3TMKMJPFR22SKIUBN32EK4FS5GCH/


[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

2021-01-21 Thread Shantur Rathore
I would love
https://github.com/openstack/cinderlib/commit/a09a7e12fe685d747ed390a59cd42d0acd1399e4
to come back.



On Thu, Jan 21, 2021 at 2:27 PM Gorka Eguileor  wrote:

> On 21/01, Nir Soffer wrote:
> > On Thu, Jan 21, 2021 at 8:50 AM Konstantin Shalygin 
> wrote:
> > >
> > > I understood, more than the code that works with qemu already exists
> for openstack integration
> >
> > We have code in vdsm and engine to support librbd, but using it in
> > cinderlib-based volumes is not a trivial change.
> >
> > On the engine side, this means changing the flow: instead of attaching
> > a device to a host, the engine would configure the XML with a network disk,
> > using the rbd url, the same way the old cinder support did.
> >
> > To make this work, the engine needs to configure the ceph authentication
> > secrets on all hosts in the DC. We have code to do this for the old cinder
> > storage domain, but it is not used for the new cinderlib setup. I'm not sure
> > how easy it is to use the same mechanism for cinderlib.
>
> Hi,
>
> All the data is in the connection info (including the keyring), so it
> should be possible to implement.
>
> >
> > Generally, we don't want to spend time on special code for ceph, and prefer
> > to outsource this to os-brick and the kernel, so we have a uniform way to
> > use volumes. But if the special code gives important benefits, we can
> > consider it.
> >
>
> I think that's reasonable. Having less code to worry about and
> making the project's code base more readable and maintainable is a
> considerable benefit that should not be underestimated.
>
>
> > I think openshift virtualization is using the same solution (kernel-based
> > rbd) for ceph. An important requirement for us is having an easy way to
> > migrate vms from ovirt to openshift virtualization. Using the same ceph
> > configuration can make this migration easier.
> >
>
> The Ceph CSI plugin seems to have the possibility of using krbd and
> rbd-nbd [1], but that's something we can also achieve in oVirt by adding
> back the rbd-nbd support in cinderlib without changes to oVirt.
>
> Cheers,
> Gorka.
>
> [1]:
> https://github.com/ceph/ceph-csi/blob/04644c1d5896b493d6aaf9ab66f2302cf67a2ee3/internal/rbd/rbd_attach.go#L35-L41
>
> > I'm also not sure about the future of librbd support in qemu. I know that
> > qemu folks also want to get rid of such code. For example, libgfapi
> > (the Gluster native driver) is not maintained and likely to be removed soon.
> >
> > If this feature is important to you, please open RFE for this, and
> explain why
> > it is needed.
> >
> > We can consider it for future 4.4.z release.
> >
> > Adding some storage and qemu folks to get more info on this.
> >
> > Nir
> >
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/SUXZT47HWHALTYOUF67ALJTMK653SNBO/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3WKB4JRCCBG7RMS7YVKPR6MKASZYM5GG/


[ovirt-users] Re: Gluster Hyperconverged fails with single disk partitioned

2021-01-21 Thread Shantur Rathore
I have found a workaround to this.
The gluster.infra ansible role can exclude devices from, and reset, the lvm
filters when the "gluster_infra_lvm" variable is defined.
https://github.com/gluster/gluster-ansible-infra/blob/2522d3bd722be86139c57253a86336b2fec33964/roles/backend_setup/tasks/main.yml#L18

1. Go through the gluster deployment wizard until the configuration display
step, just before deploy.
2. Click Edit and scroll down to the "vars:" section.
3. Just under the "vars:" section, add "gluster_infra_lvm: SOMETHING" and
adjust the indentation to match the other variables (see the sketch below).
4. Don't forget to click Save at the top before clicking Deploy.

This will reset the filter and set it back again with correct devices.
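
For illustration, the edited section would look roughly like this (the value
is arbitrary; per the role linked above, it only checks that the variable is
defined):

vars:
  gluster_infra_lvm: anything
  # ...the existing variables stay as they are...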
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H5IDJVSMEPFH2DQ7BORCPALZDLL3LGQP/


[ovirt-users] Re: Gluster Hyperconverged fails with single disk partitioned

2021-01-21 Thread Shantur Rathore
Thanks Derek,

I don't think that is the case, as per the documentation:

https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
https://blogs.ovirt.org/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/



On Thu, Jan 21, 2021 at 12:17 AM Derek Atkins  wrote:

> Ovirt is expecting an LVM volume, not a raw partition.
>
> -derek
> Sent using my mobile device. Please excuse any typos.
>
> On January 20, 2021 7:13:45 PM Shantur Rathore 
> wrote:
>
>> Hi,
>>
>> I am trying to setup a single host Self-Hosted hyperconverged setup with
>> GlusterFS.
>> I have a custom partitioning where I provide 100G for oVirt and its
>> partitions and rest 800G to a physical partition (/dev/sda4).
>>
>> When I try to create gluster deployment with the wizard, it fails
>>
>> TASK [gluster.infra/roles/backend_setup : Create volume groups]
>> 
>> failed: [ovirt-macpro-16.lab.ced.bskyb.com] (item={'key':
>> 'gluster_vg_sda4', 'value': [{'vgname': 'gluster_vg_sda4', 'pvname':
>> '/dev/sda4'}]}) => {"ansible_loop_var": "item", "changed": false, "err": "
>>  Device /dev/sda4 excluded by a filter.\n", "item": {"key":
>> "gluster_vg_sda4", "value": [{"pvname": "/dev/sda4", "vgname":
>> "gluster_vg_sda4"}]}, "msg": "Creating physical volume '/dev/sda4' failed",
>> "rc": 5}
>>
>> I checked and /etc/lvm/lvm.conf filter doesn't allow /dev/sda4. It only
>> allows PV for onn VG.
>> Once I manually allow /dev/sda4 to lvm filter, it works fine and gluster
>> deployment completes.
>>
>> Fdisk :
>>
>> # fdisk -l /dev/sda
>> Disk /dev/sda: 931.9 GiB, 1000555581440 bytes, 1954210120 sectors
>> Units: sectors of 1 * 512 = 512 bytes
>> Sector size (logical/physical): 512 bytes / 4096 bytes
>> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
>> Disklabel type: gpt
>> Disk identifier: FE209000-85B5-489A-8A86-4CF0C91B2E7D
>>
>> Device StartEndSectors   Size Type
>> /dev/sda1   204812308471228800   600M EFI System
>> /dev/sda2123084833279992097152 1G Linux filesystem
>> /dev/sda33328000  213043199  209715200   100G Linux LVM
>> /dev/sda4  213043200 1954209791 1741166592 830.3G Linux filesystem
>>
>> LVS
>>
>> # lvs
>>   LV                                 VG  Attr       LSize  Pool  Origin                           Data%  Meta%  Move Log Cpy%Sync Convert
>>   home                               onn Vwi-aotz-- 10.00g pool0                                   0.11
>>   ovirt-node-ng-4.4.4-0.20201221.0   onn Vwi---tz-k 10.00g pool0 root
>>   ovirt-node-ng-4.4.4-0.20201221.0+1 onn Vwi-aotz-- 10.00g pool0 ovirt-node-ng-4.4.4-0.20201221.0 25.26
>>   pool0                              onn twi-aotz-- 95.89g                                          2.95  14.39
>>   root                               onn Vri---tz-k 10.00g pool0
>>   swap                               onn -wi-ao      4.00g
>>   tmp                                onn Vwi-aotz-- 10.00g pool0                                    0.12
>>   var                                onn Vwi-aotz-- 20.00g pool0                                    0.92
>>   var_crash                          onn Vwi-aotz-- 10.00g pool0                                    0.11
>>   var_log                            onn Vwi-aotz-- 10.00g pool0                                    0.13
>>   var_log_audit                      onn Vwi-aotz--  4.00g pool0                                    0.27
>>
>>
>>
>> # grep filter /etc/lvm/lvm.conf
>> filter =
>> ["a|^/dev/disk/by-id/lvm-pv-uuid-QrvErF-eaS9-PxbI-wCBV-3OxJ-V600-NG7raZ$|",
>> "r|.*|"]
>>
>> Am I doing something which oVirt isn't expecting?
>> Is there anyway to provide tell gluster deployment to add it to lvm
>> config.
>>
>> Thanks,
>> Shantur
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BP7BQWG3O7IFRLU4W6ZNV4J6PHR4DUZF/
>>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GDLGYDWVWQAMLU7R2SIKJ44S3N3CLI4J/


[ovirt-users] Gluster Hyperconverged fails with single disk partitioned

2021-01-20 Thread Shantur Rathore
Hi,

I am trying to set up a single-host self-hosted hyperconverged setup with
GlusterFS.
I have custom partitioning where I give 100G to oVirt and its partitions and
the remaining 800G to a physical partition (/dev/sda4).

When I try to create gluster deployment with the wizard, it fails

TASK [gluster.infra/roles/backend_setup : Create volume groups]

failed: [ovirt-macpro-16.lab.ced.bskyb.com] (item={'key':
'gluster_vg_sda4', 'value': [{'vgname': 'gluster_vg_sda4', 'pvname':
'/dev/sda4'}]}) => {"ansible_loop_var": "item", "changed": false, "err": "
 Device /dev/sda4 excluded by a filter.\n", "item": {"key":
"gluster_vg_sda4", "value": [{"pvname": "/dev/sda4", "vgname":
"gluster_vg_sda4"}]}, "msg": "Creating physical volume '/dev/sda4' failed",
"rc": 5}

I checked, and the /etc/lvm/lvm.conf filter doesn't allow /dev/sda4; it only
allows the PV for the onn VG.
Once I manually add /dev/sda4 to the lvm filter, it works fine and the
gluster deployment completes.

Fdisk :

# fdisk -l /dev/sda
Disk /dev/sda: 931.9 GiB, 1000555581440 bytes, 1954210120 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: FE209000-85B5-489A-8A86-4CF0C91B2E7D

Device StartEndSectors   Size Type
/dev/sda1   204812308471228800   600M EFI System
/dev/sda2123084833279992097152 1G Linux filesystem
/dev/sda33328000  213043199  209715200   100G Linux LVM
/dev/sda4  213043200 1954209791 1741166592 830.3G Linux filesystem

LVS

# lvs
  LV                                 VG  Attr       LSize  Pool  Origin                           Data%  Meta%  Move Log Cpy%Sync Convert
  home                               onn Vwi-aotz-- 10.00g pool0                                   0.11
  ovirt-node-ng-4.4.4-0.20201221.0   onn Vwi---tz-k 10.00g pool0 root
  ovirt-node-ng-4.4.4-0.20201221.0+1 onn Vwi-aotz-- 10.00g pool0 ovirt-node-ng-4.4.4-0.20201221.0 25.26
  pool0                              onn twi-aotz-- 95.89g                                          2.95  14.39
  root                               onn Vri---tz-k 10.00g pool0
  swap                               onn -wi-ao      4.00g
  tmp                                onn Vwi-aotz-- 10.00g pool0                                    0.12
  var                                onn Vwi-aotz-- 20.00g pool0                                    0.92
  var_crash                          onn Vwi-aotz-- 10.00g pool0                                    0.11
  var_log                            onn Vwi-aotz-- 10.00g pool0                                    0.13
  var_log_audit                      onn Vwi-aotz--  4.00g pool0                                    0.27



# grep filter /etc/lvm/lvm.conf
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-QrvErF-eaS9-PxbI-wCBV-3OxJ-V600-NG7raZ$|", "r|.*|"]

Am I doing something that oVirt isn't expecting?
Is there any way to tell the gluster deployment to add the device to the lvm
config?

Thanks,
Shantur
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BP7BQWG3O7IFRLU4W6ZNV4J6PHR4DUZF/


[ovirt-users] Mutually exclusive VMs

2021-01-20 Thread Shantur Rathore
Hi all,

I am trying to figure out if there is a way to force oVirt to schedule VMs on
different hosts.
So if I am cloning 6 VMs from a template, I want oVirt to schedule them on
different hosts: no two of the VMs may be scheduled on the same host.
I know that there is pin-to-host functionality, but I don't want to manually
pin them to hosts.

Thanks
Shantur
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FZ7N7VRTI6BMHX53A3M3ST6U6P7DXFGA/


[ovirt-users] Re: Managed Block Storage and more

2021-01-20 Thread Shantur Rathore
So, after a quick dive into the source code, I cannot see any mention of
POSIX storage in the hosted-engine code.
I am not sure if there is a manual way of moving the locally created
hosted-engine VM to POSIX storage and creating a storage domain via the API,
as the installer does for other domain types while deploying the self-hosted
engine.

Regards,
Shantur
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MXFYKVBM2WDGOMDVPDV3C6PGLJQ74AV6/


[ovirt-users] Re: Managed Block Storage and more

2021-01-20 Thread Shantur Rathore
>
> It should work as ovirt sees it as a regular domain, cephFS will
> probably work too


Just tried to set up Ceph hyperconverged:

1. Installed oVirt NG 4.4.4 on a machine ( partitioned to leave space for
Ceph )
2. Installed CephAdm : https://docs.ceph.com/en/latest/cephadm/install/
3. Enabled EPEL and other required repos.
4. Bootstrapped ceph cluster
5. Created LV on the partitioned free space
6. Added OSD to ceph cluster
7. Added CephFS
8. Set min_size and size to 1 for the osd pools to make it work with a single OSD (commands sketched below).
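
For step 8, something along these lines should do it (a sketch; size 1 is
only sane for a single-OSD test setup, and recent Ceph releases require
explicitly allowing single-replica pools first):

# allow pools with a single replica (needed on Octopus and later)
ceph config set global mon_allow_pool_size_one true

# set every pool to tolerate a single replica
for p in $(ceph osd pool ls); do
    ceph osd pool set "$p" size 1 --yes-i-really-mean-it
    ceph osd pool set "$p" min_size 1
done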

All ready to deploy the self-hosted engine from Cockpit:

1. Started Self-Hosted engine deployment (not Hyperconverged)
2. Enter the details to Prepare-VM.
3. Prepare-VM successful.
4. Feeling excited, get the cephfs mount details ready.
5. Storage screen - There is no option to use POSIX storage for
Self-Hosted. Bummer.

Is there any way to work around this?
I am able to add this to another oVirt Engine.

[image: Screenshot 2021-01-20 at 12.19.55.png]

Thanks,
Shantur

On Tue, Jan 19, 2021 at 11:16 AM Benny Zlotnik  wrote:

> >Thanks for pointing out the requirement for Master domain. In theory,
> will I be able to satisfy the requirement with another iSCSI or >maybe Ceph
> iSCSI as master domain?
> It should work as ovirt sees it as a regular domain, cephFS will
> probably work too
>
> >So each node has
>
> >- oVirt Node NG / Centos
> >- Ceph cluster member
> >- iSCSI or Ceph iSCSI master domain
>
> >How practical is such a setup?
> Not sure, it could work, but it hasn't been tested and it's likely you
> are going to be the first to try it
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YNGXYAZ3S3KXPGHEFHDCVXSDL7QA3IAY/


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Shantur Rathore
@Konstantin Shalygin  :
>
> I recommend to look to OpenStack or some OpenNebula/Proxmox if you wan’t
> use Ceph Storage.

I have tested all options but oVirt seems to tick most required boxes.

OpenStack : Too complex for my use case
Proxmox : Love the Ceph support, but very basic clustering support
OpenNebula : Weird VM state machine.

Not sure if you know that rbd-nbd support is going to be implemented in
cinderlib. I can understand why oVirt wants to support cinderlib and
deprecate the old Cinder support.

@Strahil Nikolov 

> Most probably it will be easier if you stick with full-blown distro.

Yesterday, I was able to bring up a single host single disk Ceph cluster on
oVirt Node NG 4.4.4 after enabling some repositories. Having said that, I
didn't try image based upgrades to host.
I read somewhere that rpms are persisted between host upgrades in Node NG
now.

@Benny Zlotnik

> Moreover, it is not possible to use a pure Managed Block Storage setup
> at all, there has to be at least one regular storage domain in a
> datacenter

Thanks for pointing out the requirement for a master domain. In theory, will
I be able to satisfy the requirement with another iSCSI, or maybe Ceph iSCSI,
as the master domain?

So each node has

- oVirt Node NG / Centos
- Ceph cluster member
- iSCSI or Ceph iSCSI master domain

How practical is such a setup?

Thanks,
Shantur

On Tue, Jan 19, 2021 at 9:39 AM Konstantin Shalygin  wrote:

> Yep, BZ is
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1539837
> https://bugzilla.redhat.com/show_bug.cgi?id=1904669
> https://bugzilla.redhat.com/show_bug.cgi?id=1905113
>
> Thanks,
> k
>
> On 19 Jan 2021, at 11:05, Gianluca Cecchi 
> wrote:
>
> perhaps a copy paste error about the bugzilla entries? They are the same
> number...
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JNARS3TLZQH62EISYLYGN4STSKFCBX5F/


[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Shantur Rathore
Thanks for pointing that out to me Konstantin.

I understand that it would use a kernel client instead of the userland rbd
library.
Isn't that better? I have seen kernel clients be up to 20x faster than
userland.

I am probably missing something important here; would you mind elaborating?

Regards,
Shantur


On Mon, Jan 18, 2021 at 3:27 PM Konstantin Shalygin  wrote:

> Beware with Ceph and oVirt Managed Block Storage: the current integration is
> only possible with the kernel client, not with qemu-rbd.
>
>
> k
>
> Sent from my iPhone
>
> On 18 Jan 2021, at 13:00, Shantur Rathore  wrote:
>
> 
> Thanks Strahil for your reply.
>
> Sorry just to confirm,
>
> 1. Are you saying Ceph on oVirt Node NG isn't possible?
> 2. Would you know which devs would be best to ask about the recent Ceph
> changes?
>
> Thanks,
> Shantur
>
> On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users 
> wrote:
>
>> At 15:51 + on 17.01.2021 (Sun), Shantur Rathore wrote:
>>
>> Hi Strahil,
>>
>> Thanks for your reply, I have 16 nodes for now but more on the way.
>>
>> Ceph appeals to me over Gluster for the following reasons.
>>
>> 1. I have more experience with Ceph than Gluster.
>>
>> That is a good reason to pick CEPH.
>>
>> 2. I heard in the Managed Block Storage presentation that it leverages the
>> storage software to offload storage-related tasks.
>> 3. Adding Gluster storage is limited to 3 hosts at a time.
>>
>> Only if you wish the nodes to be both Storage and Compute. Yet, you can
>> add as many as you wish as a compute node (won't be part of Gluster) and
>> later you can add them to the Gluster TSP (this requires 3 nodes at a time).
>>
>> 4. I read that there is a limit of a maximum of 12 hosts in a Gluster
>> setup. No such limitation if I go via Ceph.
>>
>> Actually , it's about Red Hat support for RHHI and not for Gluster +
>> oVirt. As both oVirt and Gluster ,that are used, are upstream projects,
>> support is on best effort from the community.
>>
>> In my initial testing I was able to enable Centos repositories in Node Ng
>> but if I remember correctly, there were some librbd versions present in
>> Node Ng which clashed with the version I was trying to install.
>> Does Ceph hyperconverge still make sense?
>>
>> Yes it is. You got the knowledge to run the CEPH part, yet consider
>> talking with some of the devs on the list - as there were some changes
>> recently in oVirt's support for CEPH.
>>
>> Regards
>> Shantur
>>
>> On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users 
>> wrote:
>>
>> Hi Shantur,
>>
>> the main question is how many nodes you have.
>> Ceph integration is still in development/experimental and it should be
>> wise to consider Gluster also. It has a great integration and it's quite
>> easy to work with).
>>
>>
>> There are users reporting using CEPH with their oVirt , but I can't tell
>> how good it is.
>> I doubt that oVirt nodes come with CEPH components , so you most probably
>> will need to use a full-blown distro. In general, using extra software on
>> oVirt nodes is quite hard .
>>
>> With such setup, you will need much more nodes than a Gluster setup due
>> to CEPH's requirements.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>>
>> On Sunday, 17 January 2021 at 10:37:57 GMT+2, Shantur Rathore <
>> shantur.rath...@gmail.com> wrote:
>>
>>
>>
>>
>>
>> Hi all,
>>
>> I am planning my new oVirt cluster on Apple hosts. These hosts can only
>> have one disk which I plan to partition and use for hyper converged setup.
>> As this is my first oVirt cluster I need help in understanding few bits.
>>
>> 1. Is Hyper converged setup possible with Ceph using cinderlib?
>> 2. Can this hyper converged setup be on oVirt Node Next hosts or only
>> Centos?
>> 3. Can I install cinderlib on oVirt Node Next hosts?
>> 4. Are there any pit falls in such a setup?
>>
>>
>> Thanks for your help
>>
>> Regards,
>> Shantur
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TPQCJSJ3

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Shantur Rathore
Thanks Strahil for your reply.

Sorry just to confirm,

1. Are you saying Ceph on oVirt Node NG isn't possible?
2. Would you know which devs would be best to ask about the recent Ceph
changes?

Thanks,
Shantur

On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users 
wrote:

> At 15:51 + on 17.01.2021 (Sun), Shantur Rathore wrote:
>
> Hi Strahil,
>
> Thanks for your reply, I have 16 nodes for now but more on the way.
>
> Ceph appeals to me over Gluster for the following reasons.
>
> 1. I have more experience with Ceph than Gluster.
>
> That is a good reason to pick CEPH.
>
> 2. I heard in the Managed Block Storage presentation that it leverages the
> storage software to offload storage-related tasks.
> 3. Adding Gluster storage is limited to 3 hosts at a time.
>
> Only if you wish the nodes to be both Storage and Compute. Yet, you can
> add as many as you wish as a compute node (won't be part of Gluster) and
> later you can add them to the Gluster TSP (this requires 3 nodes at a time).
>
> 4. I read that there is a limit of a maximum of 12 hosts in a Gluster
> setup. No such limitation if I go via Ceph.
>
> Actually , it's about Red Hat support for RHHI and not for Gluster +
> oVirt. As both oVirt and Gluster ,that are used, are upstream projects,
> support is on best effort from the community.
>
> In my initial testing I was able to enable Centos repositories in Node Ng
> but if I remember correctly, there were some librbd versions present in
> Node Ng which clashed with the version I was trying to install.
> Does Ceph hyperconverge still make sense?
>
> Yes it is. You got the knowledge to run the CEPH part, yet consider
> talking with some of the devs on the list - as there were some changes
> recently in oVirt's support for CEPH.
>
> Regards
> Shantur
>
> On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users 
> wrote:
>
> Hi Shantur,
>
> the main question is how many nodes you have.
> Ceph integration is still in development/experimental and it should be
> wise to consider Gluster also. It has a great integration and it's quite
> easy to work with).
>
>
> There are users reporting using CEPH with their oVirt , but I can't tell
> how good it is.
> I doubt that oVirt nodes come with CEPH components , so you most probably
> will need to use a full-blown distro. In general, using extra software on
> oVirt nodes is quite hard .
>
> With such setup, you will need much more nodes than a Gluster setup due to
> CEPH's requirements.
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Sunday, 17 January 2021 at 10:37:57 GMT+2, Shantur Rathore <
> shantur.rath...@gmail.com> wrote:
>
>
>
>
>
> Hi all,
>
> I am planning my new oVirt cluster on Apple hosts. These hosts can only
> have one disk which I plan to partition and use for hyper converged setup.
> As this is my first oVirt cluster I need help in understanding few bits.
>
> 1. Is Hyper converged setup possible with Ceph using cinderlib?
> 2. Can this hyper converged setup be on oVirt Node Next hosts or only
> Centos?
> 3. Can I install cinderlib on oVirt Node Next hosts?
> 4. Are there any pit falls in such a setup?
>
>
> Thanks for your help
>
> Regards,
> Shantur
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TPQCJSJ3MQOEKWQBF5LF4B7HCVQXKWLX/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RVKIBASSQW7C66OBZ6OHQALFVRAEPMU7/
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4IBXGXZVXAIUDS2O675QAXZRTSULPD2S/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6WBVRC4GJTAIL3XYPJEEYGOBCCNZY4ZV/


[ovirt-users] Re: Managed Block Storage and more

2021-01-17 Thread Shantur Rathore
Hi Strahil,

Thanks for your reply, I have 16 nodes for now but more on the way.

Ceph appeals to me over Gluster for the following reasons.

1. I have more experience with Ceph than Gluster.
2. I heard in the Managed Block Storage presentation that it leverages the
storage software to offload storage-related tasks.
3. Adding Gluster storage is limited to 3 hosts at a time.
4. I read that there is a limit of a maximum of 12 hosts in a Gluster setup.
No such limitation if I go via Ceph.

In my initial testing I was able to enable CentOS repositories in Node NG,
but if I remember correctly, there were some librbd versions present in
Node NG which clashed with the version I was trying to install.

Does Ceph hyperconverge still make sense?

Regards
Shantur

On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users 
wrote:

> Hi Shantur,
>
> the main question is how many nodes you have.
> Ceph integration is still in development/experimental and it should be
> wise to consider Gluster also. It has a great integration and it's quite
> easy to work with).
>
>
> There are users reporting using CEPH with their oVirt , but I can't tell
> how good it is.
> I doubt that oVirt nodes come with CEPH components , so you most probably
> will need to use a full-blown distro. In general, using extra software on
> oVirt nodes is quite hard .
>
> With such setup, you will need much more nodes than a Gluster setup due to
> CEPH's requirements.
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Sunday, 17 January 2021 at 10:37:57 GMT+2, Shantur Rathore <
> shantur.rath...@gmail.com> wrote:
>
>
>
>
>
> Hi all,
>
> I am planning my new oVirt cluster on Apple hosts. These hosts can only
> have one disk which I plan to partition and use for hyper converged setup.
> As this is my first oVirt cluster I need help in understanding few bits.
>
> 1. Is Hyper converged setup possible with Ceph using cinderlib?
> 2. Can this hyper converged setup be on oVirt Node Next hosts or only
> Centos?
> 3. Can I install cinderlib on oVirt Node Next hosts?
> 4. Are there any pit falls in such a setup?
>
>
> Thanks for your help
>
> Regards,
> Shantur
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TPQCJSJ3MQOEKWQBF5LF4B7HCVQXKWLX/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RVKIBASSQW7C66OBZ6OHQALFVRAEPMU7/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KQQR4PF32ALSD2HFOEW4KCC6HKFKZKLW/


[ovirt-users] Re: Managed Block Storage and more

2021-01-17 Thread Shantur Rathore
> Hi Strahil,
>
> Thanks for your reply, I have 16 nodes for now but more on the way.
>
> Ceph appeals to me over Gluster for the following reasons.
>
> 1. I have more experience with Ceph than Gluster.
> 2. I heard in the Managed Block Storage presentation that it leverages the
> storage software to offload storage-related tasks.
> 3. Adding Gluster storage is limited to 3 hosts at a time.
> 4. I read that there is a limit of a maximum of 12 hosts in a Gluster
> setup. No such limitation if I go via Ceph.
>
> In my initial testing I was able to enable Centos repositories in Node Ng
> but if I remember correctly, there were some librbd versions present in
> Node Ng which clashed with the version I was trying to install.
>
> Does Ceph hyperconverge still make sense?
>
> Regards
> Shantur
>



On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users 
> wrote:
>
>> Hi Shantur,
>>
>> the main question is how many nodes you have.
>> Ceph integration is still in development/experimental and it should be
>> wise to consider Gluster also. It has a great integration and it's quite
>> easy to work with).
>>
>>
>> There are users reporting using CEPH with their oVirt , but I can't tell
>> how good it is.
>> I doubt that oVirt nodes come with CEPH components , so you most probably
>> will need to use a full-blown distro. In general, using extra software on
>> oVirt nodes is quite hard .
>>
>> With such setup, you will need much more nodes than a Gluster setup due
>> to CEPH's requirements.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>>
>> On Sunday, 17 January 2021 at 10:37:57 GMT+2, Shantur Rathore <
>> shantur.rath...@gmail.com> wrote:
>>
>>
>>
>>
>>
>> Hi all,
>>
>> I am planning my new oVirt cluster on Apple hosts. These hosts can only
>> have one disk which I plan to partition and use for hyper converged setup.
>> As this is my first oVirt cluster I need help in understanding few bits.
>>
>> 1. Is Hyper converged setup possible with Ceph using cinderlib?
>> 2. Can this hyper converged setup be on oVirt Node Next hosts or only
>> Centos?
>> 3. Can I install cinderlib on oVirt Node Next hosts?
>> 4. Are there any pit falls in such a setup?
>>
>>
>> Thanks for your help
>>
>> Regards,
>> Shantur
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TPQCJSJ3MQOEKWQBF5LF4B7HCVQXKWLX/
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RVKIBASSQW7C66OBZ6OHQALFVRAEPMU7/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RQIUZPZHFAV3JXJM4DP3OYG6JYEK446Y/


[ovirt-users] Hyperconverged Ceph + Managed Block Storage

2021-01-17 Thread Shantur Rathore
Hi all,

I am planning my new oVirt cluster on Apple hosts. These hosts can only
have one disk, which I plan to partition and use for a hyperconverged setup.
As this is my first oVirt cluster, I need help in understanding a few bits.

1. Is a hyperconverged setup possible with Ceph using cinderlib?
2. Can this hyperconverged setup be on oVirt Node Next hosts, or only
CentOS?
3. Can I install cinderlib on oVirt Node Next hosts?
4. Are there any pitfalls in such a setup?


Thanks for your help

Regards,
Shantur
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HW5ZSEICMOPWOH2ES5WKG7HKNBLUK7JS/


[ovirt-users] New oVirt Cluster and Managed block storage

2021-01-17 Thread Shantur Rathore
Hi all,

I am planning my new oVirt cluster on Apple hosts. These hosts can only
have one disk, which I plan to partition and use for a hyperconverged setup.
As this is my first oVirt cluster, I need help in understanding a few bits.

1. Is a hyperconverged setup possible with Ceph using cinderlib?
2. Can this hyperconverged setup be on oVirt Node Next hosts, or only
CentOS?
3. Can I install cinderlib on oVirt Node Next hosts?
4. Are there any pitfalls in such a setup?


Thanks for your help

Regards,
Shantur
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FCF77FP2QI2KV2LFTCOBRCGE3G26Y2AX/


[ovirt-users] Managed Block Storage and more

2021-01-17 Thread Shantur Rathore
Hi all,

I am planning my new oVirt cluster on Apple hosts. These hosts can only
have one disk, which I plan to partition and use for a hyperconverged setup.
As this is my first oVirt cluster, I need help in understanding a few bits.

1. Is a hyperconverged setup possible with Ceph using cinderlib?
2. Can this hyperconverged setup be on oVirt Node Next hosts, or only
CentOS?
3. Can I install cinderlib on oVirt Node Next hosts?
4. Are there any pitfalls in such a setup?


Thanks for your help

Regards,
Shantur
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TPQCJSJ3MQOEKWQBF5LF4B7HCVQXKWLX/