[ovirt-users] Re: Managed Block Storage and Templates

2021-09-28 Thread Shantur Rathore
Possibly due to https://bugzilla.redhat.com/show_bug.cgi?id=2008533 On Fri, Sep 24, 2021 at 10:35 AM Shantur Rathore wrote: > > I tried with external Ceph with cinderlib and Synology iSCSI with cinderlib > both as Managed block storage > > On Fri, 24 Sep 2021, 09:51 Gianluca Cecchi, wrote: >> >

[ovirt-users] Re: Managed Block Storage issues

2021-09-28 Thread Shantur Rathore
For the 2nd issue I created https://bugzilla.redhat.com/show_bug.cgi?id=2008533. I still need to test the rule On Wed, Sep 22, 2021 at 11:59 AM Benny Zlotnik wrote: > > I see the rule is created in the logs: > > MainProcess|jsonrpc/5::DEBUG::2021-09-22 > 10:39:37,504::supervdsm_server::95::SuperVdsm.Server
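
A rough way to verify on the host that the managed udev rule mentioned in that log was actually written and applied (the rule file location and the vdsm:qemu ownership are assumptions, not confirmed by the log above):

  # list vdsm-managed udev rules (exact filename pattern is an assumption)
  $ ls /etc/udev/rules.d/ | grep -i vdsm
  # the multipath device from the supervdsm call should be owned by vdsm:qemu if the rule fired
  $ ls -l /dev/mapper/360014054b727813d1bc4d4cefdade7db
  # replay the rules for that device and see which ones match
  $ udevadm test $(udevadm info -q path -n /dev/mapper/360014054b727813d1bc4d4cefdade7db) 2>&1 | grep -i vdsm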

[ovirt-users] Re: Managed Block Storage and Templates

2021-09-24 Thread Shantur Rathore
I tried with external Ceph with cinderlib and Synology iSCSI with cinderlib both as Managed block storage On Fri, 24 Sep 2021, 09:51 Gianluca Cecchi, wrote: > On Wed, Sep 22, 2021 at 2:30 PM Shantur Rathore > wrote: > >> Hi all, >> >> Anyone tried using Templates with Managed Block Storage? >>

[ovirt-users] Re: Managed Block Storage and Templates

2021-09-24 Thread Gianluca Cecchi
On Wed, Sep 22, 2021 at 2:30 PM Shantur Rathore wrote: > Hi all, > > Anyone tried using Templates with Managed Block Storage? > I created a VM on MBS and then took a snapshot. > This worked but as soon as I created a Template from snapshot, the > template got created but there is no disk attached

[ovirt-users] Re: Managed Block Storage and Templates

2021-09-24 Thread Benny Zlotnik
Can you submit a bug for this? On Wed, Sep 22, 2021 at 3:31 PM Shantur Rathore wrote: > > Hi all, > > Anyone tried using Templates with Managed Block Storage? > I created a VM on MBS and then took a snapshot. > This worked but as soon as I created a Template from snapshot, the > template got crea

[ovirt-users] Re: Managed Block Storage issues

2021-09-22 Thread Benny Zlotnik
I see the rule is created in the logs: MainProcess|jsonrpc/5::DEBUG::2021-09-22 10:39:37,504::supervdsm_server::95::SuperVdsm.ServerCallback::(wrapper) call add_managed_udev_rule with ('ed1a0e9f-4d30-4896-b965-534861cc0c02', '/dev/mapper/360014054b727813d1bc4d4cefdade7db') {} MainProcess|jsonrpc/5

[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Matthias Leopold
On 22.01.21 at 12:01, Shantur Rathore wrote: Thanks Matthias, Ceph iSCSI is indeed supported but it introduces an overhead for running LIO gateways for iSCSI. CephFS works as a posix domain, if we could get a posix domain to work as a master domain then we could run a self-hosted engine on

[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Shantur Rathore
Thanks Matthias, Ceph iSCSI is indeed supported but it introduces the overhead of running LIO gateways for iSCSI. CephFS works as a POSIX domain; if we could get a POSIX domain to work as a master domain then we could run a self-hosted engine on it. Ceph RBD (rbd-nbd hopefully in future) could b
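
As a minimal sketch of what "CephFS as a POSIX domain" looks like in practice (the monitor address and secret file path are examples, not values from this thread), the new-domain dialog for a POSIX compliant FS domain takes roughly

  Path:          mon1.example.com:6789:/
  VFS Type:      ceph
  Mount Options: name=admin,secretfile=/etc/ceph/admin.secret

which corresponds to a kernel CephFS mount such as

  $ mount -t ceph mon1.example.com:6789:/ /mnt/test -o name=admin,secretfile=/etc/ceph/admin.secret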

[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Matthias Leopold
I can confirm that Ceph iSCSI can be used for master domain, we are using it together with VM disks on Ceph via Cinder ("old style"). Recent developments concerning Ceph in oVirt are disappointing for me, I think I will have to look elsewhere (OpenStack, Proxmox) for our rather big deployment.

[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Shantur Rathore
Thanks Konstantin. I do get that oVirt needs a master domain. Just want to make a POSIX domain as a master domain. I can see there is no option in UI for that but do not understand if it is incompatible or not implemented. If it is not implemented then there might be a possibility of creating one

[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Konstantin Shalygin
Shantur, this is oVirt. You should always create a master domain; even a ~1GB NFS export on the manager side is enough. k > On 22 Jan 2021, at 12:02, Shantur Rathore wrote: > > Just a bump. Any ideas anyone?
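
A minimal sketch of such a small NFS export on the manager side, assuming an EL host and the usual oVirt NFS recommendations (the path and subnet are examples):

  $ mkdir -p /exports/ovirt-master
  $ chown 36:36 /exports/ovirt-master    # UID/GID 36 = vdsm:kvm, expected by oVirt
  $ echo '/exports/ovirt-master 192.168.1.0/24(rw,all_squash,anonuid=36,anongid=36)' >> /etc/exports
  $ exportfs -ra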

[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Shantur Rathore
Just a bump. Any ideas anyone? On Wed, Jan 20, 2021 at 4:13 PM Shantur Rathore wrote: > So, > after a quick dive into source code, I cannot see any mention of posix > storage in hosted-engine code. > I am not sure if there is a manual way of moving the locally created > hosted-engine vm to POSIX

[ovirt-users] Re: Managed Block Storage and more

2021-01-20 Thread Shantur Rathore
So, after a quick dive into source code, I cannot see any mention of posix storage in hosted-engine code. I am not sure if there is a manual way of moving the locally created hosted-engine vm to POSIX storage and create a storage domain using API as it does for other types of domains while installi

[ovirt-users] Re: Managed Block Storage and more

2021-01-20 Thread Shantur Rathore
> > It should work as ovirt sees it as a regular domain, cephFS will > probably work too Just tried to setup Ceph hyperconverged 1. Installed oVirt NG 4.4.4 on a machine ( partitioned to leave space for Ceph ) 2. Installed CephAdm : https://docs.ceph.com/en/latest/cephadm/install/ 3. Enabled EPE
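 
For reference, the cephadm part of the steps above boils down to something like the following (the monitor IP and host name are placeholders; whether these packages install cleanly on oVirt Node NG's layered image is exactly the open question in this thread):

  $ dnf install -y cephadm
  $ cephadm bootstrap --mon-ip 192.168.1.10
  $ ceph orch host add node2
  $ ceph orch apply osd --all-available-devices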

[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Benny Zlotnik
>Ceph iSCSI gateway should be supported since 4.1, so I think I can use it for >configuring the master domain and still leveraging the same overall storage >environment provided by Ceph, correct? yes, it shouldn't be a problem

[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Gianluca Cecchi
On Tue, Jan 19, 2021 at 12:20 PM Benny Zlotnik wrote: > >Thanks for pointing out the requirement for Master domain. In theory, > will I be able to satisfy the requirement with another iSCSI or >maybe Ceph > iSCSI as master domain? > It should work as ovirt sees it as a regular domain, cephFS will

[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Benny Zlotnik
>Thanks for pointing out the requirement for Master domain. In theory, will I >be able to satisfy the requirement with another iSCSI or >maybe Ceph iSCSI as >master domain? It should work as ovirt sees it as a regular domain, cephFS will probably work too >So each node has >- oVirt Node NG / Ce

[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Sandro Bonazzola
On Tue, Jan 19, 2021 at 09:07 Gianluca Cecchi <gianluca.cec...@gmail.com> wrote: > On Tue, Jan 19, 2021 at 8:43 AM Benny Zlotnik wrote: > >> Ceph support is available via Managed Block Storage (tech preview), it >> cannot be used instead of gluster for hyperconverged setups. >> >

[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Konstantin Shalygin
> On 19 Jan 2021, at 13:39, Shantur Rathore wrote: > > I have tested all options but oVirt seems to tick most required boxes. > > OpenStack : Too complex for use case > Proxmox : Love Ceph support but very basic clustering support > OpenNebula : Weird VM state machine. > > Not sure if you kno

[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Shantur Rathore
@Konstantin Shalygin : > > I recommend to look to OpenStack or some OpenNebula/Proxmox if you wan’t > use Ceph Storage. I have tested all options but oVirt seems to tick most required boxes. OpenStack : Too complex for use case Proxmox : Love Ceph support but very basic clustering support OpenNe

[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Konstantin Shalygin
Yep, BZ is https://bugzilla.redhat.com/show_bug.cgi?id=1539837 https://bugzilla.redhat.com/show_bug.cgi?id=1904669 https://bugzilla.redhat.com/show_bug.cgi?id=1905113

[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Benny Zlotnik
>Just for clarification: when you say Managed Block Storage you mean cinderlib >integration, >correct? >Is still this one below the correct reference page for 4.4? >https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html yes >So are the manual steps still need

[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Gianluca Cecchi
On Tue, Jan 19, 2021 at 9:01 AM Konstantin Shalygin wrote: > Shantur, I recommend to look to OpenStack or some OpenNebula/Proxmox if > you wan’t use Ceph Storage. > Current storage team support in oVirt just can break something and do not > work with this anymore, take a look what I talking about

[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Gianluca Cecchi
On Tue, Jan 19, 2021 at 8:43 AM Benny Zlotnik wrote: > Ceph support is available via Managed Block Storage (tech preview), it > cannot be used instead of gluster for hyperconverged setups. > > Just for clarification: when you say Managed Block Storage you mean cinderlib integration, correct? Is s

[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Konstantin Shalygin
Shantur, I recommend looking at OpenStack or OpenNebula/Proxmox if you want to use Ceph storage. The current storage team support in oVirt can just break something and stop working with it; take a look at what I am talking about in [1], [2], [3] k [1] https://bugzilla.redhat.com/show_bug.cg

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Benny Zlotnik
Ceph support is available via Managed Block Storage (tech preview), it cannot be used instead of gluster for hyperconverged setups. Moreover, it is not possible to use a pure Managed Block Storage setup at all, there has to be at least one regular storage domain in a datacenter On Mon, Jan 18, 20

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Sandro Bonazzola
On Mon, Jan 18, 2021 at 20:04 Strahil Nikolov <hunter86...@yahoo.com> wrote: > Most probably it will be easier if you stick with full-blown distro. > > @Sandro Bonazzola can help with CEPH status. > Letting the storage team have a voice here :-) +Tal Nisan , +Eyal Shenitzky , +

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Konstantin Shalygin
Faster than fuse-rbd, not qemu. The main issue is kernel pagecache and client upgrades; for example, on a cluster with 700 OSDs and 1000 clients we need to update the client version for new features. With the current oVirt implementation we need to update the kernel and then reboot the host. With librbd we just need to update the package and
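
To illustrate the upgrade difference being described (package names assume an EL host):

  # userspace client: update librbd in place, running VMs pick it up on their next (re)start
  $ rpm -q librbd1
  $ dnf update -y librbd1
  # kernel client: the rbd module is tied to the running kernel, so a client update means a new kernel and a host reboot
  $ uname -r
  $ modinfo rbd | head -n 3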

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Strahil Nikolov via Users
Most probably it will be easier if you stick with a full-blown distro. @Sandro Bonazzola can help with CEPH status. Best Regards, Strahil Nikolov On Monday, 18 January 2021, 11:44:32 GMT+2, Shantur Rathore wrote: Thanks Strahil for your reply. Sorry just to confirm, 1. Are

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Shantur Rathore
Thanks for pointing that out to me, Konstantin. I understand that it would use a kernel client instead of the userland rbd lib. Isn't that better, given that I have seen kernel clients 20x faster than userland? I am probably missing something important here; would you mind elaborating? Regards, Shantur On

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Konstantin Shalygin
Beware with Ceph and oVirt Managed Block Storage: the current integration is only possible with the kernel client, not with qemu-rbd. k Sent from my iPhone > On 18 Jan 2021, at 13:00, Shantur Rathore wrote: > > > Thanks Strahil for your reply. > > Sorry just to confirm, > > 1. Are you saying Ceph on o

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Shantur Rathore
Thanks Strahil for your reply. Sorry just to confirm, 1. Are you saying Ceph on oVirt Node NG isn't possible? 2. Would you know which devs would be best to ask about the recent Ceph changes? Thanks, Shantur On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users wrote: > At 15:51 + on 17

[ovirt-users] Re: Managed Block Storage and more

2021-01-17 Thread Shantur Rathore
Hi Strahil, Thanks for your reply. I have 16 nodes for now but more on the way. Ceph appeals to me over Gluster for the following reasons. 1. I have more experience with Ceph than Gluster. 2. I heard in the Managed Block Storage presentation that it leverages storage software to o

[ovirt-users] Re: Managed Block Storage and more

2021-01-17 Thread Strahil Nikolov via Users
At 15:51 + on 17.01.2021 (Sun), Shantur Rathore wrote: > Hi Strahil, > Thanks for your reply, I have 16 nodes for now but more on the way. > > The reason why Ceph appeals me over Gluster because of the following > reasons. > > 1. I have more experience with Ceph than Gluster. That is a good re

[ovirt-users] Re: Managed Block Storage and more

2021-01-17 Thread Shantur Rathore
> Hi Strahil, > > Thanks for your reply, I have 16 nodes for now but more on the way. > > The reason why Ceph appeals me over Gluster because of the following > reasons. > > 1. I have more experience with Ceph than Gluster. > 2. I heard in Managed Block Storage presentation that it leverages storag

[ovirt-users] Re: Managed Block Storage and more

2021-01-17 Thread Strahil Nikolov via Users
Hi Shantur, the main question is how many nodes you have. Ceph integration is still in development/experimental and it would be wise to consider Gluster also. It has great integration and it's quite easy to work with. There are users reporting using CEPH with their oVirt, but I can't tell

[ovirt-users] Re: Managed Block Storage/Ceph: Experiences from Catastrophic Hardware failure

2019-10-07 Thread Benny Zlotnik
We support it as part of the cinderlib integration (Managed Block Storage); each rbd device is represented as a single oVirt disk when used. The integration is still in tech preview and still has a long way to go, but any early feedback is highly appreciated. On Mon, Oct 7, 2019 at 2:20 PM Strahil

[ovirt-users] Re: Managed Block Storage/Ceph: Experiences from Catastrophic Hardware failure

2019-10-07 Thread Strahil
Hi Dan, As CEPH support is quite new, we need DEV clarification. Hi Sandro, Who can help to clarify if Ovirt supports direct RBD LUNs presented on the VMs? Are there any limitations in the current solution? Best Regards, Strahil Nikolov On Oct 7, 2019 13:54, Dan Poltawski wrote: > > On Mon,

[ovirt-users] Re: Managed Block Storage/Ceph: Experiences from Catastrophic Hardware failure

2019-10-06 Thread Strahil Nikolov
On September 12, 2019 5:55:47 PM GMT+03:00, Dan Poltawski wrote: >Yesterday we had a catastrophic hardware failure with one of our nodes >using ceph and the experimental cinderlib integration. > >Unfortunately the ovirt cluster didn't recover the situation well and took >some manual intervention to reso

[ovirt-users] Re: Managed Block Storage: ceph detach_volume failing after migration

2019-09-25 Thread Nir Soffer
On Wed, Sep 25, 2019 at 8:02 PM Dan Poltawski wrote: > Hi, > > On Wed, 2019-09-25 at 15:42 +0300, Amit Bawer wrote: > > According to resolution of [1] it's a multipathd/udev configuration > > issue. Could be worth to track this issue. > > > > [1] https://tracker.ceph.com/issues/12763 > > Thanks,

[ovirt-users] Re: Managed Block Storage: ceph detach_volume failing after migration

2019-09-25 Thread Dan Poltawski
Hi, On Wed, 2019-09-25 at 15:42 +0300, Amit Bawer wrote: > According to resolution of [1] it's a multipathd/udev configuration > issue. Could be worth to track this issue. > > [1] https://tracker.ceph.com/issues/12763 Thanks, that certainly looks like a smoking gun to me, in the logs: Sep 25 12:

[ovirt-users] Re: Managed Block Storage: ceph detach_volume failing after migration

2019-09-25 Thread Amit Bawer
According to the resolution of [1] it's a multipathd/udev configuration issue. It could be worth tracking this issue. [1] https://tracker.ceph.com/issues/12763 On Wed, Sep 25, 2019 at 3:18 PM Dan Poltawski wrote: > On ovirt 4.3.5 we are seeing various problems related to the rbd device > staying mappe
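
The usual workaround for that tracker issue is to stop multipathd from claiming /dev/rbd* devices. A minimal sketch (on oVirt hosts vdsm owns /etc/multipath.conf, so this likely belongs in a drop-in under /etc/multipath/conf.d/ or in a file marked private; check vdsm's multipath handling for your version):

  # e.g. /etc/multipath/conf.d/rbd-blacklist.conf
  blacklist {
      devnode "^rbd[0-9]*"
  }

  $ multipathd reconfigure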

[ovirt-users] Re: Managed Block Storage: ceph detach_volume failing after migration

2019-09-25 Thread Benny Zlotnik
This might be a bug, can you share the full vdsm and engine logs? On Wed, Sep 25, 2019 at 3:18 PM Dan Poltawski wrote: > > On ovirt 4.3.5 we are seeing various problems related to the rbd device > staying mapped after a guest has been live migrated. This causes problems > migrating the guest b

[ovirt-users] Re: Managed Block Storage/Ceph: Experiences from Catastrophic Hardware failure

2019-09-15 Thread Benny Zlotnik
>* Would ovirt have been able to deal with clearing the rbd locks, or did I miss a trick somewhere to resolve this situation with manually going through each device and clering the lock? Unfortunately there is no trick on ovirt's side >* Might it be possible for ovirt to detect when the rbd image

[ovirt-users] Re: Managed Block Storage

2019-07-09 Thread Dan Poltawski
On Tue, 2019-07-09 at 11:12 +0300, Benny Zlotnik wrote: > VM live migration is supported and should work > Can you add engine and cinderlib logs? Sorry - looks like once again this was a misconfig by me on the ceph side.. Is it possible to migrate existing vms to managed block storage? Also is i

[ovirt-users] Re: Managed Block Storage

2019-07-09 Thread Benny Zlotnik
No problem :) >Is it possible to migrate existing vms to managed block storage? We do not have OVF support or anything like that for MBS domains, but you can attach MBS disks to existing VMs. Or do you mean moving/copying existing disks to an MBS domain? In this case the answer is unfortunately no >Also i

[ovirt-users] Re: Managed Block Storage

2019-07-09 Thread Dan Poltawski
On Mon, 2019-07-08 at 18:53 +0300, Benny Zlotnik wrote: > Can you try to create mutliple ceph volumes manually via rbd from the > engine machine, so we can simulate what cinderlib does without using > it, this can be done > $ rbd -c ceph.conf create /vol1 --size 100M > $ rbd -c ceph.conf create /vo

[ovirt-users] Re: Managed Block Storage

2019-07-09 Thread Benny Zlotnik
VM live migration is supported and should work Can you add engine and cinderlib logs? On Tue, Jul 9, 2019 at 11:01 AM Dan Poltawski wrote: > On Tue, 2019-07-09 at 08:00 +0100, Dan Poltawski wrote: > > I've now managed to succesfully create/mount/delete volumes! > > However, I'm seeing live migra

[ovirt-users] Re: Managed Block Storage

2019-07-09 Thread Dan Poltawski
On Tue, 2019-07-09 at 08:00 +0100, Dan Poltawski wrote: > I've now managed to succesfully create/mount/delete volumes! However, I'm seeing live migrations stay stuck. Is this supported? (gdb) py-list 345  client.conf_set('rados_osd_op_timeout', timeout) 346

[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Dan Poltawski
Hi, On Sun, 2019-07-07 at 09:31 +0300, Benny Zlotnik wrote: > Any chance you can setup gdb[1] so we can find out where it's stuck > exactly? Yes, absolutely - but I will need some assistance in getting GDB configured in the engine as I am not very familiar with it - or how to enable the correct r

[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Dan Poltawski
On Mon, 2019-07-08 at 16:49 +0300, Benny Zlotnik wrote: > Not too useful unfortunately :\ > Can you try py-list instead of py-bt? Perhaps it will provide better > results (gdb) py-list 57  if get_errno(ex) != errno.EEXIST: 58  raise 59  return listen

[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Dan Poltawski
On Mon, 2019-07-08 at 16:25 +0300, Benny Zlotnik wrote: > Hi, > > You have a typo, it's py-bt and I just tried it myself, I only had to > install: > $ yum install -y python-devel > (in addition to the packages specified in the link) Thanks - this is what I get: #3 Frame 0x7f2046b59ad0, for file /

[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Benny Zlotnik
Can you try to create multiple ceph volumes manually via rbd from the engine machine, so we can simulate what cinderlib does without using it? This can be done with $ rbd -c ceph.conf create /vol1 --size 100M $ rbd -c ceph.conf create /vol2 --size 100M On Mon, Jul 8, 2019 at 4:58 PM Dan Poltawski wrot

[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Benny Zlotnik
Not too useful unfortunately :\ Can you try py-list instead of py-bt? Perhaps it will provide better results On Mon, Jul 8, 2019 at 4:41 PM Dan Poltawski wrote: > On Mon, 2019-07-08 at 16:25 +0300, Benny Zlotnik wrote: > > Hi, > > > > You have a typo, it's py-bt and I just tried it myself, I onl

[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Benny Zlotnik
Hi, You have a typo, it's py-bt and I just tried it myself, I only had to install: $ yum install -y python-devel (in addition to the packages specified in the link) On Mon, Jul 8, 2019 at 2:40 PM Dan Poltawski wrote: > Hi, > > On Sun, 2019-07-07 at 09:31 +0300, Benny Zlotnik wrote: > > > Any ch

[ovirt-users] Re: Managed Block Storage

2019-07-06 Thread Benny Zlotnik
Hi, Any chance you can setup gdb[1] so we can find out where it's stuck exactly? Also, which version of ovirt are you using? Can you also check the ceph logs for anything suspicious? [1] - https://wiki.python.org/moin/DebuggingWithGdb $ gdb python then `py-bt` On Thu, Jul 4, 2019 at 7:00 PM w
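
For completeness, the rough sequence for attaching to the hung process (PID discovery and the debuginfo package name are assumptions and depend on the distro):

  $ ps aux | grep -i cinderlib          # find the stuck python process spawned by the engine
  $ debuginfo-install -y python         # needed so the py-* gdb commands can resolve frames
  $ gdb python -p <pid>
  (gdb) py-bt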

[ovirt-users] Re: Managed Block Storage

2019-07-04 Thread dan . poltawski
> Can you provide logs? mainly engine.log and cinderlib.log > (/var/log/ovirt-engine/cinderlib/cinderlib.log If I create two volumes, the first one succeeds and the second one hangs. If I look in the process list after creating the second volume, which doesn't succeed, I see the python

[ovirt-users] Re: Managed Block Storage

2019-07-04 Thread Benny Zlotnik
On Thu, Jul 4, 2019 at 1:03 PM wrote: > I'm testing out the managed storage to connect to ceph and I have a few > questions: * Would I be correct in assuming that the hosted engine VM needs > connectivity to the storage and not just the underlying hosts themselves? > It seems like the cinderlib
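
For anyone following along, the driver options for a Ceph-backed Managed Block Storage domain are essentially cinder RBD driver options entered as key/value pairs in the Admin Portal; a minimal sketch (option names follow the cinderlib integration feature page linked earlier in this thread, the values are examples):

  volume_driver    = cinder.volume.drivers.rbd.RBDDriver
  rbd_ceph_conf    = /etc/ceph/ceph.conf
  rbd_pool         = ovirt-volumes                          # example pool name
  rbd_user         = ovirt                                  # example cephx user
  rbd_keyring_conf = /etc/ceph/ceph.client.ovirt.keyring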