[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Matthias Leopold



On 22.01.21 at 12:01, Shantur Rathore wrote:

Thanks Matthias,

Ceph iSCSI is indeed supported, but it introduces the overhead of running 
LIO gateways for iSCSI.
CephFS works as a POSIX domain; if we could get a POSIX domain to work 
as a master domain, then we could run a self-hosted engine on it.
Concerning this you should look at 
https://bugzilla.redhat.com/show_bug.cgi?id=1577529.


Ceph RBD (hopefully rbd-nbd in the future) could be used with 
cinderlib, and we would have a self-hosted infrastructure with Ceph.


I am hopeful that when cinderlib integration is mature enough to be out 
of Tech preview, there will be a way to migrate old cinder disks to new 
cinderlib.


PS: About your large deployment, go OpenStack or OpenNebula if you like. 
Proxmox clustering isn't great: it doesn't have a single controller 
and relies on corosync-based clustering.


Cheers,
Shantur

On Fri, Jan 22, 2021 at 10:36 AM Matthias Leopold wrote:


I can confirm that Ceph iSCSI can be used for master domain, we are
using it together with VM disks on Ceph via Cinder ("old style").
Recent
developments concerning Ceph in oVirt are disappointing for me, I think
I will have to look elsewhere (OpenStack, Proxmox) for our rather big
deployment. At least Nir Soffer's explanation for the move to cinderlib
in another thread (dated 20210121) shed some light on the background of
this decision.

Matthias

...


[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Shantur Rathore
Thanks Matthias,

Ceph iSCSI is indeed supported, but it introduces the overhead of running
LIO gateways for iSCSI.
CephFS works as a POSIX domain; if we could get a POSIX domain to work as a
master domain, then we could run a self-hosted engine on it.
Ceph RBD (hopefully rbd-nbd in the future) could be used with cinderlib, and
we would have a self-hosted infrastructure with Ceph.

I am hopeful that when cinderlib integration is mature enough to be out of
Tech preview, there will be a way to migrate old cinder disks to new
cinderlib.

PS: About your large deployment, go OpenStack or OpenNebula if you like.
Proxmox clustering isn't great: it doesn't have a single controller
and relies on corosync-based clustering.

Cheers,
Shantur

On Fri, Jan 22, 2021 at 10:36 AM Matthias Leopold <
matthias.leop...@meduniwien.ac.at> wrote:

> I can confirm that Ceph iSCSI can be used for master domain, we are
> using it together with VM disks on Ceph via Cinder ("old style"). Recent
> developments concerning Ceph in oVirt are disappointing for me, I think
> I will have to look elsewhere (OpenStack, Proxmox) for our rather big
> deployment. At least Nir Soffer's explanation for the move to cinderlib
> in another thread (dated 20210121) shed some light on the background of
> this decision.
>
> Matthias
>
> On 19.01.21 at 12:57, Gianluca Cecchi wrote:
> > On Tue, Jan 19, 2021 at 12:20 PM Benny Zlotnik wrote:
> >
> > >Thanks for pointing out the requirement for Master domain. In
> > >theory, will I be able to satisfy the requirement with another iSCSI
> > >or maybe Ceph iSCSI as master domain?
> > It should work as ovirt sees it as a regular domain, cephFS will
> > probably work too
> >
> >
> > Ceph iSCSI gateway should be supported since 4.1, so I think I can use
> > it for configuring the master domain and still leverage the same
> > overall storage environment provided by Ceph, correct?
> >
> > https://bugzilla.redhat.com/show_bug.cgi?id=1527061
> >
> > Gianluca
> >
>
> --
> Matthias Leopold
> IT Systems & Communications
> Medizinische Universität Wien
> Spitalgasse 23 / BT 88 / Ebene 00
> A-1090 Wien
> Tel: +43 1 40160-21241
> Fax: +43 1 40160-921200
>


[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Matthias Leopold
I can confirm that Ceph iSCSI can be used for master domain, we are 
using it together with VM disks on Ceph via Cinder ("old style"). Recent 
developments concerning Ceph in oVirt are disappointing for me, I think 
I will have to look elsewhere (OpenStack, Proxmox) for our rather big 
deployment. At least Nir Soffer's explanation for the move to cinderlib 
in another thread (dated 20210121) shed some light on the background of 
this decision.


Matthias

On 19.01.21 at 12:57, Gianluca Cecchi wrote:
On Tue, Jan 19, 2021 at 12:20 PM Benny Zlotnik wrote:


>Thanks for pointing out the requirement for Master domain. In
>theory, will I be able to satisfy the requirement with another iSCSI
>or maybe Ceph iSCSI as master domain?
It should work as ovirt sees it as a regular domain, cephFS will
probably work too


Ceph iSCSI gateway should be supported since 4.1, so I think I can use 
it for configuring the master domain and still leverage the same 
overall storage environment provided by Ceph, correct?


https://bugzilla.redhat.com/show_bug.cgi?id=1527061

Gianluca




--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 / Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200


[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Shantur Rathore
Thanks Konstantin.

I do get that oVirt needs a master domain.
I just want to make a POSIX domain the master domain. I can see there is no
option in the UI for that, but I do not understand whether it is
incompatible or simply not implemented.
If it is not implemented, there might be a possibility of creating one
with manual steps.

Thanks

On Fri, Jan 22, 2021 at 10:21 AM Konstantin Shalygin  wrote:

> Shantur, this is oVirt. You always have to have a master domain. Even some
> 1GB NFS export on the manager side is enough.
>
>
> k
>
> On 22 Jan 2021, at 12:02, Shantur Rathore  wrote:
>
> Just a bump. Any ideas anyone?
>
>
>


[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Konstantin Shalygin
Shantur, this is oVirt. You always have to have a master domain. Even some 
1GB NFS export on the manager side is enough.


k

> On 22 Jan 2021, at 12:02, Shantur Rathore  wrote:
> 
> Just a bump. Any ideas anyone?



[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Shantur Rathore
Just a bump. Any ideas anyone?

On Wed, Jan 20, 2021 at 4:13 PM Shantur Rathore  wrote:

> So,
> after a quick dive into the source code, I cannot see any mention of POSIX
> storage in the hosted-engine code.
> I am not sure if there is a manual way of moving the locally created
> hosted-engine VM to POSIX storage and creating a storage domain using the
> API as the installer does for other types of domains while installing the
> self-hosted engine.
>
> Regards,
> Shantur
>


[ovirt-users] Re: Managed Block Storage and more

2021-01-20 Thread Shantur Rathore
So,
after a quick dive into the source code, I cannot see any mention of POSIX
storage in the hosted-engine code.
I am not sure if there is a manual way of moving the locally created
hosted-engine VM to POSIX storage and creating a storage domain using the
API as the installer does for other types of domains while installing the
self-hosted engine.
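
For reference, adding a POSIX storage domain through the REST API is already
exposed in the oVirt Python SDK; below is a minimal sketch, assuming a CephFS
mount. The engine URL, credentials, host name and mount details are
placeholders, and this alone would not make the domain a master domain or
relocate the hosted-engine VM.

# Hypothetical sketch: add a CephFS-backed POSIX storage domain via ovirtsdk4.
# All connection details and names below are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

sds_service = connection.system_service().storage_domains_service()
sd = sds_service.add(
    types.StorageDomain(
        name='cephfs_posix',
        type=types.StorageDomainType.DATA,
        host=types.Host(name='host1'),           # an existing, activated host
        storage=types.HostStorage(
            type=types.StorageType.POSIXFS,
            address='mon1.example.com',          # CephFS monitor address
            path='/',                            # path within CephFS
            vfs_type='ceph',
            mount_options='name=admin,secretfile=/etc/ceph/admin.secret',
        ),
    ),
)
print('Created storage domain:', sd.id)
connection.close()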

Regards,
Shantur


[ovirt-users] Re: Managed Block Storage and more

2021-01-20 Thread Shantur Rathore
>
> It should work as ovirt sees it as a regular domain, cephFS will
> probably work too


Just tried to set up Ceph hyperconverged:

1. Installed oVirt NG 4.4.4 on a machine (partitioned to leave space for
Ceph)
2. Installed cephadm: https://docs.ceph.com/en/latest/cephadm/install/
3. Enabled EPEL and other required repos.
4. Bootstrapped the Ceph cluster
5. Created an LV on the partitioned free space
6. Added an OSD to the Ceph cluster
7. Added CephFS
8. Set min_size and size to 1 for the OSD pools to make it work with 1 OSD
(a sketch of this step follows below).
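
For step 8, a minimal sketch of what that boils down to, wrapped in Python
for repeatability. Pool names are whatever "ceph osd pool ls" reports in your
cluster; size=1 is only safe for a throwaway lab, and newer Ceph releases may
additionally require mon_allow_pool_size_one=true and --yes-i-really-mean-it.

# Single-OSD lab only: size=1 means no replication at all.
import subprocess

def set_pool_replication(pool, size=1, min_size=1):
    # Set replication on one pool via the ceph CLI.
    for key, value in (('size', size), ('min_size', min_size)):
        subprocess.run(
            ['ceph', 'osd', 'pool', 'set', pool, key, str(value)],
            check=True,
        )

# Apply to every pool in the cluster (here: the CephFS data/metadata pools).
pools = subprocess.check_output(['ceph', 'osd', 'pool', 'ls'], text=True).split()
for pool in pools:
    set_pool_replication(pool)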

All ready to deploy Self hosted engine from Cockpit

1. Started Self-Hosted engine deployment (not Hyperconverged)
2. Enter the details to Prepare-VM.
3. Prepare-VM successful.
4. Feeling excited, get the cephfs mount details ready.
5. Storage screen - There is no option to use POSIX storage for
Self-Hosted. Bummer.

Is there any way to work around this?
I am able to add this to another oVirt Engine.

[image: Screenshot 2021-01-20 at 12.19.55.png]

Thanks,
Shantur

On Tue, Jan 19, 2021 at 11:16 AM Benny Zlotnik  wrote:

> >Thanks for pointing out the requirement for Master domain. In theory,
> will I be able to satisfy the requirement with another iSCSI or >maybe Ceph
> iSCSI as master domain?
> It should work as ovirt sees it as a regular domain, cephFS will
> probably work too
>
> >So each node has
>
> >- oVirt Node NG / Centos
> >- Ceph cluster member
> >- iSCSI or Ceph iSCSI master domain
>
> >How practical is such a setup?
> Not sure, it could work, but it hasn't been tested and it's likely you
> are going to be the first to try it
>
>


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Benny Zlotnik
>Ceph iSCSI gateway should be supported since 4.1, so I think I can use it for 
>configuring the master domain and still leverage the same overall storage 
>environment provided by Ceph, correct?

yes, it shouldn't be a problem


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Gianluca Cecchi
On Tue, Jan 19, 2021 at 12:20 PM Benny Zlotnik  wrote:

> >Thanks for pointing out the requirement for Master domain. In theory,
> will I be able to satisfy the requirement with another iSCSI or >maybe Ceph
> iSCSI as master domain?
> It should work as ovirt sees it as a regular domain, cephFS will
> probably work too
>

Ceph iSCSI gateway should be supported since 4.1, so I think I can use it
for configuring the master domain and still leverage the same overall
storage environment provided by Ceph, correct?

https://bugzilla.redhat.com/show_bug.cgi?id=1527061

Gianluca


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Benny Zlotnik
>Thanks for pointing out the requirement for Master domain. In theory, will I 
>be able to satisfy the requirement with another iSCSI or >maybe Ceph iSCSI as 
>master domain?
It should work, as oVirt sees it as a regular domain; CephFS will
probably work too

>So each node has

>- oVirt Node NG / Centos
>- Ceph cluster member
>- iSCSI or Ceph iSCSI master domain

>How practical is such a setup?
Not sure, it could work, but it hasn't been tested and it's likely you
are going to be the first to try it


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Sandro Bonazzola
On Tue, Jan 19, 2021 at 09:07 Gianluca Cecchi <
gianluca.cec...@gmail.com> wrote:

> On Tue, Jan 19, 2021 at 8:43 AM Benny Zlotnik  wrote:
>
>> Ceph support is available via Managed Block Storage (tech preview), it
>> cannot be used instead of gluster for hyperconverged setups.
>>
>>
> Just for clarification: when you say Managed Block Storage you mean
> cinderlib integration, correct?
> Is still this one below the correct reference page for 4.4?
>
> https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
>
> So are the manual steps still needed (and also repo config that seems
> against pike)?
> Or do you have an updated link for configuring cinderlib in 4.4?
>

The above-mentioned page was a feature development page and is not considered
end-user documentation.
Updated documentation is here:
https://ovirt.org/documentation/installing_ovirt_as_a_standalone_manager_with_local_databases/#Set_up_Cinderlib




>
> Moreover, it is not possible to use a pure Managed Block Storage setup
>> at all, there has to be at least one regular storage domain in a
>> datacenter
>>
>>
> Is this true only for Self Hosted Engine Environment or also if I have an
> external engine?
>
> Thanks,
> Gianluca
>
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Konstantin Shalygin


> On 19 Jan 2021, at 13:39, Shantur Rathore  wrote:
> 
> I have tested all options but oVirt seems to tick most required boxes.
> 
> OpenStack : Too complex for use case
> Proxmox : Love Ceph support but very basic clustering support
> OpenNebula : Weird VM state machine.
> 
> Not sure if you know that rbd-nbd support is going to be implemented in 
> cinderlib. I can understand why oVirt wants to support cinderlib and 
> deprecate the Cinder support.

Yes, we loved oVirt for “that should work like this” - before oVirt 4.4...
Now imagine: your current cluster ran with qemu-rbd and Cinder; now you 
upgrade oVirt and can’t do anything - you can’t migrate, your images are in 
another oVirt pool, engine-setup can’t migrate the current images to MBS - 
all of it in “feature preview”, the older integration broken, then abandoned.


Thanks,
k


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Shantur Rathore
@Konstantin Shalygin:
>
> I recommend looking at OpenStack or some OpenNebula/Proxmox if you want
> to use Ceph storage.

I have tested all options but oVirt seems to tick most required boxes.

OpenStack : Too complex for use case
Proxmox : Love Ceph support but very basic clustering support
OpenNebula : Weird VM state machine.

Not sure if you know that rbd-nbd support is going to be implemented in
cinderlib. I can understand why oVirt wants to support cinderlib and
deprecate the Cinder support.

@Strahil Nikolov 

> Most probably it will be easier if you stick with a full-blown distro.

Yesterday, I was able to bring up a single-host, single-disk Ceph cluster on
oVirt Node NG 4.4.4 after enabling some repositories. Having said that, I
didn't try image-based upgrades of the host.
I read somewhere that RPMs are now persisted between host upgrades in Node
NG.

@Benny Zlotnik

> Moreover, it is not possible to use a pure Managed Block Storage setup
> at all, there has to be at least one regular storage domain in a
> datacenter

Thanks for pointing out the requirement for Master domain. In theory, will
I be able to satisfy the requirement with another iSCSI or maybe Ceph iSCSI
as master domain?

So each node has

- oVirt Node NG / Centos
- Ceph cluster member
- iSCSI or Ceph iSCSI master domain

How practical is such a setup?

Thanks,
Shantur

On Tue, Jan 19, 2021 at 9:39 AM Konstantin Shalygin  wrote:

> Yep, BZ is
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1539837
> https://bugzilla.redhat.com/show_bug.cgi?id=1904669
> https://bugzilla.redhat.com/show_bug.cgi?id=1905113
>
> Thanks,
> k
>
> On 19 Jan 2021, at 11:05, Gianluca Cecchi 
> wrote:
>
> perhaps a copy paste error about the bugzilla entries? They are the same
> number...
>
>
>


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Konstantin Shalygin
Yep, BZ is 

https://bugzilla.redhat.com/show_bug.cgi?id=1539837 

https://bugzilla.redhat.com/show_bug.cgi?id=1904669 

https://bugzilla.redhat.com/show_bug.cgi?id=1905113 


Thanks,
k

> On 19 Jan 2021, at 11:05, Gianluca Cecchi  wrote:
> 
> perhaps a copy paste error about the bugzilla entries? They are the same 
> number...



[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Benny Zlotnik
>Just for clarification: when you say Managed Block Storage you mean cinderlib 
>integration, correct?
>Is still this one below the correct reference page for 4.4?
>https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
yes

>So are the manual steps still needed (and also repo config that seems against 
>pike)?
>Or do you have an updated link for configuring cinderlib in 4.4?
It is slightly outdated; I and other users have successfully used
ussuri. I will update the feature page today.

>Is this true only for Self Hosted Engine Environment or also if I have an 
>external engine?
External engine as well. The reason this is required is that only
regular domains can serve as master domains, which is required for a
host to get the SPM role.
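
For anyone wondering what the integration drives underneath: the engine talks
to the backend through the cinderlib API, roughly like the sketch below. This
is only an illustration against a Ceph RBD backend, not the engine's actual
code; the pool, user and conf paths are assumptions, and the engine persists
cinderlib metadata in its own database rather than in memory.

# Hedged sketch of cinderlib driving a Ceph RBD backend; values are assumed.
import cinderlib as cl

# 'memory' persistence is just for this demo; the engine uses its own DB.
cl.setup(persistence_config={'storage': 'memory'})

rbd = cl.Backend(
    volume_backend_name='ceph',
    volume_driver='cinder.volume.drivers.rbd.RBDDriver',
    rbd_pool='ovirt-volumes',
    rbd_user='ovirt',
    rbd_ceph_conf='/etc/ceph/ceph.conf',
)

vol = rbd.create_volume(size=10)   # creates a 10 GB RBD image in the pool
print('created volume', vol.id)
vol.delete()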


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Gianluca Cecchi
On Tue, Jan 19, 2021 at 9:01 AM Konstantin Shalygin  wrote:

> Shantur, I recommend looking at OpenStack or some OpenNebula/Proxmox if
> you want to use Ceph storage.
> The current storage team in oVirt can just break something and then not
> work on it anymore; take a look at what I am talking about in [1], [2], [3]
>
>
> k
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1899453
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1899453
> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1899453
>
>
>
>
perhaps a copy paste error about the bugzilla entries? They are the same
number...


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Gianluca Cecchi
On Tue, Jan 19, 2021 at 8:43 AM Benny Zlotnik  wrote:

> Ceph support is available via Managed Block Storage (tech preview), it
> cannot be used instead of gluster for hyperconverged setups.
>
>
Just for clarification: when you say Managed Block Storage you mean
cinderlib integration, correct?
Is still this one below the correct reference page for 4.4?
https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html

So are the manual steps still needed (and also repo config that seems
against pike)?
Or do you have an updated link for configuring cinderlib in 4.4?

> Moreover, it is not possible to use a pure Managed Block Storage setup
> at all, there has to be at least one regular storage domain in a
> datacenter
>
>
Is this true only for Self Hosted Engine Environment or also if I have an
external engine?

Thanks,
Gianluca


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Konstantin Shalygin
Shantur, I recommend looking at OpenStack or some OpenNebula/Proxmox if you 
want to use Ceph storage.
The current storage team in oVirt can just break something and then not work 
on it anymore; take a look at what I am talking about in [1], [2], [3]


k

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1899453 

[2] https://bugzilla.redhat.com/show_bug.cgi?id=1899453 

[3] https://bugzilla.redhat.com/show_bug.cgi?id=1899453 




> On 19 Jan 2021, at 10:40, Benny Zlotnik  wrote:
> 
> Ceph support is available via Managed Block Storage (tech preview), it
> cannot be used instead of gluster for hyperconverged setups.
> 
> Moreover, it is not possible to use a pure Managed Block Storage setup
> at all, there has to be at least one regular storage domain in a
> datacenter



[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Benny Zlotnik
Ceph support is available via Managed Block Storage (tech preview); it
cannot be used instead of Gluster for hyperconverged setups.

Moreover, it is not possible to use a pure Managed Block Storage setup
at all; there has to be at least one regular storage domain in a
datacenter.
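
A quick way to check which regular domain currently holds the master role is
the Python SDK; a small sketch follows (connection details are placeholders):

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)
for sd in connection.system_service().storage_domains_service().list():
    # 'master' is True for the domain currently holding the master role.
    print(sd.name, 'type:', sd.type, 'master:', sd.master)
connection.close()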

On Mon, Jan 18, 2021 at 11:58 AM Shantur Rathore  wrote:
>
> Thanks Strahil for your reply.
>
> Sorry just to confirm,
>
> 1. Are you saying Ceph on oVirt Node NG isn't possible?
> 2. Would you know which devs would be best to ask about the recent Ceph 
> changes?
>
> Thanks,
> Shantur
>
> On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users  
> wrote:
>>
>> At 15:51 + on 17.01.2021 (Sun), Shantur Rathore wrote:
>>
>> Hi Strahil,
>>
>> Thanks for your reply, I have 16 nodes for now but more on the way.
>>
>> Ceph appeals to me over Gluster for the following reasons.
>>
>> 1. I have more experience with Ceph than Gluster.
>>
>> That is a good reason to pick CEPH.
>>
>> 2. I heard in Managed Block Storage presentation that it leverages storage 
>> software to offload storage related tasks.
>> 3. Adding Gluster storage limits to 3 hosts at a time.
>>
>> Only if you wish the nodes to be both Storage and Compute. Yet, you can add 
>> as many as you wish as a compute node (won't be part of Gluster) and later 
>> you can add them to the Gluster TSP (this requires 3 nodes at a time).
>>
>> 4. I read that there is a limit of maximum 12 hosts in Gluster setup. No 
>> such limitation if I go via Ceph.
>>
>> Actually , it's about Red Hat support for RHHI and not for Gluster + oVirt. 
>> As both oVirt and Gluster ,that are used, are upstream projects, support is 
>> on best effort from the community.
>>
>> In my initial testing I was able to enable Centos repositories in Node Ng 
>> but if I remember correctly, there were some librbd versions present in Node 
>> Ng which clashed with the version I was trying to install.
>> Does Ceph hyperconvergence still make sense?
>>
>> Yes it is. You got the knowledge to run the CEPH part, yet consider talking 
>> with some of the devs on the list - as there were some changes recently in 
>> oVirt's support for CEPH.
>>
>> Regards
>> Shantur
>>
>> On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users  
>> wrote:
>>
>> Hi Shantur,
>>
>> the main question is how many nodes you have.
>> Ceph integration is still in development/experimental and it should be wise 
>> to consider Gluster also. It has a great integration and it's quite easy to 
>> work with).
>>
>>
>> There are users reporting using CEPH with their oVirt , but I can't tell how 
>> good it is.
>> I doubt that oVirt nodes come with CEPH components , so you most probably 
>> will need to use a full-blown distro. In general, using extra software on 
>> oVirt nodes is quite hard .
>>
>> With such setup, you will need much more nodes than a Gluster setup due to 
>> CEPH's requirements.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>>
>> On Sunday, 17 January 2021 at 10:37:57 GMT+2, Shantur Rathore
>> wrote:
>>
>>
>>
>>
>>
>> Hi all,
>>
>> I am planning my new oVirt cluster on Apple hosts. These hosts can only have 
>> one disk which I plan to partition and use for hyper converged setup. As 
>> this is my first oVirt cluster I need help in understanding few bits.
>>
>> 1. Is Hyper converged setup possible with Ceph using cinderlib?
>> 2. Can this hyper converged setup be on oVirt Node Next hosts or only Centos?
>> 3. Can I install cinderlib on oVirt Node Next hosts?
>> 4. Are there any pit falls in such a setup?
>>
>>
>> Thanks for your help
>>
>> Regards,
>> Shantur
>>

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Sandro Bonazzola
On Mon, Jan 18, 2021 at 20:04 Strahil Nikolov <
hunter86...@yahoo.com> wrote:

> Most probably it will be easier if you stick with a full-blown distro.
>
> @Sandro Bonazzola can help with CEPH status.
>

Letting the storage team have a voice here :-)
+Tal Nisan  , +Eyal Shenitzky  , +Nir
Soffer 


>
> Best Regards, Strahil Nikolov
>
>
>
>
>
>
> On Monday, 18 January 2021 at 11:44:32 GMT+2, Shantur Rathore <
> rathor...@gmail.com> wrote:
>
>
>
>
>
> Thanks Strahil for your reply.
>
> Sorry just to confirm,
>
> 1. Are you saying Ceph on oVirt Node NG isn't possible?
> 2. Would you know which devs would be best to ask about the recent Ceph
> changes?
>
> Thanks,
> Shantur
>
> On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users 
> wrote:
> > At 15:51 + on 17.01.2021 (Sun), Shantur Rathore wrote:
> >> Hi Strahil,
> >>
> >> Thanks for your reply, I have 16 nodes for now but more on the way.
> >>
> >> Ceph appeals to me over Gluster for the following reasons.
> >>
> >> 1. I have more experience with Ceph than Gluster.
> > That is a good reason to pick CEPH.
> >> 2. I heard in Managed Block Storage presentation that it leverages
> storage software to offload storage related tasks.
> >> 3. Adding Gluster storage limits to 3 hosts at a time.
> > Only if you wish the nodes to be both Storage and Compute. Yet, you can
> add as many as you wish as a compute node (won't be part of Gluster) and
> later you can add them to the Gluster TSP (this requires 3 nodes at a time).
> >> 4. I read that there is a limit of maximum 12 hosts in Gluster setup.
> No such limitation if I go via Ceph.
> > Actually , it's about Red Hat support for RHHI and not for Gluster +
> oVirt. As both oVirt and Gluster ,that are used, are upstream projects,
> support is on best effort from the community.
> >> In my initial testing I was able to enable Centos repositories in Node
> Ng but if I remember correctly, there were some librbd versions present in
> Node Ng which clashed with the version I was trying to install.
> >> Does Ceph hyperconvergence still make sense?
> > Yes it is. You got the knowledge to run the CEPH part, yet consider
> talking with some of the devs on the list - as there were some changes
> recently in oVirt's support for CEPH.
> >
> >> Regards
> >> Shantur
> >>
> >> On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users <
> users@ovirt.org> wrote:
> >>> Hi Shantur,
> >>>
> >>> the main question is how many nodes you have.
> >>> Ceph integration is still in development/experimental and it should be
> wise to consider Gluster also. It has a great integration and it's quite
> easy to work with).
> >>>
> >>>
> >>> There are users reporting using CEPH with their oVirt , but I can't
> tell how good it is.
> >>> I doubt that oVirt nodes come with CEPH components , so you most
> probably will need to use a full-blown distro. In general, using extra
> software on oVirt nodes is quite hard .
> >>>
> >>> With such setup, you will need much more nodes than a Gluster setup
> due to CEPH's requirements.
> >>>
> >>> Best Regards,
> >>> Strahil Nikolov
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> On Sunday, 17 January 2021 at 10:37:57 GMT+2, Shantur Rathore <
> shantur.rath...@gmail.com> wrote:
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> Hi all,
> >>>
> >>> I am planning my new oVirt cluster on Apple hosts. These hosts can
> only have one disk which I plan to partition and use for hyper converged
> setup. As this is my first oVirt cluster I need help in understanding few
> bits.
> >>>
> >>> 1. Is Hyper converged setup possible with Ceph using cinderlib?
> >>> 2. Can this hyper converged setup be on oVirt Node Next hosts or only
> Centos?
> >>> 3. Can I install cinderlib on oVirt Node Next hosts?
> >>> 4. Are there any pit falls in such a setup?
> >>>
> >>>
> >>> Thanks for your help
> >>>
> >>> Regards,
> >>> Shantur
> >>>

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Konstantin Shalygin
Faster than fuse-rbd, not qemu.
The main issues are the kernel pagecache and client upgrades: for example, 
on a cluster with 700 OSDs and 1000 clients we need to update the client 
version for new features. With the current oVirt implementation we need to 
update the kernel and then reboot the host. With librbd 
we just need to update the package and reactivate the host.
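
To illustrate the librbd side: access is pure userland, so a client upgrade
is only a package update, with no kernel change or reboot involved. A minimal
sketch with the python-rbd bindings (pool and image names are assumptions):

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('ovirt-volumes')      # pool name (assumed)
    with rbd.Image(ioctx, 'vm-disk-1') as image:     # image name (assumed)
        print('image size:', image.size())
        data = image.read(0, 4096)                   # I/O without any kernel mapping
    ioctx.close()
finally:
    cluster.shutdown()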


k

Sent from my iPhone

> On 18 Jan 2021, at 19:13, Shantur Rathore  wrote:
> 
> Thanks for pointing that out to me Konstantin.
> 
> I understand that it would use a kernel client instead of the userland rbd lib.
> Isn't that better, as I have seen kernel clients 20x faster than userland?
> 
> I am probably missing something important here; would you mind detailing that?


[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Strahil Nikolov via Users
Most probably it will be easier if you stick with a full-blown distro.

@Sandro Bonazzola can help with CEPH status.

Best Regards, Strahil Nikolov






On Monday, 18 January 2021 at 11:44:32 GMT+2, Shantur Rathore
wrote:





Thanks Strahil for your reply.

Sorry just to confirm,

1. Are you saying Ceph on oVirt Node NG isn't possible?
2. Would you know which devs would be best to ask about the recent Ceph changes?

Thanks,
Shantur

On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users  
wrote:
> At 15:51 + on 17.01.2021 (Sun), Shantur Rathore wrote:
>> Hi Strahil,
>> 
>> Thanks for your reply, I have 16 nodes for now but more on the way.
>> 
>> Ceph appeals to me over Gluster for the following reasons.
>> 
>> 1. I have more experience with Ceph than Gluster.
> That is a good reason to pick CEPH.
>> 2. I heard in Managed Block Storage presentation that it leverages storage 
>> software to offload storage related tasks. 
>> 3. Adding Gluster storage limits to 3 hosts at a time.
> Only if you wish the nodes to be both Storage and Compute. Yet, you can add 
> as many as you wish as a compute node (won't be part of Gluster) and later 
> you can add them to the Gluster TSP (this requires 3 nodes at a time).
>> 4. I read that there is a limit of maximum 12 hosts in Gluster setup. No 
>> such limitation if I go via Ceph.
> Actually , it's about Red Hat support for RHHI and not for Gluster + oVirt. 
> As both oVirt and Gluster ,that are used, are upstream projects, support is 
> on best effort from the community.
>> In my initial testing I was able to enable Centos repositories in Node Ng 
>> but if I remember correctly, there were some librbd versions present in Node 
>> Ng which clashed with the version I was trying to install.
>> Does Ceph hyperconvergence still make sense?
> Yes it is. You got the knowledge to run the CEPH part, yet consider talking 
> with some of the devs on the list - as there were some changes recently in 
> oVirt's support for CEPH.
> 
>> Regards
>> Shantur
>> 
>> On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users  
>> wrote:
>>> Hi Shantur,
>>> 
>>> the main question is how many nodes you have.
>>> Ceph integration is still in development/experimental and it should be wise 
>>> to consider Gluster also. It has a great integration and it's quite easy to 
>>> work with).
>>> 
>>> 
>>> There are users reporting using CEPH with their oVirt , but I can't tell 
>>> how good it is.
>>> I doubt that oVirt nodes come with CEPH components , so you most probably 
>>> will need to use a full-blown distro. In general, using extra software on 
>>> oVirt nodes is quite hard .
>>> 
>>> With such setup, you will need much more nodes than a Gluster setup due to 
>>> CEPH's requirements.
>>> 
>>> Best Regards,
>>> Strahil Nikolov
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On Sunday, 17 January 2021 at 10:37:57 GMT+2, Shantur Rathore
>>> wrote:
>>> 
>>> 
>>> 
>>> 
>>> 
>>> Hi all,
>>> 
>>> I am planning my new oVirt cluster on Apple hosts. These hosts can only 
>>> have one disk which I plan to partition and use for hyper converged setup. 
>>> As this is my first oVirt cluster I need help in understanding few bits.
>>> 
>>> 1. Is Hyper converged setup possible with Ceph using cinderlib?
>>> 2. Can this hyper converged setup be on oVirt Node Next hosts or only 
>>> Centos?
>>> 3. Can I install cinderlib on oVirt Node Next hosts?
>>> 4. Are there any pit falls in such a setup?
>>> 
>>> 
>>> Thanks for your help
>>> 
>>> Regards,
>>> Shantur
>>> 

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Shantur Rathore
Thanks for pointing that out to me, Konstantin.

I understand that it would use a kernel client instead of the userland rbd lib.
Isn't that better, as I have seen kernel clients 20x faster than userland?

I am probably missing something important here; would you mind detailing
that?

Regards,
Shantur


On Mon, Jan 18, 2021 at 3:27 PM Konstantin Shalygin  wrote:

> Beware about Ceph and oVirt Managed Block Storage: the current integration is
> only possible with the kernel client, not with qemu-rbd.
>
>
> k
>
> Sent from my iPhone
>
> On 18 Jan 2021, at 13:00, Shantur Rathore  wrote:
>
> 
> Thanks Strahil for your reply.
>
> Sorry just to confirm,
>
> 1. Are you saying Ceph on oVirt Node NG isn't possible?
> 2. Would you know which devs would be best to ask about the recent Ceph
> changes?
>
> Thanks,
> Shantur
>
> On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users 
> wrote:
>
>> At 15:51 + on 17.01.2021 (Sun), Shantur Rathore wrote:
>>
>> Hi Strahil,
>>
>> Thanks for your reply, I have 16 nodes for now but more on the way.
>>
>> Ceph appeals to me over Gluster for the following reasons.
>>
>> 1. I have more experience with Ceph than Gluster.
>>
>> That is a good reason to pick CEPH.
>>
>> 2. I heard in Managed Block Storage presentation that it leverages
>> storage software to offload storage related tasks.
>> 3. Adding Gluster storage limits to 3 hosts at a time.
>>
>> Only if you wish the nodes to be both Storage and Compute. Yet, you can
>> add as many as you wish as a compute node (won't be part of Gluster) and
>> later you can add them to the Gluster TSP (this requires 3 nodes at a time).
>>
>> 4. I read that there is a limit of maximum 12 hosts in Gluster setup. No
>> such limitation if I go via Ceph.
>>
>> Actually , it's about Red Hat support for RHHI and not for Gluster +
>> oVirt. As both oVirt and Gluster ,that are used, are upstream projects,
>> support is on best effort from the community.
>>
>> In my initial testing I was able to enable Centos repositories in Node Ng
>> but if I remember correctly, there were some librbd versions present in
>> Node Ng which clashed with the version I was trying to install.
>> Does Ceph hyperconvergence still make sense?
>>
>> Yes it is. You got the knowledge to run the CEPH part, yet consider
>> talking with some of the devs on the list - as there were some changes
>> recently in oVirt's support for CEPH.
>>
>> Regards
>> Shantur
>>
>> On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users 
>> wrote:
>>
>> Hi Shantur,
>>
>> the main question is how many nodes you have.
>> Ceph integration is still in development/experimental and it should be
>> wise to consider Gluster also. It has a great integration and it's quite
>> easy to work with).
>>
>>
>> There are users reporting using CEPH with their oVirt , but I can't tell
>> how good it is.
>> I doubt that oVirt nodes come with CEPH components , so you most probably
>> will need to use a full-blown distro. In general, using extra software on
>> oVirt nodes is quite hard .
>>
>> With such setup, you will need much more nodes than a Gluster setup due
>> to CEPH's requirements.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>>
>> On Sunday, 17 January 2021 at 10:37:57 GMT+2, Shantur Rathore <
>> shantur.rath...@gmail.com> wrote:
>>
>>
>>
>>
>>
>> Hi all,
>>
>> I am planning my new oVirt cluster on Apple hosts. These hosts can only
>> have one disk which I plan to partition and use for hyper converged setup.
>> As this is my first oVirt cluster I need help in understanding few bits.
>>
>> 1. Is Hyper converged setup possible with Ceph using cinderlib?
>> 2. Can this hyper converged setup be on oVirt Node Next hosts or only
>> Centos?
>> 3. Can I install cinderlib on oVirt Node Next hosts?
>> 4. Are there any pit falls in such a setup?
>>
>>
>> Thanks for your help
>>
>> Regards,
>> Shantur
>>

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Konstantin Shalygin
Beware about Ceph and oVirt Managed Block Storage: the current integration is 
only possible with the kernel client, not with qemu-rbd.
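
For contrast, the kernel (krbd) path that the current integration relies on
boils down to mapping the image to a /dev/rbdN block device, roughly like
this sketch; pool, image and user names are assumptions:

import subprocess

# Map an RBD image through the kernel module; prints e.g. '/dev/rbd0'.
device = subprocess.check_output(
    ['rbd', 'device', 'map', 'ovirt-volumes/vm-disk-1', '--id', 'ovirt'],
    text=True,
).strip()
print('mapped at', device)

# Unmap once the VM is done with the disk.
subprocess.run(['rbd', 'device', 'unmap', device], check=True)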


k

Sent from my iPhone

> On 18 Jan 2021, at 13:00, Shantur Rathore  wrote:
> 
> 
> Thanks Strahil for your reply.
> 
> Sorry just to confirm,
> 
> 1. Are you saying Ceph on oVirt Node NG isn't possible?
> 2. Would you know which devs would be best to ask about the recent Ceph 
> changes?
> 
> Thanks,
> Shantur
> 
>> On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users  
>> wrote:
>> At 15:51 + on 17.01.2021 (Sun), Shantur Rathore wrote:
>>> Hi Strahil,
>>> 
>>> Thanks for your reply, I have 16 nodes for now but more on the way.
>>> 
>>> Ceph appeals to me over Gluster for the following reasons.
>>> 
>>> 1. I have more experience with Ceph than Gluster.
>> That is a good reason to pick CEPH.
>>> 2. I heard in Managed Block Storage presentation that it leverages storage 
>>> software to offload storage related tasks. 
>>> 3. Adding Gluster storage limits to 3 hosts at a time.
>> Only if you wish the nodes to be both Storage and Compute. Yet, you can add 
>> as many as you wish as a compute node (won't be part of Gluster) and later 
>> you can add them to the Gluster TSP (this requires 3 nodes at a time).
>>> 4. I read that there is a limit of maximum 12 hosts in Gluster setup. No 
>>> such limitation if I go via Ceph.
>> Actually , it's about Red Hat support for RHHI and not for Gluster + oVirt. 
>> As both oVirt and Gluster ,that are used, are upstream projects, support is 
>> on best effort from the community.
>>> In my initial testing I was able to enable Centos repositories in Node Ng 
>>> but if I remember correctly, there were some librbd versions present in 
>>> Node Ng which clashed with the version I was trying to install.
>>> Does Ceph hyperconvergence still make sense?
>> Yes it is. You got the knowledge to run the CEPH part, yet consider talking 
>> with some of the devs on the list - as there were some changes recently in 
>> oVirt's support for CEPH.
>> 
>>> Regards
>>> Shantur
>>> 
 On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users  
 wrote:
 Hi Shantur,
 
 the main question is how many nodes you have.
 Ceph integration is still in development/experimental and it should be 
 wise to consider Gluster also. It has a great integration and it's quite 
 easy to work with).
 
 
 There are users reporting using CEPH with their oVirt , but I can't tell 
 how good it is.
 I doubt that oVirt nodes come with CEPH components , so you most probably 
 will need to use a full-blown distro. In general, using extra software on 
 oVirt nodes is quite hard .
 
 With such setup, you will need much more nodes than a Gluster setup due to 
 CEPH's requirements.
 
 Best Regards,
 Strahil Nikolov
 
 
 
 
 
 
 On Sunday, 17 January 2021 at 10:37:57 GMT+2, Shantur Rathore
 wrote:
 
 
 
 
 
 Hi all,
 
 I am planning my new oVirt cluster on Apple hosts. These hosts can only 
 have one disk which I plan to partition and use for hyper converged setup. 
 As this is my first oVirt cluster I need help in understanding few bits.
 
 1. Is Hyper converged setup possible with Ceph using cinderlib?
 2. Can this hyper converged setup be on oVirt Node Next hosts or only 
 Centos?
 3. Can I install cinderlib on oVirt Node Next hosts?
 4. Are there any pit falls in such a setup?
 
 
 Thanks for your help
 
 Regards,
 Shantur
 

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Shantur Rathore
Thanks Strahil for your reply.

Sorry just to confirm,

1. Are you saying Ceph on oVirt Node NG isn't possible?
2. Would you know which devs would be best to ask about the recent Ceph
changes?

Thanks,
Shantur

On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users 
wrote:

> At 15:51 + on 17.01.2021 (Sun), Shantur Rathore wrote:
>
> Hi Strahil,
>
> Thanks for your reply, I have 16 nodes for now but more on the way.
>
> Ceph appeals to me over Gluster for the following reasons.
>
> 1. I have more experience with Ceph than Gluster.
>
> That is a good reason to pick CEPH.
>
> 2. I heard in Managed Block Storage presentation that it leverages storage
> software to offload storage related tasks.
> 3. Adding Gluster storage limits to 3 hosts at a time.
>
> Only if you wish the nodes to be both Storage and Compute. Yet, you can
> add as many as you wish as a compute node (won't be part of Gluster) and
> later you can add them to the Gluster TSP (this requires 3 nodes at a time).
>
> 4. I read that there is a limit of maximum 12 hosts in Gluster setup. No
> such limitation if I go via Ceph.
>
> Actually , it's about Red Hat support for RHHI and not for Gluster +
> oVirt. As both oVirt and Gluster ,that are used, are upstream projects,
> support is on best effort from the community.
>
> In my initial testing I was able to enable Centos repositories in Node Ng
> but if I remember correctly, there were some librbd versions present in
> Node Ng which clashed with the version I was trying to install.
> Does Ceph hyperconvergence still make sense?
>
> Yes it is. You got the knowledge to run the CEPH part, yet consider
> talking with some of the devs on the list - as there were some changes
> recently in oVirt's support for CEPH.
>
> Regards
> Shantur
>
> On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users 
> wrote:
>
> Hi Shantur,
>
> the main question is how many nodes you have.
> Ceph integration is still in development/experimental and it should be
> wise to consider Gluster also. It has a great integration and it's quite
> easy to work with).
>
>
> There are users reporting using CEPH with their oVirt , but I can't tell
> how good it is.
> I doubt that oVirt nodes come with CEPH components , so you most probably
> will need to use a full-blown distro. In general, using extra software on
> oVirt nodes is quite hard .
>
> With such setup, you will need much more nodes than a Gluster setup due to
> CEPH's requirements.
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Sunday, 17 January 2021 at 10:37:57 GMT+2, Shantur Rathore <
> shantur.rath...@gmail.com> wrote:
>
>
>
>
>
> Hi all,
>
> I am planning my new oVirt cluster on Apple hosts. These hosts can only
> have one disk which I plan to partition and use for hyper converged setup.
> As this is my first oVirt cluster I need help in understanding few bits.
>
> 1. Is Hyper converged setup possible with Ceph using cinderlib?
> 2. Can this hyper converged setup be on oVirt Node Next hosts or only
> Centos?
> 3. Can I install cinderlib on oVirt Node Next hosts?
> 4. Are there any pit falls in such a setup?
>
>
> Thanks for your help
>
> Regards,
> Shantur
>


[ovirt-users] Re: Managed Block Storage and more

2021-01-17 Thread Shantur Rathore
Hi Strahil,

Thanks for your reply, I have 16 nodes for now but more on the way.

Ceph appeals to me over Gluster for the following reasons.

1. I have more experience with Ceph than Gluster.
2. I heard in the Managed Block Storage presentation that it leverages the
storage software to offload storage-related tasks (see the sketch after this
list).
3. Expanding Gluster storage is limited to adding 3 hosts at a time.
4. I read that there is a limit of a maximum of 12 hosts in a Gluster setup.
No such limitation if I go with Ceph.
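
For illustration, a minimal cinderlib sketch of the offloading in point 2,
assuming a Ceph pool named "volumes" and a readable /etc/ceph/ceph.conf (the
pool, user and sizes are my own placeholders, not anything from the
presentation):

import cinderlib

# RBD driver: create/snapshot/clone work is done by Ceph itself;
# the host only maps the resulting image when attaching.
ceph = cinderlib.Backend(
    volume_driver='cinder.volume.drivers.rbd.RBDDriver',
    volume_backend_name='ceph',
    rbd_pool='volumes',                   # placeholder pool name
    rbd_user='cinder',                    # placeholder cephx user
    rbd_ceph_conf='/etc/ceph/ceph.conf',
)

vol = ceph.create_volume(size=10)         # 10 GiB RBD image, created by Ceph
snap = vol.create_snapshot()              # snapshot taken on the Ceph side
att = vol.attach()                        # maps the image locally via os-brick
print(att.path)                           # e.g. /dev/rbd0
vol.detach()
snap.delete()
vol.delete()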

In my initial testing I was able to enable CentOS repositories in Node NG,
but if I remember correctly, some librbd versions already present in Node NG
clashed with the version I was trying to install.
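
Roughly the check I used to spot the clash (reconstructed from memory;
package names may differ between Node NG releases):

import subprocess

# Check which Ceph-related packages/versions already ship on the node.
for pkg in ('librbd1', 'python3-rbd', 'ceph-common'):
    result = subprocess.run(['rpm', '-q', pkg],
                            capture_output=True, text=True)
    print(result.stdout.strip() or f'{pkg} is not installed')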

Does a Ceph hyperconverged setup still make sense?

Regards
Shantur

On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users 
wrote:

> ...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KQQR4PF32ALSD2HFOEW4KCC6HKFKZKLW/


[ovirt-users] Re: Managed Block Storage and more

2021-01-17 Thread Strahil Nikolov via Users
At 15:51 + on 17.01.2021 (Sun), Shantur Rathore wrote:
> Hi Strahil,
> Thanks for your reply. I have 16 nodes for now, with more on the way.
> 
> Ceph appeals to me over Gluster for the following reasons.
> 
> 1. I have more experience with Ceph than Gluster.
That is a good reason to pick CEPH.
> 2. I heard in the Managed Block Storage presentation that it leverages
> the storage software to offload storage-related tasks. 
> 3. Expanding Gluster storage is limited to adding 3 hosts at a time.
Only if you wish the nodes to be both storage and compute. You can add as
many as you wish as compute-only nodes (they won't be part of Gluster) and
later add them to the Gluster TSP (this requires 3 nodes at a time); a rough
SDK sketch follows at the end of this message.
> 4. I read that there is a limit of a maximum of 12 hosts in a Gluster
> setup. No such limitation if I go with Ceph.
Actually, that limit is about Red Hat support for RHHI, not about Gluster +
oVirt. As both oVirt and Gluster are upstream projects, support is on a
best-effort basis from the community.
> In my initial testing I was able to enable CentOS repositories in
> Node NG, but if I remember correctly, some librbd versions already
> present in Node NG clashed with the version I was trying to install.
> Does a Ceph hyperconverged setup still make sense?
Yes, it does. You have the knowledge to run the CEPH part, but consider
talking with some of the devs on the list, as there were some changes
recently in oVirt's support for CEPH.
> Regards
> Shantur
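
P.S. A rough sketch of the compute-only approach via the oVirt Python SDK
(ovirtsdk4); the engine URL, credentials and host names are placeholders,
and this is untested against your setup:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connect to the engine API (placeholder URL and credentials).
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

hosts_service = connection.system_service().hosts_service()
# Register extra machines as plain compute nodes; they are not part
# of the Gluster TSP until you later probe them in, 3 at a time.
for name in ('node04', 'node05'):
    hosts_service.add(types.Host(
        name=name,
        address=f'{name}.example.com',
        root_password='host-root-password',
        cluster=types.Cluster(name='Default'),
    ))

connection.close()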
> 
> On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users <
> users@ovirt.org> wrote:
> > ...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4IBXGXZVXAIUDS2O675QAXZRTSULPD2S/


[ovirt-users] Re: Managed Block Storage and more

2021-01-17 Thread Strahil Nikolov via Users
Hi Shantur,

the main question is how many nodes you have.
Ceph integration is still in development/experimental, so it would be wise to 
consider Gluster also. It has great integration and is quite easy to work 
with.


There are users reporting using CEPH with their oVirt, but I can't tell how 
good it is.
I doubt that oVirt nodes come with CEPH components, so you most probably will 
need to use a full-blown distro. In general, using extra software on oVirt 
nodes is quite hard.

With such a setup, you will need many more nodes than a Gluster setup due to 
CEPH's requirements.

Best Regards,
Strahil Nikolov






On Sunday, 17 January 2021, 10:37:57 GMT+2, Shantur Rathore 
wrote: 





Hi all,

I am planning my new oVirt cluster on Apple hosts. These hosts can only have 
one disk, which I plan to partition and use for a hyperconverged setup (a 
rough partitioning sketch follows the questions below). As this is my first 
oVirt cluster, I need help in understanding a few bits.

1. Is a hyperconverged setup possible with Ceph using cinderlib?
2. Can this hyperconverged setup be on oVirt Node Next hosts, or only CentOS?
3. Can I install cinderlib on oVirt Node Next hosts?
4. Are there any pitfalls in such a setup?
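
For context, the rough single-disk split I have in mind (the device name, 
sizes and GPT type codes below are made-up placeholders, not a tested 
layout):

import subprocess

DISK = '/dev/sda'  # placeholder: the single disk in each host

# (partition number, size, GPT type code, label)
layout = [
    ('1', '+1G',   'ef00', 'efi'),         # EFI system partition
    ('2', '+100G', '8e00', 'ovirt-node'),  # LVM for the oVirt Node OS
    ('3', '0',     '8300', 'storage'),     # remainder for Ceph/Gluster data
]

for num, size, typecode, label in layout:
    subprocess.run(['sgdisk',
                    f'-n{num}:0:{size}',    # create the partition
                    f'-t{num}:{typecode}',  # set its type
                    f'-c{num}:{label}',     # set its label
                    DISK],
                   check=True)  # raise if sgdisk fails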


Thanks for your help

Regards,
Shantur

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RVKIBASSQW7C66OBZ6OHQALFVRAEPMU7/