FYI: I recently found that Ceph performs quite well in VMware when using the 
VM as a "direct" Ceph client.
(Our tests involved directly mounting an RBD device from the VM)

So as long as you can treat the OS disk as throwaway, and use Ceph-to-the-VM 
for your actually important data, this might also be a good approach in 
oVirt.
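
In case it's useful, here is a minimal sketch of what our "direct client" 
test looked like inside the VM. It assumes ceph-common is installed and 
/etc/ceph/ceph.conf plus a keyring are already in place; the pool, image and 
mount point names are just examples:

    # create a 100 GiB image in the 'rbd' pool (name/size are examples)
    rbd create rbd/vmdata --size 100G
    # map it via the kernel RBD client; prints the device name, e.g. /dev/rbd0
    rbd map rbd/vmdata
    # put a filesystem on it and mount it for the important data
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /mnt/vmdata

The OS disk stays a plain VMware disk; only the data the VM actually cares 
about lives on the RBD image.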


----- Original Message -----
From: "Eyal Shenitzky" <eshen...@redhat.com>
To: "Sandro Bonazzola" <sbona...@redhat.com>, "Benny Zlotnik" 
<bzlot...@redhat.com>, matt...@peregrineit.net
Cc: "users" <users@ovirt.org>
Sent: Wednesday, April 21, 2021 12:30:39 AM
Subject: [ovirt-users] Re: n00b Requesting Some Advice

Hi Matthew, 

Currently, in order to use Ceph in oVirt you have two options: 

1. Ceph using the iSCSI gateway - a regular iSCSI storage domain with Ceph as 
the storage backend; it supports all the regular operations [1]. A quick 
connectivity sketch follows below. 
2. Using the new Managed Block Storage (Cinderlib integration) technical 
preview - create a Managed Block Storage domain, which doesn't yet support 
all the operations we have for the "regular" storage domain. You can find 
more info in [2] and [3]; an example of the RBD driver options follows the 
links below. 

Each option has its benefits. 
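
As a rough illustration of option 1: once the gateway from [1] is set up, an 
oVirt host should be able to discover its targets with the standard iSCSI 
tools before you add the storage domain in the Administration Portal (the 
gateway address below is a placeholder):

    # connectivity check against the Ceph iSCSI gateway (IP is a placeholder)
    iscsiadm -m discovery -t sendtargets -p 192.0.2.10
    # oVirt performs the actual login itself when you create the iSCSI
    # storage domain, so no 'iscsiadm -m node --login' is needed here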

[1] https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/block_device_guide/using_an_iscsi_gateway
[2] https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/set_up_cinderlib
[3] https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
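
For option 2, the Managed Block Storage domain is created with a set of 
Cinder driver options; for an RBD backend they would look roughly like the 
following (the pool, user and keyring path are assumptions on my side - see 
[2] and [3] for the authoritative list):

    volume_driver=cinder.volume.drivers.rbd.RBDDriver
    rbd_ceph_conf=/etc/ceph/ceph.conf      # cluster config on the hosts
    rbd_pool=volumes                       # example pool name
    rbd_user=ovirt                         # example cephx user
    rbd_keyring_conf=/etc/ceph/ceph.client.ovirt.keyring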

On Wed, 21 Apr 2021 at 10:12, Sandro Bonazzola <sbona...@redhat.com> wrote: 

+Eyal Shenitzky any suggestion? 

On Mon, 12 Apr 2021 at 11:15, <matt...@peregrineit.net> wrote: 


Hi All, 

I need some "best practice" advice. We have a Ceph Storage Cluster (Octopus, 
moving to Pacific) which we'd like to use with our new oVirt Cluster (all on 
CentOS 8 boxes). What I'd like to know is the "best" (ie recommended / 
best-practice) way of doing this - via iSCSI, CephFS, 'raw' RBD blocks, some 
other way I haven't read about yet, etc.? 

I realise 'best' is a subjective term, but what I tend to do is 'manual' 
installs, so that I both actually understand what is happening (ie how things 
fit together - I pull apart and rebuild mechanical clocks and watches for the 
same reason) and also so I can "Puppet-ise" the results for future use. This 
means that I am *not* necessarily looking for "quick and dirty" or "quick and 
easy" (ie, I have no trouble using the CLI and 'vim-ing' conf files as 
required), but I do want a solid, best-practice system when I'm done. 

So, can someone please help? And also, would you mind pointing me towards the 
relevant documentation for the answer(s) supplied (yes, I *always* RTFM :-) ). 

Thanks in advance 

Dulux-Oz 