I am not so sure that Ceph can or should be suggested as the first 
choice in general. NFS has its own problems, but it also has advantages: 
it is easy and reliable, and most of the time its performance is more 
than sufficient. To get things started, NFS is not a bad choice.
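
For what it's worth, registering an NFS export as primary storage is a
single API call. Below is a minimal sketch using the third-party "cs"
Python client; the endpoint, API keys, UUIDs and export path are all
placeholders you would replace with your own values:

    # pip install cs  (a thin Python client for the CloudStack API)
    from cs import CloudStack

    api = CloudStack(
        endpoint="https://cloudstack.example.com/client/api",  # placeholder
        key="YOUR_API_KEY",                                    # placeholder
        secret="YOUR_SECRET_KEY",                              # placeholder
    )

    # createStoragePool registers a primary storage pool;
    # for NFS the URL format is nfs://<server>/<export-path>
    pool = api.createStoragePool(
        zoneid="ZONE_UUID",        # placeholder
        podid="POD_UUID",          # placeholder
        clusterid="CLUSTER_UUID",  # placeholder
        name="primary-nfs",
        url="nfs://nfs.example.com/export/primary",  # placeholder export
        scope="cluster",
    )
    print(pool)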

On 17.11.25 at 15:25, "Alex Mattioli" <[email protected]> wrote:


+1 on that. If I were to build a cloud from the ground up, I'd 100% use Ceph.


-----Original Message-----
From: Wido den Hollander <[email protected]> 
Sent: 17 November 2025 14:45
To: [email protected] <mailto:[email protected]>; João 
Jandre Paraquetti <[email protected] <mailto:[email protected]>>
Subject: Re: Which technology to choose for shared primary storage

On 14-11-2025 at 19:40, João Jandre Paraquetti wrote:
> Hello, Paketix
> 
> I would say that NFS is the easiest protocol to deal with and the most 
> compatible one.
> 
> The next paragraphs assume we are talking about using these storage 
> types with KVM as the hypervisor.
> 
> The "problem" with FC and iSCSI, when using KVM, is that you'll need 
> an extra layer on top of them (a clustered FS, such as OCFS2) in order 
> to add them as shared mountpoint to ACS. This adds a bit of complexity 
> and I personally am not very fond of the current open source clustered 
> FS options available. Regardless, most (if not all) features that are 
> added to NFS are also added to shared mountpoint primary storages as 
> they are basically the same from ACS's point of view.
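> 
> To make that concrete: once the clustered FS is mounted at the same 
> path on every host in the cluster, registering it looks roughly like 
> the sketch below, using the third-party "cs" Python client. The UUIDs 
> and the mount path are placeholders, and the SharedMountPoint URL 
> scheme is what I believe the API expects, so double-check it against 
> the docs for your version:
> 
>     # assumes /mnt/primary is the same OCFS2 (or similar) mount on all hosts
>     from cs import CloudStack
> 
>     api = CloudStack(
>         endpoint="https://cloudstack.example.com/client/api",  # placeholder
>         key="YOUR_API_KEY",                                    # placeholder
>         secret="YOUR_SECRET_KEY",                              # placeholder
>     )
> 
>     pool = api.createStoragePool(
>         zoneid="ZONE_UUID",        # placeholder
>         podid="POD_UUID",          # placeholder
>         clusterid="CLUSTER_UUID",  # placeholder
>         name="primary-sharedmp",
>         # URL scheme for shared mountpoints, to the best of my knowledge:
>         url="SharedMountPoint://localhost/mnt/primary",
>         scope="cluster",
>     )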
> 
> There is an option for adding Fibre Channel primary storage directly; 
> however, as far as I know, it only works for a specific storage 
> vendor, and I doubt it has as many features as the other common 
> options.
> 
> I know there is also a CLVM option when adding storage, but I have 
> never tested it and have never seen it discussed in the community.
> 
> I haven't seen many new features targeting RBD (Ceph) released for a 
> while, but you can create a CephFS with Ceph, which could also be 
> added to your environment as a shared mountpoint.
> 


RBD/Ceph is very stable inside CloudStack. Not much work is needed there as the 
current code has been working for a long time.
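
To illustrate, attaching an existing RBD pool also comes down to one
createStoragePool call. A minimal sketch with the third-party "cs"
Python client follows; the monitor address, pool name, cephx user and
secret are placeholders, and you should verify the rbd:// URL format
against the docs for your version:

    from cs import CloudStack
    from urllib.parse import quote

    api = CloudStack(
        endpoint="https://cloudstack.example.com/client/api",  # placeholder
        key="YOUR_API_KEY",                                    # placeholder
        secret="YOUR_SECRET_KEY",                              # placeholder
    )

    # cephx keys can contain '+' and '/', so URL-encode the secret
    secret = quote("YOUR_CEPHX_KEY==", safe="")  # placeholder key

    pool = api.createStoragePool(
        zoneid="ZONE_UUID",  # placeholder
        name="primary-rbd",
        # expected format: rbd://<user>:<url-encoded-secret>@<monitor>/<pool>
        url=f"rbd://cloudstack:{secret}@mon1.example.com/cloudstack-pool",
        scope="zone",  # RBD pools are often zone-wide; adjust as needed
    )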


Comparing NFS to Ceph is apples vs. oranges. NFS is just a protocol, whereas 
Ceph is a complete distributed storage environment with scalability and 
failover built into the whole system.


Ceph is more than capable of being used as primary storage underneath 
CloudStack, but it's simply not NFS, just like NFS is not Ceph.


Wido


> Best regards,
> 
> João
> 
> On 11/14/25 15:04, Paketix wrote:
>> I am new to CloudStack and could use some help/advice regarding 
>> which technology to use to implement my shared storage (primary storage).
>> Having some Fibre Channel stuff in the lab, this could work well.
>> ... but I am concerned that this is not the direction CloudStack is 
>> developing in.
>> So:
>> - NFS
>> - iSCSI
>> - Ceph
>> ... would be the choices according to the docs.
>> Not sure if iSCSI would fit for shared storage, as I do not see it in 
>> the list of protocols supported for primary storage in the GUI.
>> What is the most future-proof solution to choose for primary storage?
>> I want to stay on the main path CloudStack is taking, so I can use 
>> new features coming out in the next few months and not be blocked by 
>> 'sorry, not supported for your protocol'.
>>