Hi.

When you create a compute offering you can specify storage tags to tell ACS
which primary storage it is allowed to use when allocating VMs with this
offering. If no storage tag is specified, ACS will choose the next available
primary storage during VM deployment, based on the deployment planner you
select when creating the compute offering.
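
To make that concrete, here is a minimal sketch of setting the storage tag on
a compute offering through the API. It assumes the third-party 'cs' Python
client; the endpoint, keys, sizing values and the 'fast-nfs' tag are all
placeholders, not values from this thread:

    # Minimal sketch, assuming the third-party 'cs' CloudStack client
    # (pip install cs). Credentials and the tag are placeholders.
    from cs import CloudStack

    acs = CloudStack(endpoint='https://acs.example.com/client/api',
                     key='API_KEY', secret='SECRET_KEY')

    # 'tags' on a service offering acts as the storage tag: VMs deployed
    # with this offering may only land on primary storage pools that carry
    # the same tag.
    acs.createServiceOffering(
        name='2vcpu-4gb-fast-nfs',
        displaytext='2 vCPU, 4 GB RAM, pinned to fast-nfs pools',
        cpunumber=2, cpuspeed=2000, memory=4096,
        tags='fast-nfs')

Without the tag, the allocator considers every pool in scope, which is the
"next available primary storage" behaviour described above. The matching tag
has to be set on the primary storage pool itself when it is added (or later
via updateStoragePool).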

Someone who knows better, let me know if I'm wrong.

Regards.

On Wed, Nov 19, 2025 at 12:24 PM Paketix <[email protected]> wrote:

> Sure - I am totally aware of this fact.
> Maybe I could have been more precise ...
> My question was not targeting HA features of NFS.
> I am just curious about how ACS handles an additional (second)
> primary/secondary NFS storage server.
> Will ACS automatically 'spill over' and use the second NFS server as
> soon as the first one is full or do I have to configure the storage to
> be used in the 'compute offering' settings?
>
> On 19.11.25 11:53, Wido den Hollander wrote:
> >
> >
> > Op 17-11-2025 om 19:07 schreef Paketix:
> >> Thanks for all your feedbacks!
> >> I will go for NFS in a first version to keep complexity low.
> >> Maybe I will have multiple NFS servers after a while for both primary
> >> and secondary storage ...
> >
> > Do understand that NFS itself says nothing about redundancy. If you
> > just use a single server which provides an NFS export, it "works", but
> > there is no redundancy.
> >
> > Either you need to use a storage appliance like TrueNAS Enterprise,
> > NetApp, etc. to provide HA NFS, or you need to re-think your storage
> > strategy.
> >
> > Just keep in mind that NFS is just a protocol and nothing else.
> >
> > Other storage solutions like Ceph and for example StorPool are
> > something completely different.
> >
> > Wido
> >
> >> Do I have to specify the storage to use in each compute offering or
> >> will ACS store stuff to additional primary/secondary storage
> >> 'automagically'?
> >>
> >> On 17.11.25 15:57, Alex Mattioli wrote:
> >>> Echoing what Wido said, CEPH is a platform while NFS is just a
> >>> protocol, one which CEPH can serve as well.
> >>> But I do agree, to start a simple cloud NFS is the easiest way to
> >>> go, by far.
> >>>
> >>> -----Original Message-----
> >>> From: Jürgen Gotteswinter <[email protected]>
> >>> Sent: 17 November 2025 15:36
> >>> To: [email protected]
> >>> Subject: Re: Which technology to choose for shared primary storage
> >>>
> >>> I am not so sure if Ceph is a solution which can/should be suggested
> >>> as first choice in general. NFS has its own problems, but also has
> >>> advantages. One is that it's easy and reliable, and most of the time
> >>> performance is more than sufficient. To get things started, NFS is
> >>> not a bad choice.
> >>>
> >>> Am 17.11.25, 15:25 schrieb "Alex Mattioli"
> >>> <[email protected] <mailto:[email protected]>>:
> >>>
> >>>
> >>> +1 on that, if I were to build a cloud from the ground up I'd 100%
> >>> use CEPH
> >>>
> >>>
> >>> -----Original Message-----
> >>> From: Wido den Hollander <[email protected]
> >>> <mailto:[email protected]>>
> >>> Sent: 17 November 2025 14:45
> >>> To: [email protected]
> >>> <mailto:[email protected]>; João Jandre Paraquetti
> >>> <[email protected] <mailto:[email protected]>>
> >>> Subject: Re: Which technology to choose for shared primary storage
> >>>
> >>> Op 14-11-2025 om 19:40 schreef João Jandre Paraquetti:
> >>>> Hello, Paketix
> >>>>
> >>>> I would say that the easiest to deal with and most compatible protocol
> >>>> is NFS.
> >>>>
> >>>> The next paragraphs assume we are talking about using these storage
> >>>> types with KVM as the hypervisor.
> >>>>
> >>>> The "problem" with FC and iSCSI, when using KVM, is that you'll need
> >>>> an extra layer on top of them (a clustered FS, such as OCFS2) in order
> >>>> to add them as a shared mountpoint to ACS. This adds a bit of complexity
> >>>> and I personally am not very fond of the current open source clustered
> >>>> FS options available. Regardless, most (if not all) features that are
> >>>> added to NFS are also added to shared mountpoint primary storages as
> >>>> they are basically the same from ACS's point of view.
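> >>>>
> >>>> (For illustration of that "basically the same" point, a rough sketch
> >>>> follows; it assumes the third-party 'cs' Python client, and the IDs,
> >>>> hostnames and paths are placeholders. An NFS pool and a clustered-FS
> >>>> shared mountpoint are both registered with the same createStoragePool
> >>>> call, only the URL differs.)
> >>>>
> >>>>     # Sketch only: zone/pod/cluster IDs, hostnames and paths are
> >>>>     # placeholders, not values from this thread.
> >>>>     from cs import CloudStack
> >>>>
> >>>>     acs = CloudStack(endpoint='https://acs.example.com/client/api',
> >>>>                      key='API_KEY', secret='SECRET_KEY')
> >>>>
> >>>>     # Plain NFS primary storage at cluster scope:
> >>>>     acs.createStoragePool(
> >>>>         zoneid='ZONE_ID', podid='POD_ID', clusterid='CLUSTER_ID',
> >>>>         name='primary-nfs-01',
> >>>>         url='nfs://nfs1.example.com/export/primary',
> >>>>         scope='CLUSTER',
> >>>>         tags='fast-nfs')
> >>>>
> >>>>     # A clustered FS (e.g. OCFS2 on iSCSI/FC) mounted at the same path
> >>>>     # on every KVM host is added the same way; only the URL changes to
> >>>>     # the SharedMountPoint form (the exact scheme below is an
> >>>>     # assumption to verify against your ACS version):
> >>>>     # url='SharedMountPoint:///mnt/ocfs2-primary'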
> >>>>
> >>>> There is an option for adding Fibre Channel primary storage directly,
> >>>> however, as far as I know, it only works for a specific storage
> >>>> vendor, and I doubt there are as many features for it as the other
> >>>> common options.
> >>>>
> >>>> I know that there exists a CLVM option when adding storage, but I
> >>>> have never tested it and never seen it being discussed in the
> >>>> community.
> >>>>
> >>>> I haven't seen many new features released that target RBD (Ceph) for a
> >>>> while, but you may create a CephFS with Ceph, which could be added to
> >>>> your environment as a shared mount point as well.
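> >>>>
> >>>> (Again as a rough sketch with the third-party 'cs' client: Ceph RBD is
> >>>> usually attached directly as its own pool type rather than as a shared
> >>>> mountpoint. The monitor address, pool name and cephx key below are
> >>>> placeholders, and the exact rbd:// URL form is an assumption to check
> >>>> against the docs for your version.)
> >>>>
> >>>>     # Sketch only: all Ceph details below are placeholders.
> >>>>     from cs import CloudStack
> >>>>
> >>>>     acs = CloudStack(endpoint='https://acs.example.com/client/api',
> >>>>                      key='API_KEY', secret='SECRET_KEY')
> >>>>
> >>>>     acs.createStoragePool(
> >>>>         zoneid='ZONE_ID',
> >>>>         name='primary-rbd-01',
> >>>>         # rbd://<cephx user>:<cephx key>@<monitor>/<pool>; the key
> >>>>         # encoding can differ per ACS version, so treat as an assumption.
> >>>>         url='rbd://cloudstack:[email protected]/cloudstack-pool',
> >>>>         scope='ZONE',
> >>>>         hypervisor='KVM')
> >>>>
> >>>>     # CephFS, by contrast, would be mounted on every KVM host and then
> >>>>     # added as a shared mountpoint pool, as described above.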
> >>>>
> >>>
> >>> RBD/Ceph is very stable inside CloudStack. Not much work is needed
> >>> there as the current code has been working for a long time.
> >>>
> >>>
> >>> Comparing NFS to Ceph is apples vs oranges. NFS is just a protocol,
> >>> whereas Ceph is a complete distributed storage environment with
> >>> scalability and failover built into the whole system.
> >>>
> >>>
> >>> Ceph is more than capable of being used as primary storage underneath
> >>> CloudStack, but it's simply not NFS, just as NFS is not Ceph.
> >>>
> >>>
> >>> Wido
> >>>
> >>>
> >>>> Best regards,
> >>>>
> >>>> João
> >>>>
> >>>> On 11/14/25 15:04, Paketix wrote:
> >>>>> I am new to CloudStack and could use some help/advice regarding
> >>>>> which technology to use to implement my shared storage (primary
> >>>>> storage).
> >>>>> Having some FibreChannel stuff in the lab, this could work well.
> >>>>> ... but I am concerned that this is not the direction CloudStack is
> >>>>> developing in.
> >>>>> So:
> >>>>> - NFS
> >>>>> - iSCSI
> >>>>> - Ceph
> >>>>> ... would be the choices regarding the docs.
> >>>>> Not sure if iSCSI would fit for shared storage, as I do not see it in
> >>>>> the list of protocols supported for primary storage in the GUI.
> >>>>> What is the most future-proof solution to choose for primary storage?
> >>>>> I want to stay on the main path CloudStack is taking, so I can use
> >>>>> new features coming out in the next months and not be blocked by
> >>>>> 'sorry, not supported for your protocol'.
> >>>>>
> >>>
> >>>
> >>>
> >>>
> >
>
