When using NFS, do you run into performance issues such as very high CPU on the
storage server? I believe this can happen because the filesystem work is done on
the storage side instead of on the compute node.
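For what it's worth, a quick way to check whether the NFS server itself is the
bottleneck is to watch its CPU and NFS operation counters while the load is
running. This is just a diagnostic sketch; it assumes a Linux NFS server with
the standard nfs-utils and sysstat packages installed:

```shell
# On the NFS server: per-device disk utilisation and wait times, once per second
iostat -xz 1

# Server-side NFS operation counters (breakdown by call type)
nfsstat -s

# See whether the kernel nfsd threads are the ones burning CPU --
# high %sys on these suggests the server is doing the filesystem work
top -b -n 1 | grep nfsd
```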

So I am considering iSCSI or local storage instead.

For iSCSI, I would prefer to run it on LVM, which I believe should give the best
performance, compared with local storage, which is file-based.
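As a rough sketch of what I mean (the portal address, IQN, and device names
below are just placeholders; this assumes open-iscsi and LVM2 are installed on
the KVM host):

```shell
# Discover and log in to the iSCSI target (placeholder portal and IQN)
iscsiadm -m discovery -t sendtargets -p 192.168.1.100
iscsiadm -m node -T iqn.2020-10.example:storage.lun1 -p 192.168.1.100 --login

# Put LVM directly on the exported LUN (assuming it appears as /dev/sdb)
pvcreate /dev/sdb
vgcreate cloudstack-vg /dev/sdb

# Each VM volume is then a raw logical volume, so guest I/O
# bypasses any host filesystem layer
lvcreate -L 50G -n vm-001-root cloudstack-vg
```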

But the issue I am facing with iSCSI is that a SharedMountPoint needs a
clustered file system; otherwise you can only set up one host per cluster.
Setting up a clustered file system is the problem here: GFS2 is no longer
supported on CentOS / Red Hat, and there is a bug in Ubuntu 18.





On Thu, Oct 8, 2020 at 6:54 PM Andrija Panic <andrija.pa...@gmail.com>
wrote:

> NFS is the rock-solid, and majority of users are using NFS, I can tell that
> for sure.
> Do understand there is some difference between cheap white-box NFS solution
> and a proprietary $$$ NFS solution, when it comes to performance.
>
> Some users will use Ceph, some local disks (this is all KVM so far)
> VMware users might be heavy on iSCSI datastores,
>
> And that is probably true for 99% of ACS users - rest might be
> experimenting with clustered solutions via OCFS/GFS2 (shared mountpoint) or
> Gluster etc - but that is all not really suitable for a serious production
> usage IMO (usually,but there might be exceptions to this).
>
> SolidFire is also a $$$ solution that works very well, depending on  your
> hypervisor (best integration so far I believe is with KVM in ACS).
>
> Hope that helps
>
> On Mon, 14 Sep 2020 at 04:50, Hean Seng <heans...@gmail.com> wrote:
>
> > HI
> >
> > I just wonder what storage you all use for CloudStack ?  And the number
> of
> > VM  able to get  spinned up for storage you use ?
> >
> > Can anybody share the experience ?
> >
> >
> >
> > --
> > Regards,
> > Hean Seng
> >
>
>
> --
>
> Andrija Panić
>


-- 
Regards,
Hean Seng
