Free advice - try to avoid Clustered File Systems whenever possible, due to complexity and sometimes due to an utter lack of reliability (outside of ACS, I had an awful experience with GFS2, set up by Red Hat themselves for a former customer) - so Shared Mount Point is best skipped, if possible.
Local disks - there are some downsides to VM live migration, so make sure to understand the limits and options. iSCSI = same LUN attached to all KVM hosts = you again need a Clustered File System, and that will again be consumed as a Shared Mount Point. For NFS, you are on your own when it comes to performance and tuning - that is outside of ACS - but there is usually no high CPU usage on a moderately used NFS server.

Best,

On Thu, 8 Oct 2020 at 18:45, Hean Seng <heans...@gmail.com> wrote:

> For using NFS, do you have performance issues like the storage CPU getting
> very high? I believe this could be caused by the filesystem being created on
> the storage instead of the compute node.
>
> Thus I am thinking of iSCSI or local storage.
>
> For iSCSI, I would prefer to run it on LVM, which I believe should give the
> best performance compared to local storage, which is file-based.
>
> But the issue facing iSCSI is that Shared Mount Point needs a Clustered File
> System, otherwise you can only set up one host per cluster. Setting up a
> clustered file system is the issue here: GFS2 is no longer supported on
> CentOS / Red Hat, and there is a bug in Ubuntu 18.
>
> On Thu, Oct 8, 2020 at 6:54 PM Andrija Panic <andrija.pa...@gmail.com>
> wrote:
>
> > NFS is rock-solid, and the majority of users are using NFS, I can tell
> > you that for sure.
> > Do understand that there is some difference between a cheap white-box NFS
> > solution and a proprietary $$$ NFS solution when it comes to performance.
> >
> > Some users will use Ceph, some local disks (this is all KVM so far).
> > VMware users might be heavy on iSCSI datastores.
> >
> > And that is probably true for 99% of ACS users - the rest might be
> > experimenting with clustered solutions via OCFS/GFS2 (Shared Mount Point)
> > or Gluster etc. - but that is all not really suitable for serious
> > production usage IMO (usually, but there might be exceptions to this).
> >
> > SolidFire is also a $$$ solution that works very well, depending on your
> > hypervisor (the best integration so far, I believe, is with KVM in ACS).
> >
> > Hope that helps
> >
> > On Mon, 14 Sep 2020 at 04:50, Hean Seng <heans...@gmail.com> wrote:
> >
> > > Hi
> > >
> > > I just wonder what storage you all use for CloudStack? And how many VMs
> > > are you able to spin up on the storage you use?
> > >
> > > Can anybody share their experience?
> > >
> > > --
> > > Regards,
> > > Hean Seng
> >
> >
> > --
> > Andrija Panić
>
>
> --
> Regards,
> Hean Seng

--
Andrija Panić
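[Editor's note] A minimal sketch of the kind of white-box NFS setup the thread discusses, for a KVM host using NFS primary storage. The export path `/export/primary`, the network range, and the `rsize`/`wsize` values are illustrative assumptions, not ACS defaults - this is the tuning territory Andrija notes is "outside of ACS":

```shell
# --- On the NFS server (assumed Linux, nfs-kernel-server installed) ---
# Export a directory for CloudStack primary storage. no_root_squash is
# commonly needed because the KVM hosts access the share as root; the
# subnet 10.0.0.0/24 is a placeholder for your management/storage network.
sudo mkdir -p /export/primary
echo '/export/primary 10.0.0.0/24(rw,async,no_root_squash,no_subtree_check)' \
    | sudo tee -a /etc/exports
sudo exportfs -ra

# --- On each KVM host ---
# Larger rsize/wsize reduce per-operation overhead for large sequential I/O;
# 1 MiB is a common starting point to benchmark from, not a universal rule.
# "hard" retries indefinitely on server outage rather than returning I/O errors.
sudo mkdir -p /mnt/primary
sudo mount -t nfs -o vers=4.1,rsize=1048576,wsize=1048576,hard \
    nfs-server:/export/primary /mnt/primary
```

Note that when the pool is added through the ACS UI/API, the management server drives the mounting on the hosts itself; the manual mount above is only for validating the export and measuring baseline throughput before handing the share to CloudStack.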