Local disk is not possible for HA.

If you can accept NFS, then HA is not an issue.
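
If it helps, here is a rough sketch (using the third-party Python "cs"
CloudStack client) of registering an NFS share as primary storage and
creating an HA-enabled compute offering through the API. The endpoint, keys,
zone/pod/cluster IDs and the nfs:// path are placeholders, so treat it as an
illustration rather than a recipe:

  # pip install cs   (community CloudStack API client)
  from cs import CloudStack

  cs = CloudStack(endpoint='http://mgmt-server:8080/client/api',  # placeholder
                  key='YOUR_API_KEY',
                  secret='YOUR_SECRET_KEY')

  # Register an NFS export as cluster-scoped primary storage.
  cs.createStoragePool(
      name='nfs-primary-1',
      url='nfs://nfs-server.example.com/export/primary',  # placeholder export
      zoneid='ZONE_ID',
      podid='POD_ID',
      clusterid='CLUSTER_ID',
      scope='CLUSTER')

  # Compute offering with HA enabled, so VMs on shared (NFS) primary storage
  # can be restarted on another host if their host fails.
  cs.createServiceOffering(
      name='2vcpu-4gb-ha',
      displaytext='2 vCPU / 4 GB, HA enabled',
      cpunumber=2,
      cpuspeed=2000,
      memory=4096,
      offerha=True)

With a local-storage offering the offerha flag does not help, since the
volume lives on the failed host - which is exactly the limitation above.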

On Mon, Oct 12, 2020 at 2:42 PM Pratik Chandrakar <
chandrakarpra...@gmail.com> wrote:

> Hi Andrija,
> I have a similar requirement to Hean's. So what is your recommendation for
> HA with NFS / local disk?
>
>
> On Sat, Oct 10, 2020 at 8:55 AM Hean Seng <heans...@gmail.com> wrote:
>
> > Hi Andrija
> >
> > I am planning on high-end hypervisors: an AMD EPYC 7742 (2nd gen) CPU with
> > 64 cores and 128 threads, 384 GB RAM, etc., and multiple bonded 10G NICs
> > or a 40G NIC for the storage network.
> >
> > On this kind of server I can probably get up to 200 VMs per hypervisor.
> > I'm just afraid that NFS will become a bottleneck if the storage server is
> > running on lower-end hardware.
> >
> > For iSCSI, CPU on the storage server is normally not an issue, since the
> > LUN acts almost like an external hard disk, while NFS has to process the
> > file system on the storage side.
> >
> > I have read through many articles, and they mention that GFS2 has many
> > issues. I initially planned to run OCFS2, but it is no longer supported on
> > Red Hat, and there is a bug on Ubuntu 18, not sure if it has been solved.
> > OCFS2 should be a lot more stable and have fewer issues compared to GFS2.
> >
> > This is the OCFS2 bug on Ubuntu, and I am facing exactly the same issue:
> > https://bugs.launchpad.net/ubuntu/+source/linux-signed/+bug/1895010
> >
> > On Fri, Oct 9, 2020 at 6:41 PM Andrija Panic <andrija.pa...@gmail.com>
> > wrote:
> >
> > > Free advice - always try to avoid clustered file systems, due to
> > > complexity and sometimes due to the utter lack of reliability (outside
> > > of ACS, I had an awful experience with GFS2, set up by Red Hat
> > > themselves for a former customer), etc. - so SharedMountPoint is to be
> > > skipped, if possible.
> > >
> > > Local disks - there are some downsides for VM live migration - so make
> > > sure you understand the limits and options.
> > > iSCSI = the same LUN attached to all KVM hosts = you again need a
> > > clustered file system, and that will, again, be consumed as
> > > SharedMountPoint.
> > >
> > > For NFS, you are on your own when it comes to performance and tuning -
> > > this is outside of ACS - but there is usually no high CPU usage on a
> > > moderately used NFS server.
> > >
> > > Best,
> > >
> > > On Thu, 8 Oct 2020 at 18:45, Hean Seng <heans...@gmail.com> wrote:
> > >
> > > > When using NFS, do you have performance issues such as the storage CPU
> > > > getting very high? I believe this could be because the file system is
> > > > handled on the storage server instead of the compute node.
> > > >
> > > > Thus I am thinking of iSCSI or local storage.
> > > >
> > > > For iSCSI, I prefer running on LVM, which I believe will give the best
> > > > performance compared to local storage, which is file-based.
> > > >
> > > > But the issue I'm facing with iSCSI is that SharedMountPoint needs a
> > > > clustered file system; otherwise you can only set up one host per
> > > > cluster. Setting up a clustered file system is the issue here: GFS2 is
> > > > no longer supported on CentOS / Red Hat, and there is a bug in
> > > > Ubuntu 18.
> > > >
> > > > On Thu, Oct 8, 2020 at 6:54 PM Andrija Panic <
> andrija.pa...@gmail.com>
> > > > wrote:
> > > >
> > > > > NFS is rock-solid, and the majority of users are using NFS, I can
> > > > > tell you that for sure.
> > > > > Do understand that there is some difference between a cheap white-box
> > > > > NFS solution and a proprietary $$$ NFS solution when it comes to
> > > > > performance.
> > > > >
> > > > > Some users will use Ceph, some local disks (this is all KVM so far);
> > > > > VMware users might be heavy on iSCSI datastores.
> > > > >
> > > > > And that is probably true for 99% of ACS users - the rest might be
> > > > > experimenting with clustered solutions via OCFS2/GFS2
> > > > > (SharedMountPoint) or Gluster etc. - but none of that is really
> > > > > suitable for serious production usage IMO (usually, but there might
> > > > > be exceptions).
> > > > >
> > > > > SolidFire is also a $$$ solution that works very well, depending on
> > > > > your hypervisor (the best integration so far, I believe, is with KVM
> > > > > in ACS).
> > > > >
> > > > > Hope that helps
> > > > >
> > > > > On Mon, 14 Sep 2020 at 04:50, Hean Seng <heans...@gmail.com>
> wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I just wonder what storage you all use for CloudStack, and how many
> > > > > > VMs you are able to spin up on the storage you use?
> > > > > >
> > > > > > Can anybody share their experience?
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Regards,
> > > > > > Hean Seng
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > >
> > > > > Andrija Panić
> > > > >
> > > >
> > > >
> > > > --
> > > > Regards,
> > > > Hean Seng
> > > >
> > >
> > >
> > > --
> > >
> > > Andrija Panić
> > >
> >
> >
> > --
> > Regards,
> > Hean Seng
> >
>
>
> --
> Regards,
> Pratik Chandrakar
> Scientist-C
> NIC - Chhattisgarh State Centre
> Hall no. AD2-14, 2nd Floor
> Mahanadi Bhavan, Mantralaya, New Raipur
>


-- 
Regards,
Hean Seng
