Hi Mauro Ferrano,

Is the high CPU on the storage server or on the compute hypervisor?

Ceph requires quite a large number of servers in order to deliver, and it
is not able to take advantage of a RAID card for performance. Especially
when running on pure SSD, Ceph does not seem able to deliver the expected
results and IO.



On Wed, Oct 14, 2020 at 9:58 AM Ivan Kudryavtsev <i...@bw-sw.com> wrote:

> Hi, the hypervisor restrictions configured in a service offering allow
> limiting IOPS and bytes/sec for NFS as well as for other storage types,
> because they are enforced by qemu.
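>
> As an illustration, these limits can also be inspected or set by hand on
> a KVM host with virsh (a sketch only; the guest name, device and values
> below are hypothetical):
>
>   # cap disk vda of guest "i-2-10-VM" at 1000 IOPS and 100 MB/s
>   virsh blkdeviotune i-2-10-VM vda --total-iops-sec 1000 \
>       --total-bytes-sec 104857600 --live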
>
> Wed, 14 Oct 2020, 01:53 Hean Seng <heans...@gmail.com>:
>
> > Hi
> >
> > Does anybody know whether the NFS implementation of primary storage can
> > support QoS for IOPS in a service offering?
> >
> > On Mon, Oct 12, 2020 at 8:20 PM Pratik Chandrakar <
> > chandrakarpra...@gmail.com> wrote:
> >
> > > I was asking about the storage layer rather than the VM.
> > >
> > > On Mon, Oct 12, 2020 at 12:36 PM Hean Seng <heans...@gmail.com> wrote:
> > >
> > > > Local disk is not possible for HA.
> > > >
> > > > If you can accept NFS, then HA is not an issue.
> > > >
> > > > On Mon, Oct 12, 2020 at 2:42 PM Pratik Chandrakar <
> > > > chandrakarpra...@gmail.com> wrote:
> > > >
> > > > > Hi Andrija,
> > > > > I have a similar requirement to Hean's. So what is your
> > > > > recommendation for HA with NFS / local disk?
> > > > >
> > > > >
> > > > > On Sat, Oct 10, 2020 at 8:55 AM Hean Seng <heans...@gmail.com> wrote:
> > > > >
> > > > > > Hi Andrija
> > > > > >
> > > > > > I am planning a high-end hypervisor: a second-generation AMD EPYC
> > > > > > 7742 CPU, which gives 64 cores and 128 threads, 384 GB of RAM,
> > > > > > etc., and multiple bonded 10G cards or a 40G card for the storage
> > > > > > network.
> > > > > >
> > > > > > On this kind of server, I would probably get up to 200 VMs per
> > > > > > hypervisor. I'm just afraid that NFS will create a bottleneck if
> > > > > > the storage server is running lower-end hardware.
> > > > > >
> > > > > > For iSCSI, CPU on the storage server is normally not an issue,
> > > > > > since the LUN acts almost like an external hard disk, while NFS
> > > > > > needs to process the file system on the storage server.
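> > > > > >
> > > > > > A quick way to see that difference on a KVM host (the target IP,
> > > > > > IQN and export path below are made up, just a sketch):
> > > > > >
> > > > > >   # iSCSI: the LUN shows up as a local block device, and the
> > > > > >   # hypervisor runs the filesystem itself
> > > > > >   iscsiadm -m discovery -t sendtargets -p 10.0.0.10
> > > > > >   iscsiadm -m node -T iqn.2020-10.com.example:lun1 -p 10.0.0.10 --login
> > > > > >
> > > > > >   # NFS: the storage server runs the filesystem, and the hypervisor
> > > > > >   # only mounts it over the network
> > > > > >   mount -t nfs 10.0.0.10:/export/primary /mnt/primary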
> > > > > >
> > > > > > I have read through many articles, and they mention that GFS2 has
> > > > > > many issues. I initially planned to run OCFS2, but it is no longer
> > > > > > supported on Red Hat, and there is a bug on Ubuntu 18, not sure if
> > > > > > it has been solved. OCFS2 should be a lot more stable and have
> > > > > > fewer issues compared to GFS2.
> > > > > >
> > > > > > This is the OCFS2-on-Ubuntu bug; I am facing exactly the same one:
> > > > > > https://bugs.launchpad.net/ubuntu/+source/linux-signed/+bug/1895010
> > > > > >
> > > > > >
> > > > > > On Fri, Oct 9, 2020 at 6:41 PM Andrija Panic <andrija.pa...@gmail.com> wrote:
> > > > > >
> > > > > > > Free advice - always try to avoid clustered file systems, due to
> > > > > > > complexity, and sometimes due to the utter lack of reliability (I
> > > > > > > had, outside of ACS, an awful experience with GFS2, set up by Red
> > > > > > > Hat themselves for a former customer), etc. - so Shared Mount
> > > > > > > Point is to be skipped, if possible.
> > > > > > >
> > > > > > > Local disks - there are some downsides to VM live migration - so
> > > > > > > make sure you understand the limits and options.
> > > > > > > iSCSI = the same LUN attached to all KVM hosts = you again need a
> > > > > > > clustered file system, and that will, again, be consumed as a
> > > > > > > Shared Mount Point.
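> > > > > > >
> > > > > > > That is, every KVM host logs in to the same LUN and mounts the
> > > > > > > clustered filesystem at the same path, and that path is what gets
> > > > > > > added to ACS. A sketch (device and mount point are made up, and a
> > > > > > > working cluster stack for GFS2 is assumed):
> > > > > > >
> > > > > > >   # on every host in the cluster: same device, same mount point
> > > > > > >   mount -t gfs2 /dev/mapper/shared-lun /mnt/sharedmp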
> > > > > > >
> > > > > > > For NFS, you are on your own when it comes to performance and
> > > > > > > tuning - this is outside of ACS - but there is usually no high
> > > > > > > CPU usage on a moderately used NFS server.
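> > > > > > >
> > > > > > > The usual knobs live outside ACS, e.g. the server-side export and
> > > > > > > the client-side mount options. A sketch with made-up addresses,
> > > > > > > paths and values, only as a starting point for benchmarking:
> > > > > > >
> > > > > > >   # /etc/exports on the NFS server ("async" is faster than "sync"
> > > > > > >   # but less safe on power loss)
> > > > > > >   /export/primary 10.0.0.0/24(rw,sync,no_root_squash)
> > > > > > >
> > > > > > >   # client-side mount options worth testing
> > > > > > >   mount -t nfs -o rsize=1048576,wsize=1048576,hard \
> > > > > > >       10.0.0.10:/export/primary /mnt/primary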
> > > > > > >
> > > > > > > Best,
> > > > > > >
> > > > > > > On Thu, 8 Oct 2020 at 18:45, Hean Seng <heans...@gmail.com> wrote:
> > > > > > >
> > > > > > > > When using NFS, do you have performance issues such as the
> > > > > > > > storage CPU getting very high? I believe this could be because
> > > > > > > > the filesystem is handled at the storage server instead of the
> > > > > > > > compute node.
> > > > > > > >
> > > > > > > > Thus I am thinking of iSCSI or local storage.
> > > > > > > >
> > > > > > > > For iSCSI, I prefer running on LVM, which I believe gives the
> > > > > > > > best performance, compared to local storage, which is
> > > > > > > > file-based.
> > > > > > > >
> > > > > > > > But the issue I am facing with iSCSI is that a shared mount
> > > > > > > > point needs a clustered file system; otherwise you can only set
> > > > > > > > up one host per cluster. Setting up a clustered file system is
> > > > > > > > the issue here: GFS2 is no longer supported on CentOS / Red
> > > > > > > > Hat, and there is a bug in Ubuntu 18.
> > > > > > > >
> > > > > > > > On Thu, Oct 8, 2020 at 6:54 PM Andrija Panic <andrija.pa...@gmail.com> wrote:
> > > > > > > >
> > > > > > > > > NFS is rock-solid, and the majority of users are using NFS;
> > > > > > > > > I can tell you that for sure.
> > > > > > > > > Do understand that there is some difference between a cheap
> > > > > > > > > white-box NFS solution and a proprietary $$$ NFS solution
> > > > > > > > > when it comes to performance.
> > > > > > > > >
> > > > > > > > > Some users will use Ceph, some local disks (this is all KVM
> > > > > > > > > so far), and VMware users might be heavy on iSCSI datastores.
> > > > > > > > >
> > > > > > > > > That probably covers 99% of ACS users - the rest might be
> > > > > > > > > experimenting with clustered solutions via OCFS2/GFS2 (Shared
> > > > > > > > > Mount Point) or Gluster etc., but that is all not really
> > > > > > > > > suitable for serious production usage IMO (usually, but there
> > > > > > > > > might be exceptions to this).
> > > > > > > > >
> > > > > > > > > SolidFire is also a $$$ solution that works very well,
> > > > > > > > > depending on your hypervisor (the best integration so far, I
> > > > > > > > > believe, is with KVM in ACS).
> > > > > > > > >
> > > > > > > > > Hope that helps
> > > > > > > > >
> > > > > > > > > On Mon, 14 Sep 2020 at 04:50, Hean Seng <heans...@gmail.com> wrote:
> > > > > > > > >
> > > > > > > > > > Hi
> > > > > > > > > >
> > > > > > > > > > I just wonder what storage you all use for CloudStack, and
> > > > > > > > > > how many VMs you are able to spin up on the storage you
> > > > > > > > > > use?
> > > > > > > > > >
> > > > > > > > > > Can anybody share your experience?
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > > Regards,
> > > > > > > > > > Hean Seng
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > --
> > > > > > > > >
> > > > > > > > > Andrija Panić
> > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > > Regards,
> > > > > > > > Hean Seng
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > >
> > > > > > > Andrija Panić
> > > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Regards,
> > > > > > Hean Seng
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > *Regards,*
> > > > > *Pratik Chandrakar*
> > > > > Scientist-C
> > > > > NIC - Chhattisgarh State Centre
> > > > > *Hall no.-AD2-14 ,* *2nd Floor *
> > > > > Mahanadi Bhavan , Mantralaya , New Raipur
> > > > >
> > > >
> > > >
> > > > --
> > > > Regards,
> > > > Hean Seng
> > > >
> > >
> > >
> > > --
> > > *Regards,*
> > > *Pratik Chandrakar*
> > >
> >
> >
> > --
> > Regards,
> > Hean Seng
> >
>


-- 
Regards,
Hean Seng
