Sorry, I meant SSDs (solid-state drives).

Thanks,
Gagan

On Wed, Sep 14, 2022 at 12:49 PM Janne Johansson <icepic...@gmail.com>
wrote:

> Den ons 14 sep. 2022 kl 08:54 skrev gagan tiwari
> <gagan.tiw...@mathisys-india.com>:
> > Hi Guys,
> >                 I am new to Ceph and storage. We have a requirement to
> > manage around 40T of data, which will be accessed by around 100 clients,
> > all running Rocky Linux 9.
> >
> > We have an HP storage server with 12 SDD of 5T each and have set up
> > hardware RAID6 on these disks.
>
> You have only a single machine?
> If so, run zfs on it and export the storage over NFS.
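>
> A rough sketch of what that could look like (pool name, device names
> and options are only examples, not a tested recipe): break up the
> hardware RAID so the OS sees the 12 disks directly, then something
> like
>
>   # double-parity pool built straight on the raw disks (sdb..sdm assumed)
>   zpool create -o ashift=12 tank raidz2 /dev/sd[b-m]
>   zfs create tank/data
>   # let zfs manage the NFS export (or add the mount to /etc/exports)
>   zfs set sharenfs=on tank/data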
>
> > The HP storage server has 64G of RAM and 18 cores.
> >
> > So, please advise how I should go about setting up Ceph on it to get
> > the best read performance; we need the fastest reads we can get.
>
> With NFSv4.x you can have local caching in the NFS client, which might
> help a lot with read performance if those 100 clients also have local
> drives.
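>
> On the Rocky Linux 9 clients that would be roughly (package name and
> mount options from memory, so do verify them):
>
>   dnf install cachefilesd             # FS-Cache backend for the client
>   systemctl enable --now cachefilesd
>   # "fsc" asks the NFS client to cache file data on a local disk
>   mount -t nfs -o vers=4.2,fsc server:/tank/data /mnt/data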
>
> The reason I am not advocating ceph in this case is that ceph is built
> to have many servers feed data to many clients (or many processes
> doing separate reads). You seem to have a single-server setup, and in
> that case the overhead of the ceph protocol will lower performance
> compared to "simpler" solutions like NFS, which are not designed to
> scale the way ceph is.
>
> A smaller point is that for both zfs and ceph it is not advisable to
> first RAID the separate drives and then present the single volume to
> the filesystem/network; rather, give zfs/ceph each individual disk and
> let the redundancy be handled at that higher level. But compared to
> the "do I have one server or many servers to serve file IO" question,
> it is a small thing.
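>
> (If you end up on ceph anyway, the same idea applies: remove the RAID6
> volume, put the controller in JBOD/passthrough mode and create one OSD
> per raw disk, roughly
>
>   ceph-volume lvm create --data /dev/sdb
>   ceph-volume lvm create --data /dev/sdc
>   # ...and so on for the remaining disks
>
> with the device names again just being examples.)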
>
> --
> May the most significant bit of your life be positive.
>
