Hello, thanks for the answer.

But is there any hard-coded limit, like in ZFS?
Maybe a limit on the maximum number of files a CephFS can hold?

All the best

Arnaud
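
As a side note, the capacity arithmetic in the quoted reply below can be sketched quickly (assuming 20 TB drives and decimal units, as disk vendors count; the 3x replication factor is just Ceph's common default, not something stated in the thread):

```python
# Rough sketch: how many 20 TB OSDs a 1 EB raw capacity would need.
EXABYTE = 10**18           # 1 EB in bytes (decimal units)
HDD = 20 * 10**12          # one 20 TB drive per OSD

osds_raw = EXABYTE // HDD  # without replication or erasure coding
print(osds_raw)            # 50000

replicated = osds_raw * 3  # with a typical 3x replication factor
print(replicated)          # 150000
```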

On Mon, 20 Jun 2022 at 10:18, Serkan Çoban <cobanser...@gmail.com> wrote:

> Currently the biggest HDD is 20 TB, so 1 exabyte means a 50,000-OSD
> cluster (without replication or EC).
> AFAIK CERN did some tests using 5,000 OSDs; I don't know of any clusters
> larger than CERN's.
> So I am not saying it is impossible, but it is very unlikely you can grow
> a single Ceph cluster to that size.
> Maybe you should look for alternatives, like HDFS, which I have
> seen/worked with at more than 50,000 HDDs without problems.
>
> On Mon, Jun 20, 2022 at 10:46 AM Arnaud M <arnaud.meauzo...@gmail.com>
> wrote:
> >
> > Hello to everyone
> >
> > I have looked on the internet but couldn't find an answer.
> > Do you know the maximum size of a Ceph filesystem? Not the max size of a
> > single file, but the limit of the whole filesystem?
> >
> > For example, a quick search on ZFS on Google outputs:
> > A ZFS file system can store up to *256 quadrillion zettabytes* (ZB).
> >
> > I would like to have the same answer for CephFS.
> >
> > And if there is a limit, where is this limit defined? Is it hard-coded
> > or is it configurable?
> >
> > Let's say someone wants to scale a CephFS up to an exabyte: would it be
> > completely foolish, or would the system, given enough MDSs, servers, and
> > everything else needed, be usable?
> >
> > Is there any other limit to a Ceph filesystem?
> >
> > All the best
> >
> > Arnaud
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
>
