We are using Ceph for RBD only.
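Since we use Ceph for RBD only, RBD namespaces (available since Nautilus) look like a fit for per-project isolation inside one shared pool instead of one pool per project. A rough sketch of the idea (the pool name `rbd` and the project/client names are just examples, not something from our cluster):

```shell
# One shared pool instead of 100+ per-project pools
# (pg_num 128 to start; the autoscaler can adjust it later)
ceph osd pool create rbd 128
ceph osd pool application enable rbd rbd
rbd pool init rbd

# Enable the PG autoscaler on the pool (Nautilus 14.x)
ceph mgr module enable pg_autoscaler
ceph osd pool set rbd pg_autoscale_mode on

# One namespace per project instead of one pool per project
rbd namespace create rbd/project-a
rbd namespace ls rbd

# Images are created inside the project's namespace
rbd create rbd/project-a/vm-disk-1 --size 10G

# Restrict a client to its own namespace via cephx caps
ceph auth get-or-create client.project-a \
    mon 'profile rbd' \
    osd 'profile rbd pool=rbd namespace=project-a'
```

If I understand correctly, this would avoid the rebalancing that each new pool's PG creation triggers, since namespaces are purely logical and add no PGs of their own.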

On Wed, Jul 24, 2019 at 12:55 PM Wido den Hollander <w...@42on.com> wrote:

>
>
> On 7/16/19 6:53 PM, M Ranga Swami Reddy wrote:
> > Thanks for your reply.
> > Here, new pool creations and PG autoscaling may cause rebalancing, which
> > impacts the Ceph cluster's performance.
> >
> > Please share details on namespaces, i.e. how to use them, etc.
> >
>
> Would it be RBD, RADOS, or CephFS? What would you be using on top of Ceph?
>
> Wido
>
> >
> >
> > On Tue, 16 Jul, 2019, 9:30 PM Paul Emmerich, <paul.emmer...@croit.io>
> > wrote:
> >
> >     100+ pools work fine if you can get the PG count right (auto-scaler
> >     helps, there are some options that you'll need to tune for small-ish
> >     pools).
> >
> >     But it's not a "nice" setup. Have you considered using namespaces
> >     instead?
> >
> >
> >     Paul
> >
> >     --
> >     Paul Emmerich
> >
> >     Looking for help with your Ceph cluster? Contact us at https://croit.io
> >
> >     croit GmbH
> >     Freseniusstr. 31h
> >     81247 München
> >     www.croit.io
> >     Tel: +49 89 1896585 90
> >
> >
> >     On Tue, Jul 16, 2019 at 4:17 PM M Ranga Swami Reddy
> >     <swamire...@gmail.com> wrote:
> >
> >         Hello - I have created a 10-node Ceph cluster running version 14.x.
> >         Can you please confirm the below:
> >
> >         Q1 - Can I create 100+ pools (or more) on the cluster? (The
> >         reason: creating a pool per project.) Is there any limitation
> >         on pool creation?
> >
> >         Q2 - In the above pools I would start with pg_num = 128 and
> >         enable the PG autoscaler, so that Ceph itself increases pg_num
> >         based on the data in each pool.
> >
> >         Let me know about any limitations of the above and any foreseen
> >         issues.
> >
> >         Thanks
> >         Swami
> >         _______________________________________________
> >         ceph-users mailing list
> >         ceph-users@lists.ceph.com
> >         http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> >
> >
>
