Re: [ceph-users] pools limit

2019-07-26 Thread M Ranga Swami Reddy
We are using Ceph for RBD only.
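
Since this is RBD only, the namespaces suggested earlier in the thread map to
RBD namespaces (available since Nautilus): many projects share one pool but
keep their images separated. Below is a minimal sketch using the
python-rados/python-rbd bindings - the pool, namespace and image names are
placeholders, and the namespace_* calls assume the Nautilus python-rbd API:

    # Minimal sketch: one shared pool, one RBD namespace per project.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')        # one shared pool
        rbd_inst = rbd.RBD()

        # Create a namespace for the project instead of a dedicated pool.
        if not rbd_inst.namespace_exists(ioctx, 'project-a'):
            rbd_inst.namespace_create(ioctx, 'project-a')

        # Images are created inside the namespace by pointing the ioctx at it.
        ioctx.set_namespace('project-a')
        rbd_inst.create(ioctx, 'vm-disk-1', 10 * 1024**3)   # 10 GiB image
        print(rbd_inst.list(ioctx))              # lists only project-a images
        ioctx.close()
    finally:
        cluster.shutdown()

cephx caps can then be restricted to the pool/namespace pair, so each project
only sees its own images.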

On Wed, Jul 24, 2019 at 12:55 PM Wido den Hollander  wrote:

> Would it be RBD, RADOS, or CephFS? What would you be using on top of Ceph?
>
> Wido
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] pools limit

2019-07-24 Thread Wido den Hollander


On 7/16/19 6:53 PM, M Ranga Swami Reddy wrote:
> Thanks for your reply.
> Here, new pool creations and PG autoscaling may cause rebalancing, which
> impacts the Ceph cluster's performance.
> 
> Please share details on namespaces, e.g. how to use them.
> 

Would it be RBD, RADOS, or CephFS? What would you be using on top of Ceph?

Wido

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] pools limit

2019-07-18 Thread M Ranga Swami Reddy
Hi - I can start with 64 PGs per pool, as I have 10 nodes with 18 OSDs
per node.
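
As a quick sanity check of that number (back-of-the-envelope only, assuming
replication=3, a target of roughly 100 PG replicas per OSD, and about 100
pools of similar size):

    # PG budget for 10 nodes x 18 OSDs, replication=3, ~100 PG replicas/OSD.
    nodes, osds_per_node = 10, 18
    osds = nodes * osds_per_node                       # 180 OSDs
    replication = 3
    target_per_osd = 100                               # rough upper target

    pg_replica_budget = osds * target_per_osd          # 18,000 PG replicas
    pg_budget = pg_replica_budget // replication       # 6,000 PGs for all pools
    per_pool = pg_budget // 100                        # 60 -> round to 64
    print(osds, pg_budget, per_pool)                   # 180 6000 60

With pg_num = 64 per pool that comes to roughly 107 PG replicas per OSD
(100 * 64 * 3 / 180), i.e. right around the usual target, so 64 looks like a
reasonable starting point if all 100 pools end up similar in size.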

On Tue, Jul 16, 2019 at 9:01 PM Janne Johansson  wrote:

> 12,800 PGs in total (100 pools x 128) might be a bit much, depending on how
> many OSDs you have in total for these pools. Ceph aims for something like
> ~100 PGs per OSD at most, and with 12,800 PGs times 3 for replication=3 you
> would need quite a few OSDs per host. I suspect the autoscaler would end up
> scaling your pools downwards instead of upwards. There is nothing wrong with
> starting with pg_num = 8 or so and letting the autoscaler increase it for the
> pools that actually do get a lot of data.
>
> 100 pools * replication 3 * pg_num 8 => 2,400 PG replicas, which is fine for
> 24 OSDs, but you would need more OSDs as some of those pools grow in
> data/objects.
>
> 100 pools * replication 3 * pg_num 128 => 38,400 PG replicas, which requires
> about 384 OSDs, or close to 40 OSDs per host in your setup. That might become
> a limiting factor in itself - sticking so many OSDs in a single box.
>
> --
> May the most significant bit of your life be positive.
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] pools limit

2019-07-16 Thread M Ranga Swami Reddy
Thanks for your reply.
Here, new pool creations and PG autoscaling may cause rebalancing, which
impacts the Ceph cluster's performance.

Please share details on namespaces, e.g. how to use them.
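
One way to limit that rebalancing is to tell the autoscaler up front roughly
how large each pool is expected to become, so it picks a sensible pg_num once
rather than splitting PGs later. A hedged sketch via mon commands through the
python-rados binding - the pool name and ratio are placeholders, and the
pg_autoscale_mode / target_size_ratio settings assume the Nautilus autoscaler:

    # Sketch: enable the PG autoscaler on a pool and hint its expected share
    # of the cluster's data so pg_num is sized up front.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    def pool_set(pool, var, val):
        # Equivalent of 'ceph osd pool set <pool> <var> <val>' via the mons.
        cmd = json.dumps({'prefix': 'osd pool set', 'pool': pool,
                          'var': var, 'val': str(val)})
        ret, out, err = cluster.mon_command(cmd, b'')
        if ret != 0:
            raise RuntimeError(err)

    pool_set('project-a', 'pg_autoscale_mode', 'on')
    pool_set('project-a', 'target_size_ratio', 0.05)   # expect ~5% of the data
    cluster.shutdown()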



On Tue, 16 Jul 2019, 9:30 PM Paul Emmerich wrote:

> 100+ pools work fine if you can get the PG count right (auto-scaler helps,
> there are some options that you'll need to tune for small-ish pools).
>
> But it's not a "nice" setup. Have you considered using namespaces instead?
>
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] pools limit

2019-07-16 Thread Paul Emmerich
100+ pools work fine if you can get the PG count right (auto-scaler helps,
there are some options that you'll need to tune for small-ish pools).

But it's not a "nice" setup. Have you considered using namespaces instead?
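
For raw RADOS users the same idea exists one level down: every librados
operation can be scoped to an object namespace inside a single pool. A minimal
sketch with the python-rados bindings - the pool and object names are
placeholders:

    # Sketch: RADOS object namespaces - several tenants sharing one pool.
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('data')

    ioctx.set_namespace('project-a')        # subsequent ops stay in project-a
    ioctx.write_full('greeting', b'hello from project-a')

    ioctx.set_namespace('project-b')        # different tenant, same pool
    ioctx.write_full('greeting', b'hello from project-b')   # no collision

    ioctx.set_namespace('project-a')
    print(ioctx.read('greeting'))           # b'hello from project-a'

    ioctx.close()
    cluster.shutdown()

cephx caps can likewise be limited to a pool/namespace pair so tenants cannot
read each other's objects; for RBD images specifically, Nautilus adds RBD
namespaces on top of this.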


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Tue, Jul 16, 2019 at 4:17 PM M Ranga Swami Reddy wrote:

> Hello - I have created a 10-node Ceph cluster running version 14.x
> (Nautilus). Can you please confirm the following:
>
> Q1 - Can I create 100+ pools on the cluster? (The reason is that we create
> one pool per project.) Is there any limit on pool creation?
>
> Q2 - In each pool I would start with pg_num = 128 and enable the PG
> autoscaler, so that Ceph itself increases pg_num based on the data in the
> pool.
>
> Let me know if there are any limitations or foreseeable issues with the above.
>
> Thanks
> Swami
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] pools limit

2019-07-16 Thread Janne Johansson
On Tue, 16 July 2019 at 16:16, M Ranga Swami Reddy <swamire...@gmail.com> wrote:

> Hello - I have created a 10-node Ceph cluster running version 14.x
> (Nautilus). Can you please confirm the following:
> Q1 - Can I create 100+ pools on the cluster? (The reason is that we create
> one pool per project.) Is there any limit on pool creation?
>
> Q2 - In each pool I would start with pg_num = 128 and enable the PG
> autoscaler, so that Ceph itself increases pg_num based on the data in the
> pool.
>
>
12,800 PGs in total (100 pools x 128) might be a bit much, depending on how
many OSDs you have in total for these pools. Ceph aims for something like
~100 PGs per OSD at most, and with 12,800 PGs times 3 for replication=3 you
would need quite a few OSDs per host. I suspect the autoscaler would end up
scaling your pools downwards instead of upwards. There is nothing wrong with
starting with pg_num = 8 or so and letting the autoscaler increase it for the
pools that actually do get a lot of data.

100 pools * replication 3 * pg_num 8 => 2,400 PG replicas, which is fine for
24 OSDs, but you would need more OSDs as some of those pools grow in
data/objects.

100 pools * replication 3 * pg_num 128 => 38,400 PG replicas, which requires
about 384 OSDs, or close to 40 OSDs per host in your setup. That might become
a limiting factor in itself - sticking so many OSDs in a single box.
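
The same rule of thumb as a small formula (assuming the ~100 PG replicas per
OSD target used above):

    # Rule of thumb: OSDs needed ~= pools * pg_num * replication / 100.
    def osds_needed(pools, pg_num, replication=3, pg_replicas_per_osd=100):
        pg_replicas = pools * pg_num * replication
        return pg_replicas, pg_replicas / pg_replicas_per_osd

    print(osds_needed(100, 8))     # (2400, 24.0)   -> fine with ~24 OSDs
    print(osds_needed(100, 128))   # (38400, 384.0) -> ~384 OSDs, ~40 per host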

-- 
May the most significant bit of your life be positive.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com