On Thu, Mar 16, 2017 at 11:12 AM, TYLin <wooer...@gmail.com> wrote:
> Hi all,
>
> We have a CephFS whose metadata pool and data pool share the same set of
> OSDs. According to the PG calculation:
>
> (100*num_osds) / num_replica

That guideline tells you roughly how many PGs you want in total -- when
you have multiple pools you need to share that total out between them.

As you suggest, it is probably sensible to use a smaller number of PGs
for the metadata pool than for the data pool.  I would be tempted to
try something like an 80:20 (data:metadata) ratio, though that's just
off the top of my head.
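
For what it's worth, here is a quick sketch of the arithmetic in
Python, assuming 3x replication (you haven't said what your replica
count is, so adjust as needed) and rounding each pool up to a power of
two as PG counts usually are:

    osds = 56
    replicas = 3                          # assumption -- use your real size
    total_pgs = 100 * osds / replicas     # ~1866 PGs across both pools

    data_share, meta_share = 0.8, 0.2     # the 80:20 split suggested above

    def round_pow2(n):
        # round up to the next power of two
        p = 1
        while p < n:
            p *= 2
        return p

    print(round_pow2(total_pgs * data_share))  # -> 2048 for the data pool
    print(round_pow2(total_pgs * meta_share))  # -> 512 for the metadata pool

Treat those numbers as a starting point: you can increase pg_num on a
pool later, but you cannot decrease it, so erring on the low side is
the safer direction.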

Remember that there is no special prioritisation for metadata traffic
over data traffic on the OSDs, so if you're mixing them together on
the same drives, then you may see MDS slowdown if your clients
saturate the system with data writes.  The alternative is to dedicate
some SSD OSDs for metadata.

John


>
> If we have 56 OSDs, we should set 5120 PGs on each pool to distribute the
> data evenly across all the OSDs. However, if we set both the metadata pool
> and the data pool to 5120 there will be a warning about "too many PGs". We
> currently set 2048 on both the metadata pool and the data pool, but it seems
> the data may not be evenly distributed across the OSDs due to insufficient
> PGs. Can we set a smaller number of PGs on the metadata pool and a larger
> number on the data pool? E.g. 1024 PGs for metadata and 4096 for the data
> pool. Is there a recommended ratio? Will this result in any performance
> issues?
>
> Thanks,
> Tim