Thanks Greg, makes sense.

Our Ceph cluster currently has 16 OSDs, each with an 8 TB disk.

Does 32 PGs at 3x replication sound like a reasonable starting point for the metadata pool?
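
For what it's worth, here is the back-of-the-envelope arithmetic I used: just the commonly cited ~100 PGs per OSD rule of thumb, rounded to the nearest power of two. The share handed to the metadata pool is my own guess, not anything official, so treat this as a sketch:

    # Rough PG-count sketch: the usual ~100 PGs per OSD guideline,
    # rounded to the nearest power of two. The metadata pool's share
    # of the total budget is an assumption on my part, not an official rule.

    def nearest_power_of_two(n: int) -> int:
        """Round n to the nearest power of two (ties round up)."""
        lower = 1
        while lower * 2 <= n:
            lower *= 2
        upper = lower * 2
        return lower if (n - lower) < (upper - n) else upper

    osds = 16
    replication = 3
    target_pgs_per_osd = 100

    # Total PG budget across all pools on the cluster.
    total_pgs = nearest_power_of_two(osds * target_pgs_per_osd // replication)

    # Hand the (much smaller) metadata pool a modest slice of that budget.
    metadata_share = 1 / 16   # assumed share, tune to taste
    metadata_pgs = nearest_power_of_two(int(total_pgs * metadata_share))

    print(total_pgs, metadata_pgs)   # 512 32

That lands at roughly 512 PGs total for the cluster and 32 for the metadata pool, which is where the 32 above came from.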

Thanks,

-- Dan

> On Nov 8, 2016, at 14:02, Gregory Farnum <gfar...@redhat.com> wrote:
> 
> On Tue, Nov 8, 2016 at 9:37 AM, Dan Jakubiec <dan.jakub...@gmail.com> wrote:
>> Hello,
>> 
>> Picking the number of PGs for the CephFS data pool seems straightforward, 
>> but how does one do this for the metadata pool?
>> 
>> Any rules of thumb or recommendations?
> 
> I don't think we have any good ones yet. You've got to worry about the
> log and about the backing directory objects; depending on how your map
> looks, I'd just try to get enough PGs for a decent IO distribution across
> the disks you're actually using. Given the much lower amount of
> absolute data, you're less worried about balancing the data precisely
> evenly and more concerned about not accidentally driving all IO to one
> of 7 disks because you have 8 PGs, and all your supposedly-parallel
> ops are contending. ;)
> -Greg
> 
>> 
>> Thanks,
>> 
>> -- Dan Jakubiec

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
