On Fri, Oct 7, 2016 at 7:15 AM, Hauke Homburg <hhomb...@w3-creative.de> wrote:
> Hello,
>
> I have a Ceph cluster with 5 servers and 40 OSDs. Currently the
> cluster has 85GB of free space, and the rsync source directory holds a
> lot of pictures, with a data volume of 40GB.
>
> The servers run CentOS 7 with the latest stable Ceph. The client is a
> Debian 8 machine with a 4.x kernel, and the cluster is mounted via
> CephFS.
>
> When I sync the directory I often see the message "rsync: mkstemp ...
> No space left on device (28)". At that point I can still touch a file
> in another directory on the cluster. The directory contains ~630,000
> files. Is that too many files?

Yes, in recent releases CephFS limits you to 100k dentries in a single
directory fragment. This *includes* the "stray" directories that files
get moved into when you unlink them, and is intended to prevent issues
with very large folders. It will stop being a problem once we enable
automatic fragmentation (soon, hopefully).

You can raise that limit via the "mds bal fragment size max" config
option, but you're probably better off figuring out whether you've got
an over-large directory or are deleting files faster than the cluster
can keep up. There was a thread about this very recently, and John
included some details about tuning if you check the archives. :)
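For reference, raising the limit would look something like the sketch
below -- a ceph.conf fragment on the MDS nodes. The value 500000 is
purely illustrative, not a recommendation; larger fragments mean larger
MDS journal writes and memory use, which is why the default is 100000.

```ini
# ceph.conf on the MDS hosts -- example value only (default is 100000)
[mds]
mds bal fragment size max = 500000
```

The MDS needs a restart (or a runtime injectargs) to pick this up. To
check whether a directory is actually over the limit, something like
`find /mnt/cephfs/pictures -maxdepth 1 | wc -l` on the client (path
hypothetical) will count the entries in that single directory.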
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
