On 10.10.2016 at 10:05, Hauke Homburg wrote:
> On 07.10.2016 at 17:37, Gregory Farnum wrote:
>> On Fri, Oct 7, 2016 at 7:15 AM, Hauke Homburg <hhomb...@w3-creative.de> 
>> wrote:
>>> Hello,
>>>
>>> I have a Ceph cluster with 5 servers and 40 OSDs. Currently the cluster
>>> has 85GB of free space, and the rsync directory holds lots of pictures,
>>> with a data volume of 40GB.
>>>
>>> The servers run CentOS 7 with the latest stable Ceph. The client is Debian
>>> 8 with a 4.x kernel, and the cluster is mounted via CephFS.
>>>
>>> When I sync the directory I often see the message "rsync mkstemp: no space
>>> left on device (28)". At that point I can still touch a file in another
>>> directory in the cluster. The directory holds ~630000 files. Is that too
>>> many files?
>> Yes, in recent releases CephFS limits you to 100k dentries in a single
>> directory fragment. This *includes* the "stray" directories that files
>> get moved into when you unlink them, and is intended to prevent issues
>> with very large folders. It will stop being a problem once we enable
>> automatic fragmenting (soon, hopefully).
>> You can adjust that with the "mds bal fragment size max"
>> config, but you're probably better off figuring out whether you've got
>> an over-large directory or if you're deleting files faster than the
>> cluster can keep up. There was a thread about this very recently and
>> John included some details about tuning if you check the archives. :)
>> -Greg
> Hello,
>
> Thanks for the answer.
> I enabled the mds bal frag = true option on the cluster.
>
> Today I read that I have to enable this option on the client, too. With
> a FUSE mount I can do it with the ceph binary, but I use the kernel module.
> How can I do it there?
>
> Regards
>
> Hauke
>
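For reference, a minimal sketch of where the fragmentation settings discussed
above would live, assuming an MDS daemon named mds.a (the daemon name and the
200000 value are placeholders):

    # ceph.conf on the MDS hosts
    [mds]
        mds bal frag = true
        mds bal fragment size max = 200000   # default limit is 100k dentries per fragment

    # or injected into a running MDS at runtime:
    ceph tell mds.a injectargs '--mds_bal_fragment_size_max 200000'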

Hello,

After some discussion in our team we deleted the CephFS and
switched to RBD with ext4.

Now we want to realize the following setup:

1 Ceph cluster, Jewel 10.0.2.3
5 servers with Ceph 10.0.2.3; the clients have rados installed. We pass all 5
mons of our cluster into every rbd map call, to have failover.
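A sketch of such a map call, with placeholder monitor addresses, pool and
image names:

    # pass all monitors explicitly; addresses, pool and image are placeholders
    rbd map rbd/backup --id admin \
        -m 10.0.0.1,10.0.0.2,10.0.0.3,10.0.0.4,10.0.0.5
    mkfs.ext4 /dev/rbd0        # only when the image is first created
    mount /dev/rbd0 /mnt/backup

The same monitor list can also be kept in mon_host in the client's ceph.conf
instead of being repeated on every call.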

Currently we have the problem that we can store data in the cluster
with rsync, but when rsync deletes files, ext4 gets filesystem
errors.

I understood Ceph with RBD to mean that I can use Ceph as a cluster
filesystem like OCFS2, so I don't understand why I get filesystem errors.

I read in some postings here that Ceph needs filesystem locking like DLM. Is
this true in the current version, Jewel? Doesn't libceph do this locking?
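For context, a sketch of the advisory locking that rbd itself offers (the
image name and lock id are placeholders); these locks only coordinate clients
that check them, they are not enforced against other writers:

    rbd lock add rbd/backup backup-host-1          # take an advisory lock
    rbd lock list rbd/backup                       # shows the holder, e.g. client.4123
    rbd lock remove rbd/backup backup-host-1 client.4123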

Thanks for the help

Hauke

-- 
www.w3-creative.de

www.westchat.de


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
