Re: [ceph-users] inline_data (was: CephFS and many small files)

2019-04-02 Thread Clausen, Jörn
Hi! On 29.03.2019 at 23:56, Paul Emmerich wrote: There's also some metadata overhead etc. You might want to consider enabling inline data in CephFS to handle small files in a storage-efficient way (note that this feature is officially marked as experimental, though).
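
For reference, inline data is toggled per filesystem with the fs set command; a minimal sketch, assuming the filesystem is named "cephfs" (newer releases may additionally ask for a confirmation flag since the feature is experimental):

   ceph fs set cephfs inline_data true   # keep small file contents inline in the inode (metadata pool)
   ceph fs get cephfs                    # dump the filesystem map and check the inline_data flag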

Re: [ceph-users] CephFS and many small files

2019-04-01 Thread Clausen, Jörn
Hi Paul! Thanks for your answer. Yep, bluestore_min_alloc_size and your calculation sound very reasonable to me :) On 29.03.2019 at 23:56, Paul Emmerich wrote: Are you running on HDDs? The minimum allocation size is 64 KB by default here. You can control that via the parameter
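
For anyone searching the archive later: the parameter in question can be set in ceph.conf before the OSDs are (re)created; a minimal sketch, assuming HDD-backed OSDs and a 4 KiB target (the value is only an example, and the setting only applies to newly created OSDs, so existing ones would have to be redeployed):

   # ceph.conf
   [osd]
   bluestore_min_alloc_size_hdd = 4096

   # verify on the OSD's host (osd.0 is just an example)
   ceph daemon osd.0 config get bluestore_min_alloc_size_hdd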

[ceph-users] CephFS and many small files

2019-03-29 Thread Clausen, Jörn
Hi! In my ongoing quest to wrap my head around Ceph, I created a CephFS (data and metadata pool with replicated size 3, 128 pgs each). When I mount it on my test client, I see a usable space of ~500 GB, which I guess is okay for the raw capacity of 1.6 TiB I have in my OSDs. I run bonnie
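
For context, that figure is roughly what 3-way replication predicts: 1.6 TiB raw / 3 replicas ≈ 0.53 TiB ≈ 546 GiB, before BlueStore overhead and full-ratio reservations, which is in the same ballpark as the ~500 GB the client reports.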

Re: [ceph-users] 1/3 mon not working after upgrade to Nautilus

2019-03-25 Thread Clausen, Jörn
Hi! On 25.03.2019 at 15:07, Brian Topping wrote: Did you check port access from other nodes? My guess is a forgotten firewall re-emerged on that node after reboot. I am pretty sure it's not the firewall. To be extra sure, I switched it off for testing. I found this in the mon logs: On
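
For others debugging the same symptom: with default ports the MON listens on 6789 (msgr v1) and, from Nautilus on, 3300 (msgr v2). A quick sketch of checks, assuming firewalld on the MON host:

   ss -tlnp | grep ceph-mon     # is the mon actually listening on 3300/6789?
   ceph mon dump                # does the monmap show the expected v1/v2 addresses?
   firewall-cmd --list-all      # are those ports/services actually open?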

Re: [ceph-users] 1/3 mon not working after upgrade to Nautilus

2019-03-25 Thread Clausen, Jörn
Hi again! "...one of my three MONs (the then active one) fell out of the..." The "active one" is of course nonsense; I confused it with the MGRs, which are running okay, btw, on the same three hosts. I reverted the MON back to a snapshot (vSphere) taken before the upgrade, repeated the upgrade, and

[ceph-users] 1/3 mon not working after upgrade to Nautilus

2019-03-25 Thread Clausen, Jörn
Hi! I just tried upgrading my test cluster from Mimic (13.2.5) to Nautilus (14.2.0), and everything looked fine. Until I activated msgr2. At that moment, one of my three MONs (the then active one) fell out of the quorum and refuses to join back. The two other MONs seem to work fine.
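
For reference, msgr2 is switched on cluster-wide with a single command, and the result can be checked in the monmap; a minimal sketch:

   ceph mon enable-msgr2   # mons start advertising v2 (port 3300) alongside v1 (port 6789)
   ceph mon dump           # each mon should now list both a v2:...:3300 and a v1:...:6789 address
   ceph -s                 # quorum status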

[ceph-users] min_size vs. K in erasure coded pools

2019-02-20 Thread Clausen, Jörn
Hi! While trying to understand erasure coded pools, I would have expected that the "min_size" of a pool is equal to the "K" parameter. But it turns out that it is always K+1. Isn't the description of erasure coding misleading then? In a K+M setup, I would expect to be good (in the sense of "no
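
A small sketch that reproduces the observation (profile and pool names are made up): with k=4, m=2 the pool comes up with min_size = 5, i.e. K+1.

   ceph osd erasure-code-profile set myprofile k=4 m=2
   ceph osd pool create ecpool 128 128 erasure myprofile
   ceph osd pool get ecpool min_size      # -> min_size: 5

The usual reasoning is that with only K shards left the data can in principle still be reconstructed, but accepting writes in that state would leave the newly written objects with no redundancy at all, which is why I/O is blocked until at least K+1 shards are available.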

Re: [ceph-users] systemd/rbdmap.service

2019-02-13 Thread Clausen, Jörn
Thanks, I wasn't aware of that mount option. Whether this is more intuitive to me (i.e. the lesser violation of the Principle of Least Surprise from my point of view) is a whole different matter... On 13.02.2019 at 11:07, Marc Roos wrote: Maybe _netdev? /dev/rbd/rbd/influxdb /var/lib/influxdb
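
Spelled out, the suggestion amounts to an fstab line roughly like this (the filesystem type and the noauto option are assumptions on my part; _netdev is the relevant bit):

   # /etc/fstab
   /dev/rbd/rbd/influxdb  /var/lib/influxdb  xfs  noauto,_netdev  0 0

_netdev tells systemd the mount depends on the network (and hence on rbdmap having mapped the device) rather than on a local disk.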

[ceph-users] systemd/rbdmap.service

2019-02-13 Thread Clausen, Jörn
Hi! I am new to Ceph, Linux, systemd and all that stuff. I have set up a test/toy Ceph installation using ceph-ansible, and now try to understand RBD. My RBD client has a correct /etc/ceph/rbdmap, i.e. /dev/rbd0 is created automatically during system boot. But adding an entry to /etc/fstab
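
For readers landing here from a search: the two pieces involved are /etc/ceph/rbdmap (read by rbdmap.service to map images at boot) and the fstab entry for the resulting device. A minimal sketch of the rbdmap format, with pool/image name, client id and keyring path as placeholders:

   # /etc/ceph/rbdmap
   rbd/influxdb    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

The fstab side (and why it needs _netdev) is what the follow-up above is about.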