[ceph-users] Re: Cephfs + docker

2019-09-25 Thread Patrick Hein
Hi, I am also using CephFS with Docker for the same reason you said. Also Ubuntu 18.04. I used the kernel client before Nautilus, but now FUSE, because the kernel client is too old (it might work now with the newest HWE kernel). I don't have any problems at all, neither in Portainer nor any other
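
For reference, a minimal sketch of a ceph-fuse based setup like the one described above; the client name, monitor address, mount point, and image are assumptions, not taken from the thread:

    # Mount CephFS on the Docker host with the FUSE client
    ceph-fuse --id docker -m 192.168.1.10:6789 /mnt/cephfs

    # Bind-mount a CephFS subdirectory into a container
    docker run -d -v /mnt/cephfs/appdata:/data some-image

This assumes a keyring for client.docker is present under /etc/ceph/.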

[ceph-users] Re: how many monitors to deploy in a 1000+ OSD cluster

2019-09-25 Thread zhanrzh...@teamsun.com.cn
Thanks for your reply. We don't maintain it frequently. My confusion is whether more monitors give clients (OSDs, RBD clients, ...) an advantage when fetching the cluster map. Do all clients communicate with one monitor of the cluster at the same time? If not, how does a client decide which monitor to communicate with
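
As far as I know, each Ceph client picks one monitor from the mon_host list (or the monmap) and keeps a single session with it, failing over to another monitor only if that one stops responding, so extra monitors mainly buy failure tolerance rather than faster cluster-map distribution; 3 or 5 monitors are the usual recommendation even for large clusters. A sketch of the client-side configuration, with placeholder addresses:

    [global]
    mon_host = 10.0.0.1, 10.0.0.2, 10.0.0.3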

[ceph-users] Re: CephFS deleted files' space not reclaimed

2019-09-25 Thread Gregory Farnum
On Mon, Sep 23, 2019 at 6:50 AM Josh Haft wrote: > > Hi, > > I've been migrating data from one EC pool to another EC pool: two > directories are mounted with ceph.dir.layout.pool file attribute set > appropriately, then rsync from old to new and finally, delete the old > files. I'm using the
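
A rough sketch of the migration pattern being described, with made-up pool and directory names:

    # Point the new directory at the new EC pool via the file layout
    setfattr -n ceph.dir.layout.pool -v cephfs_ec_new /mnt/cephfs/new

    # Copy the data, then remove the originals
    rsync -a /mnt/cephfs/old/ /mnt/cephfs/new/
    rm -rf /mnt/cephfs/old

Note that CephFS frees space asynchronously: deleted files are trimmed by the MDS purge queue, so pool usage can lag behind the deletions for a while.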

[ceph-users] Announcing Ceph Buenos Aires 2019 on Oct 16th at Museo de Informatica

2019-09-25 Thread Victoria Martinez de la Cruz
Hi all, I'm happy to announce that next Oct 16th we will have the Ceph Day Argentina in Buenos Aires. The event will be held in the Museo de Informatica de Argentina, so apart from hearing about the latest features from core developers, real use cases from our users and usage experiences from customers

[ceph-users] Re: Wrong %USED and MAX AVAIL stats for pool

2019-09-25 Thread Wido den Hollander
On 9/25/19 3:22 PM, nalexand...@innologica.com wrote: > Hi everyone, > > We are running Nautilus 14.2.2 with 6 nodes and a total of 44 OSDs, all are > 2TB spinning disks. > # ceph osd count-metadata osd_objectstore > "bluestore": 44 > # ceph osd pool get one size > size: 3 > # ceph df >

[ceph-users] Ceph NIC partitioning (NPAR)

2019-09-25 Thread Adrien Georget
Hi, I need your advice about the following setup. Currently, we have a Ceph Nautilus cluster used by OpenStack Cinder with a single 10Gbps NIC on the OSD hosts. We will upgrade the cluster by adding 7 new hosts dedicated to Nova/Glance, and we would like to add a cluster network to isolate
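
A minimal ceph.conf sketch of the public/cluster network split being considered; the subnets are placeholders:

    [global]
    public_network  = 10.10.0.0/24
    cluster_network = 10.20.0.0/24

With this in place the OSDs use the cluster network for replication, recovery, and backfill traffic, while clients and monitors stay on the public network.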

[ceph-users] Wrong %USED and MAX AVAIL stats for pool

2019-09-25 Thread nalexandrov
Hi everyone, We are running Nautilus 14.2.2 with 6 nodes and a total of 44 OSDs, all are 2TB spinning disks. # ceph osd count-metadata osd_objectstore "bluestore": 44 # ceph osd pool get one size size: 3 # ceph df RAW STORAGE: CLASS SIZE AVAIL USED RAW USED %RAW
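
Roughly speaking, MAX AVAIL for a pool is derived from the fullest OSD reachable by the pool's CRUSH rule, divided by the replication factor, and %USED is computed against USED + MAX AVAIL, so a single unbalanced OSD can make both figures look wrong. Two commands that help check this (standard Ceph CLI, not taken from the thread):

    # Per-pool stats, including MAX AVAIL and %USED
    ceph df detail

    # Per-OSD utilisation; one very full OSD pulls MAX AVAIL down
    ceph osd df tree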

[ceph-users] Re: OSD rebalancing issue - should drives be distributed equally over all nodes

2019-09-25 Thread Thomas
Hi Reed, I'm not sure what is meant by the grouping / chassis and "set your failure domain to chassis" respectively. This is my current crush map: # begin crush map tunable choose_local_tries 0 tunable choose_local_fallback_tries 0 tunable choose_total_tries 50 tunable
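
As a rough illustration of the chassis grouping Reed is referring to, with made-up bucket and host names:

    # Create a chassis bucket and move hosts under it
    ceph osd crush add-bucket chassis1 chassis
    ceph osd crush move chassis1 root=default
    ceph osd crush move node01 chassis=chassis1
    ceph osd crush move node02 chassis=chassis1

    # A replicated rule whose failure domain is the chassis
    ceph osd crush rule create-replicated rep_chassis default chassis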

[ceph-users] Re: verify_upmap number of buckets 5 exceeds desired 4

2019-09-25 Thread Eric Dold
After updating the CRUSH rule from rule cephfs_ec { id 1 type erasure min_size 8 max_size 8 step set_chooseleaf_tries 5 step set_choose_tries 100 step take default step choose indep 4 type host step choose indep 2 type osd step emit } to rule cephfs_ec {