[ceph-users] How to set size for CephFS

2019-11-27 Thread Alokkumar Mahajan
Hello, I am new to Ceph and currently I am working on setting up a CephFS and RBD environment. I have successfully set up a Ceph cluster with 4 OSDs (2 OSDs of 50GB and 2 OSDs of 300GB). But while setting up CephFS, the size which I see allocated for the CephFS data and metadata pools is
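
The sizes shown for the data and metadata pools are derived from the raw capacity CRUSH can use and the pools' replication factor, not from a per-filesystem setting. If a hard cap on the CephFS data pool is wanted, a pool quota is the usual tool. A minimal sketch, assuming the default pool names cephfs_data/cephfs_metadata (the real names come from "ceph fs ls"):

  ceph osd pool set-quota cephfs_data max_bytes 107374182400   # cap the data pool at 100 GiB
  ceph osd pool get-quota cephfs_data                          # confirm the quota took effect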

[ceph-users] Re: Dual network board setup info

2019-11-27 Thread Konstantin Shalygin
On 11/27/19 8:04 PM, Rodrigo Severo - Fábrica wrote: I have a CephFS instance and I am also planning on deploying an Object Storage interface. My servers have 2 network boards each. I would like to use the current local one to talk to Ceph clients (both CephFS and Object Storage) and use
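
The split being asked about is configured in ceph.conf: the public network carries monitor and client traffic (CephFS clients, RGW), while the cluster network carries OSD replication and recovery traffic. A minimal sketch with placeholder subnets:

  [global]
      public_network  = 192.168.1.0/24    # NIC facing clients (CephFS, RGW)
      cluster_network = 192.168.2.0/24    # second NIC, OSD-to-OSD replication and recovery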

[ceph-users] Re: v13.2.7 mimic released

2019-11-27 Thread Sang, Oliver
Thanks a lot for the information! What is the relationship of this mirror with the official Ceph website? Basically we want to use an official release and hesitate to use a 3rd-party built package. From: Martin Verges Sent: Wednesday, November 27, 2019 9:58 PM To: Sang, Oliver Cc: Sage Weil ;

[ceph-users] Re: mimic 13.2.6 too many broken connections

2019-11-27 Thread Vincent Godin
If it were a network issue, the counters should explode (as I said, with a log level of 5 on the messenger, we observed more than 80,000 lossy channels per minute), but nothing abnormal shows up in the counters (on switches and servers). On the switches: no drops, no CRC errors, no packet loss, only
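
For reference, the host-side counters mentioned here can be read with standard tools; a quick sketch, assuming the cluster-facing interface is eth1 (a placeholder):

  ip -s link show eth1                        # RX/TX errors and drops as seen by the kernel
  ethtool -S eth1 | grep -Ei 'err|drop|crc'   # NIC-level counters (names vary by driver)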

[ceph-users] mimic 13.2.6 too many broken connections

2019-11-27 Thread Vincent Godin
Since I sent the mail below a few days ago, we have found some clues. We observed a lot of lossy connections like: ceph-osd.9.log:2019-11-27 11:03:49.369 7f6bb77d0700 0 -- 192.168.4.181:6818/2281415 >> 192.168.4.41:0/1962809518 conn(0x563979a9f600 :6818 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0
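
The "log level of 5 on the messenger" referred to in this thread is the debug_ms option; a minimal sketch of raising it at runtime on all OSDs and turning it back down (level 5 is very chatty):

  ceph tell osd.* injectargs '--debug_ms 5/5'   # log messenger events such as lossy channel resets
  ceph tell osd.* injectargs '--debug_ms 0/0'   # revert once enough data has been collected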

[ceph-users] Re: v13.2.7 mimic released

2019-11-27 Thread Martin Verges
Hello, as far as I know, Mimic and Nautilus are still not available on Debian. Unfortunately we do not provide Mimic on our mirror for Debian 10 Buster. But if you want to migrate to Nautilus, feel free to use our public mirrors described at https://croit.io/2019/07/07/2019-07-07-debian-mirror. --
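
A generic sketch of what pointing apt at such a mirror on Buster looks like (MIRROR_URL and KEY_URL are placeholders; the real values are on the croit page linked above):

  wget -q -O- KEY_URL | apt-key add -                                        # import the mirror's signing key
  echo "deb MIRROR_URL buster main" > /etc/apt/sources.list.d/ceph.list
  apt-get update && apt-get install -y ceph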

[ceph-users] Re: EC PGs stuck activating, 2^31-1 as OSD ID, automatic recovery not kicking in

2019-11-27 Thread Aleksey Gutikov
On 22.11.19 23:45, Paul Emmerich wrote: tools), it means no mapping could be found; check your crush map and crush rule. The simplest way to get into this state is to change OSDs' reweight on a small cluster where the number of OSDs equals the EC n+k. I do not know exactly, but it seems that the straw2 crush
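
For anyone hitting this, the 2^31-1 in the subject (2147483647) is how a missing mapping shows up in the up/acting set, and the rule can be tested offline against the current map; a minimal sketch, assuming rule id 1 and k+m = 9 as placeholders:

  ceph osd getcrushmap -o crushmap.bin
  crushtool -i crushmap.bin --test --rule 1 --num-rep 9 --show-bad-mappings   # prints inputs that get fewer than 9 OSDs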

[ceph-users] Re: v13.2.7 mimic released

2019-11-27 Thread Sang, Oliver
Can this version be installed on Debian 10? If not, is there a plan for Mimic to support Debian 10? -Original Message- From: Sage Weil Sent: Monday, November 25, 2019 10:50 PM To: ceph-annou...@ceph.io; ceph-users@ceph.io; d...@ceph.io Subject: [ceph-users] v13.2.7 mimic released This

[ceph-users] Dual network board setup info

2019-11-27 Thread Rodrigo Severo - Fábrica
Hi, I have a CephFS instance and I am also planning on deploying an Object Storage interface. My servers have 2 network boards each. I would like to use the current local one to talk to Ceph clients (both CephFS and Object Storage) and use the second one for all Ceph processes to talk to one
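
Once public_network and cluster_network are set (see the ceph.conf sketch under the reply above), a quick way to confirm which addresses the daemons actually use:

  ceph osd dump | grep '^osd\.'   # each osd line shows its public and cluster addresses
  ceph mon dump                   # monitors bind to the public network only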

[ceph-users] Re: why do an osd's heartbeat partners come from another root tree?

2019-11-27 Thread zijian1012
Ok, given the bad format, I put it here https://onlinenotepad.us/T8Kh9oZVNd

[ceph-users] why do an osd's heartbeat partners come from another root tree?

2019-11-27 Thread opengers
Hello: According to my understanding, an OSD's heartbeat partners only come from those OSDs that serve the same PGs. See below (# ceph osd tree): osd.10 and osd.0-6 cannot serve the same PG, because osd.10 and osd.0-6 are from different root trees, and PGs in my cluster don't map across root trees (#
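
The two facts being compared here can be read straight from the cluster; a minimal sketch (osd.10 is the ID from the post):

  ceph osd tree               # shows which root each OSD sits under
  ceph pg ls-by-osd osd.10    # lists the PGs currently mapped to osd.10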

[ceph-users] Re: EC pool used space high

2019-11-27 Thread Erdem Agaoglu
Hi again, Even with this, our 6+3 EC pool with the default bluestore_min_alloc_size of 64KiB, filled with 4MiB RBD objects, should not take 1.67x space. It should be around 1.55x. There is still a 12% unaccounted overhead. Could there be something else too? Best, On Tue, Nov 26, 2019 at 8:08 PM Serkan
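
For reference, the 1.55x expectation follows from the allocation rounding: a 4 MiB (4096 KiB) object split across k=6 gives 4096/6 ≈ 683 KiB per chunk, which rounds up to 704 KiB (11 × 64 KiB) at a 64 KiB min_alloc_size; 9 chunks (6 data + 3 coding) then occupy 9 × 704 KiB = 6336 KiB, and 6336/4096 ≈ 1.55.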