[ceph-users] Re: [Suspicious newsletter] In theory - would 'cephfs root' out-perform 'rbd root'?

2021-06-11 Thread Szabo, Istvan (Agoda)
Not really clear to me, to be honest, how many cephfs are needed. Would it be worth creating multiple, or what is the use case for creating multiple? In the examples and in how people are using it, it seems like only 1 cephfs + 1 metadata pool on nvme, not really multiple cephfs. Doc just relates that if you wa

[ceph-users] Re: CephFS design

2021-06-11 Thread Szabo, Istvan (Agoda)
Hi Peter, Yeah, went through all, also set the mds_memory_limit. Collocated with mgr/mon so created 3 mds. Have enough cpu, it is on ssd. So yeah, even went inside all the cephfs menu points to take the information. Istvan Szabo Senior Infrastructure Engineer
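[For readers searching this thread later: in current releases the option is spelled mds_cache_memory_limit. A hedged sketch of setting and verifying it; the 8 GiB figure is a placeholder, not a recommendation:]

```shell
# Set the MDS cache memory target via the centralized config store
# (option name per the Ceph docs: mds_cache_memory_limit).
# 8589934592 bytes = 8 GiB, an illustrative placeholder value.
ceph config set mds mds_cache_memory_limit 8589934592
# Confirm the value took effect:
ceph config get mds mds_cache_memory_limit
```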

[ceph-users] Re: CephFS design

2021-06-11 Thread Szabo, Istvan (Agoda)
A couple of teams want to use cephfs with k8s, so the main use case would be k8s users. Istvan Szabo Senior Infrastructure Engineer --- Agoda Services Co., Ltd. e: istvan.sz...@agoda.com ---

[ceph-users] Re: Error on Ceph Dashboard

2021-06-11 Thread Ernesto Puerta
Thanks for the info, Robert. Glad to hear it's working now. Regarding the Ceph Tracker website, I just checked and the sign-up page seems to be working fine (https://tracker.ceph.com/account/register) in case you still want to get an account there. Kind Regards, Ernesto On Fri, Jun 11, 2021 at

[ceph-users] In theory - would 'cephfs root' out-perform 'rbd root'?

2021-06-11 Thread Harry G. Coin
On any properly sized ceph setup (for other than database end use), theoretically shouldn't a ceph-fs root out-perform any fs atop a rados block device root? Seems to me like it ought to: moving only the 'interesting' bits of files over the so-called 'public' network should take fewer, smal

[ceph-users] Re: driver name rbd.csi.ceph.com not found in the list of registered CSI drivers ?

2021-06-11 Thread Ralph Soika
Hi, I found the reason for the problem. I did not assign the DaemonSet to the correct namespace. I am running all components in the namespace 'ceph-system'. After I fixed my DaemonSet configuration the plugin pods are running on my worker nodes and the error message is gone. --- kind: Daemo
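[For anyone hitting the same error, a couple of checks along these lines can confirm the fix; the namespace matches the thread, but the pod label is an illustrative assumption:]

```shell
# List registered CSI drivers; rbd.csi.ceph.com should appear once the
# plugin pods are healthy and have registered with the kubelet.
kubectl get csidriver
# Confirm the rbdplugin DaemonSet pods landed in the intended namespace
# ('ceph-system' here, per the thread; the label selector is an assumption).
kubectl -n ceph-system get pods -l app=csi-rbdplugin -o wide
```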

[ceph-users] Re: CephFS design

2021-06-11 Thread Anthony D'Atri
>> Can you suggest me what is a good cephfs design? One that uses copious complements of my employer’s components, naturally ;) >> I've never used it, only >> rgw and rbd we have, but want to give a try. However in the mail list I saw >> a huge amount of issues with cephfs Something to remembe

[ceph-users] Re: CephFS design

2021-06-11 Thread Peter Sarossy
hey Istvan, The Hardware Recommendations page actually has a ton of info on the questions you are asking, did you go through that one yet? Without massive overkill, I don't think there's a "bulletproof" design, as the actual I/O use

[ceph-users] Re: Ceph Month June Schedule Now Available

2021-06-11 Thread Mike Perez
Hi everyone, In ten minutes, join us for the next Ceph Month presentation on Intel QLC SSD: Cost-Effective Ceph Deployments by Anthony D'Atri https://bluejeans.com/908675367 https://pad.ceph.com/p/ceph-month-june-2021 On Fri, Jun 11, 2021 at 5:50 AM Mike Perez wrote: > > Hi everyone, > > In ten

[ceph-users] CephFS design

2021-06-11 Thread Szabo, Istvan (Agoda)
Hi, Can you suggest me what is a good cephfs design? I've never used it, only rgw and rbd we have, but want to give it a try. However in the mail list I saw a huge amount of issues with cephfs so would like to go with some, let's say, bulletproof best practices. Like separate the mds from mon and m

[ceph-users] Re: Ceph Month June Schedule Now Available

2021-06-11 Thread Mike Perez
Hi everyone, In ten minutes, join us for the next Ceph Month presentation on Performance Optimization for All Flash-based on aarch64 by Chunsong Feng https://pad.ceph.com/p/ceph-month-june-2021 https://bluejeans.com/908675367 On Thu, Jun 10, 2021 at 6:00 AM Mike Perez wrote: > > Hi everyone, >

[ceph-users] driver name rbd.csi.ceph.com not found in the list of registered CSI drivers ?

2021-06-11 Thread Ralph Soika
Hi, I am trying to connect my new ceph cluster (octopus) with my kubernetes system. Therefore I followed the setup guide from the official documentation: https://docs.ceph.com/en/octopus/rbd/rbd-kubernetes/ The csi-rbdplugin-provisioner is running successfully on all my kubernetes worker nodes (as fa

[ceph-users] Re: slow ops at restarting OSDs (octopus)

2021-06-11 Thread Manuel Lausch
Okay, I poked around a bit more and found this document: https://docs.ceph.com/en/latest/dev/osd_internals/stale_read/ I don't understand exactly what it is all about, how it works, or what the intention behind it is. But there is one config option mentioned: "osd_pool_default_read_lease_ratio"
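[For context, the option mentioned works roughly as follows, per the stale_read design doc: the read lease a primary OSD grants is a fraction of osd_heartbeat_grace. The defaults sketched below (grace 20s, ratio 0.8) are assumptions worth verifying against your release:]

```shell
# Sketch: compute the effective read-lease window from the assumed defaults.
grace=20   # osd_heartbeat_grace, seconds (assumed default)
ratio=0.8  # osd_pool_default_read_lease_ratio (assumed default)
awk -v g="$grace" -v r="$ratio" 'BEGIN { printf "read lease: %.1fs\n", g * r }'
# To shorten the window seen during OSD restarts, one might lower the ratio:
#   ceph config set osd osd_pool_default_read_lease_ratio 0.4
```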

[ceph-users] Re: suggestion for Ceph client network config

2021-06-11 Thread Ansgar Jazdzewski
Hi, I would do an extra network / VLAN mostly for security reasons; also take a look at CTDB for samba failover. Have a nice weekend, Ansgar On Fri., 11 June 2021 at 08:21, Götz Reinicke wrote: > > Hi all > > We get a new samba smb fileserver that mounts our cephfs for exporting some > sha
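[A minimal sketch of what the gateway-side mount could look like with a dedicated client VLAN; the monitor address, client name, and secret path are all placeholders, and the legacy kernel-client syntax shown here is an assumption for an octopus-era cluster:]

```shell
# Mount cephfs on the samba gateway, reaching the mon over the client VLAN.
# 192.168.10.1 is a placeholder mon address on that VLAN; 'samba' is a
# hypothetical CephX client created for the gateway.
mount -t ceph 192.168.10.1:6789:/ /mnt/cephfs \
  -o name=samba,secretfile=/etc/ceph/client.samba.secret
```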

[ceph-users] Re: CephFS design

2021-06-11 Thread Ansgar Jazdzewski
Hi, first of all, check the workload you'd like to have on the filesystem; if you plan to migrate an old one, do some proper performance testing of the old storage. The io500 can give some ideas https://www.vi4io.org/io500/start but it depends on the use case of the filesystem. cheers, Ansgar Am Fr.

[ceph-users] Re: slow ops at restarting OSDs (octopus)

2021-06-11 Thread Peter Lieven
On 11.06.21 at 11:48, Dan van der Ster wrote: > On Fri, Jun 11, 2021 at 11:08 AM Peter Lieven wrote: >> On 10.06.21 at 17:45, Manuel Lausch wrote: >>> Hi Peter, >>> >>> your suggestion pointed me to the right spot. >>> I didn't know about the feature, that ceph will read from replica >>> PGs. >>

[ceph-users] Re: slow ops at restarting OSDs (octopus)

2021-06-11 Thread Dan van der Ster
On Fri, Jun 11, 2021 at 11:08 AM Peter Lieven wrote: > > On 10.06.21 at 17:45, Manuel Lausch wrote: > > Hi Peter, > > > > your suggestion pointed me to the right spot. > > I didn't know about the feature, that ceph will read from replica > > PGs. > > > > So on. I found two functions in the osd/Pr

[ceph-users] Re: lib remoto in ubuntu

2021-06-11 Thread Sebastian Wagner
Hi Alfredo, if you don't use cephadm, then I'd recommend not installing the ceph-mgr-cephadm package. If you use cephadm with an ubuntu based container, you'll have to make sure that the MGR properly finds the remoto package within the container. Thanks, Sebastian Am 11.06.21 um 05:24 sch
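[One quick way to check the second point; this assumes a cephadm deployment where `cephadm shell` can open a session in the Ceph container:]

```shell
# Confirm the python environment inside the container can import remoto,
# which the MGR's cephadm module depends on.
cephadm shell -- python3 -c "import remoto; print('remoto OK')"
```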

[ceph-users] Re: Ceph Ansible fails on check if monitor initial keyring already exists

2021-06-11 Thread Guillaume Abrioux
Hi Jared, Could you open a GitHub issue for this? Thanks! On Fri, 4 Jun 2021 at 00:09, Jared Jacob wrote: > I am running the Ceph ansible script to install ceph version Stable-6.0 > (Pacific). > > When running the sample yml file that was supplied by the github repo it > runs fine up until t

[ceph-users] Re: slow ops at restarting OSDs (octopus)

2021-06-11 Thread Peter Lieven
On 10.06.21 at 17:45, Manuel Lausch wrote: > Hi Peter, > > your suggestion pointed me to the right spot. > I didn't know about the feature, that ceph will read from replica > PGs. > > So on. I found two functions in the osd/PrimaryLogPG.cc: > "check_laggy" and "check_laggy_requeue". On both is fi