[ceph-users] Re: Fstab entry for mounting specific ceph fs?

2022-09-23 Thread Sagittarius-A Black Hole
Ah, I found it: mds_namespace IS, in this case, the name of the filesystem. Why not call it "filesystem name" instead of "namespace", a term that, as far as I could find, is not defined in Ceph? Thanks, Daniel On Fri, 23 Sept 2022 at 17:09, Sagittarius-A Black Hole wrote: > > Hi, > > thanks for the
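Putting the thread's conclusion together, a kernel-client fstab entry that pins a specific filesystem with mds_namespace could look like the sketch below; the monitor addresses, user name and secret file come from the thread, while the filesystem name "myfs" and the trailing mount options are placeholders:

  192.168.1.11,192.168.1.12,192.168.1.13:/ /media/ceph_fs ceph name=james_user,secretfile=/etc/ceph/secret.key,mds_namespace=myfs,_netdev,noatime 0 0

Newer mount.ceph versions also document fs=<name> as the preferred spelling of mds_namespace=<name>.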

[ceph-users] Re: Fstab entry for mounting specific ceph fs?

2022-09-23 Thread Sagittarius-A Black Hole
Hi, thanks for the suggestion of the namespace. I'm trying to find any documentation on it: how do you set a namespace for a filesystem / pool? Thanks, Daniel On Fri, 23 Sept 2022 at 16:01, Wesley Dillingham wrote: > > Try adding the mds_namespace option like so: > >

[ceph-users] Re: Fstab entry for mounting specific ceph fs?

2022-09-23 Thread Sagittarius-A Black Hole
This is what I tried, following the link: {name}@.{fs_name}=/ {mount}/{mountpoint} ceph [mon_addr={ipaddress},secret=secretkey|secretfile=/path/to/secretfile. It does not work; it reports: "source mount path was not specified, unable to parse mount source: -22". Why are mount and mountpoint specified
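For comparison, a filled-in fstab line using that newer device-string syntax might look like the sketch below; every value is a placeholder (in particular the cluster fsid, which is not given anywhere in this thread), and note that mon_addr separates monitors with '/' because ',' already delimits mount options:

  james_user@<cluster-fsid>.myfs=/ /media/ceph_fs ceph mon_addr=192.168.1.11/192.168.1.12/192.168.1.13,secretfile=/etc/ceph/secret.key,_netdev 0 0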

[ceph-users] Re: Fstab entry for mounting specific ceph fs?

2022-09-23 Thread Ramana Krisna Venkatesh Raja
On Fri, Sep 23, 2022 at 6:41 PM Sagittarius-A Black Hole wrote: > > Hi, > > The below fstab entry works, so that is a given. > But how do I specify which Ceph filesystem I want to mount in this fstab > format? > > 192.168.1.11,192.168.1.12,192.168.1.13:/ /media/ceph_fs/ > name=james_user,

[ceph-users] Re: Fstab entry for mounting specific ceph fs?

2022-09-23 Thread Wesley Dillingham
Try adding the mds_namespace option like so: 192.168.1.11,192.168.1.12,192.168.1.13:/ /media/ceph_fs/ name=james_user,secretfile=/etc/ceph/secret.key,mds_namespace=myfs On Fri, Sep 23, 2022 at 6:41 PM Sagittarius-A Black Hole <nigrat...@gmail.com> wrote: > Hi, > > The below fstab entry works,

[ceph-users] Fstab entry for mounting specific ceph fs?

2022-09-23 Thread Sagittarius-A Black Hole
Hi, The below fstab entry works, so that is a given. But how do I specify which Ceph filesystem I want to mount in this fstab format? 192.168.1.11,192.168.1.12,192.168.1.13:/ /media/ceph_fs/ name=james_user,secretfile=/etc/ceph/secret.key I have tried different ways, but always get the

[ceph-users] Re: Freak issue every few weeks

2022-09-23 Thread J-P Methot
We just got a reply from Intel telling us that there's a new firmware coming out soon to fix an issue where S4510 and S4610 drives get IO timeouts that may lead to drive drops when under heavy load. This might very well be the source of our issue. On 9/23/22 11:12, Stefan Kooman wrote: On

[ceph-users] Re: Balancer Distribution Help

2022-09-23 Thread Wyll Ingersoll
Understood, that was a typo on my part. Definitely don't cancel-backfill after generating the moves from placementoptimizer. From: Josh Baergen Sent: Friday, September 23, 2022 11:31 AM To: Wyll Ingersoll Cc: Eugen Block; ceph-users@ceph.io Subject: Re:

[ceph-users] Re: Balancer Distribution Help

2022-09-23 Thread Josh Baergen
Hey Wyll,
> $ pgremapper cancel-backfill --yes # to stop all pending operations
> $ placementoptimizer.py balance --max-pg-moves 100 | tee upmap-moves
> $ bash upmap-moves
>
> Repeat the above 3 steps until balance is achieved, then re-enable the balancer and unset the "no" flags set
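A minimal shell sketch of that loop, assuming pgremapper and placementoptimizer.py are on the PATH and the balancer module was disabled beforehand; the stop condition is illustrative, not part of the quoted instructions:

  #!/bin/bash
  while true; do
      pgremapper cancel-backfill --yes                               # stop all pending operations
      placementoptimizer.py balance --max-pg-moves 100 | tee upmap-moves
      [ -s upmap-moves ] || break                                    # optimizer produced no moves: balanced
      bash upmap-moves                                               # apply the generated upmap commands
  done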

[ceph-users] Re: Balancer Distribution Help

2022-09-23 Thread Stefan Kooman
On 9/23/22 17:05, Wyll Ingersoll wrote: When doing manual remapping/rebalancing with tools like pgremapper and placementoptimizer, what are the recommended settings for norebalance, norecover, nobackfill? Should the balancer module be disabled if we are manually issuing the pg remap commands

[ceph-users] Re: Freak issue every few weeks

2022-09-23 Thread Stefan Kooman
On 9/23/22 15:22, J-P Methot wrote: Thank you for your reply; discard is not enabled in our configuration, as it is mainly the default conf. Are you suggesting we enable it? No. There is no consensus on whether enabling it is a good idea (depends on proper implementation, among other things). From my

[ceph-users] Re: Balancer Distribution Help

2022-09-23 Thread Wyll Ingersoll
When doing manual remapping/rebalancing with tools like pgremapper and placementoptimizer, what are the recommended settings for norebalance, norecover, nobackfill? Should the balancer module be disabled if we are manually issuing the pg remap commands generated by those scripts so it doesn't
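For reference, the flags and the balancer module mentioned here are toggled with commands of the following form; this is only the syntax, since which of them to set during manual upmap work is exactly the question being asked:

  ceph balancer off
  ceph osd set norebalance
  ceph osd set nobackfill
  ceph osd set norecover
  # ...do the manual remapping work, then undo:
  ceph osd unset norecover
  ceph osd unset nobackfill
  ceph osd unset norebalance
  ceph balancer on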

[ceph-users] Re: how to enable ceph fscache from kernel module

2022-09-23 Thread David Yang
I found in some articles on the net that their ceph.ko depends on the fscache module.
root@client:~# lsmod | grep ceph
ceph        376832  1
libceph     315392  1 ceph
fscache      65536  1 ceph
libcrc32c    16384  3 xfs,raid456,libceph
root@client:~# modinfo ceph
filename:
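One way to check whether the running kernel was built with CephFS fscache support (paths and option names assumed to be the usual upstream ones, and the mount line's address and client name are placeholders; adjust for the distribution):

  grep -E 'CONFIG_CEPH_FSCACHE|CONFIG_FSCACHE=' /boot/config-$(uname -r)
  modinfo ceph | grep -i depends
  # if supported, fscache is requested per mount with the 'fsc' option:
  mount -t ceph 192.168.1.11:/ /mnt/cephfs -o name=client1,fsc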

[ceph-users] Re: Freak issue every few weeks

2022-09-23 Thread J-P Methot
Thank you for your reply; discard is not enabled in our configuration, as it is mainly the default conf. Are you suggesting we enable it? On 9/22/22 14:20, Stefan Kooman wrote: Just guessing here: have you configured "discard" (bdev enable discard / bdev async discard)? We've seen monitor slow
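For reference, those two settings are normally applied with something like the commands below (option names written with underscores, as ceph config expects); whether to turn them on at all is exactly what is being discussed here:

  ceph config set osd bdev_enable_discard true
  ceph config set osd bdev_async_discard true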

[ceph-users] Re: Question about recovery priority

2022-09-23 Thread Josh Baergen
Hi Fulvio, > leads to a much shorter and less detailed page, and I assumed Nautilus > was far behind Quincy in managing this... The only major change I'm aware of between Nautilus and Quincy is that in Quincy the mClock scheduler is able to automatically tune up/down backfill parameters to
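As a rough illustration of the Quincy-era mechanism being referred to, the mClock scheduler and one of its documented profiles can be selected like this (not a recommendation from this thread, and only meaningful on releases that ship mClock):

  ceph config set osd osd_op_queue mclock_scheduler
  ceph config set osd osd_mclock_profile high_recovery_ops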

[ceph-users] Changing daemon config at runtime: tell, injectargs, config set and their differences

2022-09-23 Thread Oliver Schmidt
Hi everyone, while evaluating different config options at our Ceph cluster, I discovered that there are multiple ways to apply (ephemeral) config changes to specific running daemons. But even after researching docs and manpages, and doing some experiments, I fail to understand when to use
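For readers following along, the three mechanisms being compared look roughly like this for a single OSD (daemon id and option are arbitrary examples; their exact semantics and persistence are the open question of the post):

  ceph tell osd.0 config set debug_ms 1       # runtime change via the daemon itself
  ceph tell osd.0 injectargs '--debug_ms 1'   # older runtime-injection mechanism
  ceph config set osd.0 debug_ms 1            # stored centrally in the monitors' config database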

[ceph-users] Why OSD could report spurious read errors.

2022-09-23 Thread Igor Fedotov
Hello All! Just to bring this knowledge to a wider audience... Under some circumstances OSDs/clusters might report (and even suffer from) spurious disk read errors. The following re-posted comment sheds light on the root cause. Many thanks to Canonical's folks for that. Originally posted

[ceph-users] Re: Any disadvantage to go above the 100pg/osd or 4osd/disk?

2022-09-23 Thread Eugen Block
Well, if that issue occurs it will be at the beginning of the recovery, so you may not notice it until you get inactive PGs. We hit that limit when we rebuilt all OSDs on one server with many EC chunks. Setting osd_max_pg_per_osd_hard_ratio to 5 (default 3) helped avoid inactive PGs for
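For reference, the override mentioned above would be applied along these lines (the value 5 is the one quoted in the message; whether it is appropriate depends on the cluster):

  ceph config set osd osd_max_pg_per_osd_hard_ratio 5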

[ceph-users] Re: Question about recovery priority

2022-09-23 Thread Fulvio Galeazzi
Hello Josh, thanks for your feedback! On 9/22/22 14:44, Josh Baergen wrote: Hi Fulvio, https://docs.ceph.com/en/quincy/dev/osd_internals/backfill_reservation/ describes the prioritization and reservation mechanism used for recovery and backfill. AIUI, unless a PG is below min_size, all

[ceph-users] Re: Any disadvantage to go above the 100pg/osd or 4osd/disk?

2022-09-23 Thread Szabo, Istvan (Agoda)
Good to know, thank you. So in that case, during recovery, it's worth increasing those values, right? Istvan Szabo Senior Infrastructure Engineer --- Agoda Services Co., Ltd. e: istvan.sz...@agoda.com ---

[ceph-users] Re: Balancer Distribution Help

2022-09-23 Thread Eugen Block
+1 for increasing PG numbers, those are quite low. Quoting Bailey Allison: Hi Reed, Just taking a quick glance at the Pastebin provided, I have to say your cluster balance is already pretty damn good, all things considered. We've seen the upmap balancer at its best in practice
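For anyone unfamiliar with the knob under discussion, raising a pool's PG count is done roughly as follows; the pool name and target value are placeholders, the target should normally be a power of two sized for the OSD count, and the pg_autoscaler should be set to off or warn for that pool first so it does not fight the change:

  ceph osd pool set <pool-name> pg_autoscale_mode warn
  ceph osd pool set <pool-name> pg_num 2048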

[ceph-users] Re: Any disadvantage to go above the 100pg/osd or 4osd/disk?

2022-09-23 Thread Eugen Block
Hi, I can't speak from the developers' perspective, but we discussed this just recently internally and with a customer. We doubled the number of PGs on one of our customer's data pools from around 100 to 200 PGs/OSD (HDDs with RocksDB on SSDs). We're still waiting for the final conclusion

[ceph-users] Re: Balancer Distribution Help

2022-09-23 Thread Stefan Kooman
On 9/22/22 21:48, Reed Dier wrote: Any tips or help would be greatly appreciated. Try JJ's Ceph balancer [1]. In our case it turned out to be *way* more efficient than the built-in balancer (faster convergence, fewer data movements involved), and it was able to achieve a very good PG distribution and