[ceph-users] bluestore osd nearfull but no pgs on it

2023-11-17 Thread Debian
Hi, after a massive rebalance (tunables) my small SSD OSDs are getting full. I changed my crush rules so there are actually no PGs/pools on them, but the disks stay full: ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable) ID CLASS WEIGHT REWEIGHT SIZE    R
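A minimal diagnostic sketch for this kind of report (osd.149 is the OSD mentioned later in the thread; standard ceph CLI calls, nothing version-specific):

  # per-OSD utilization, weight and PG count
  ceph osd df tree
  # list any PGs still mapped to the suspect OSD (empty output = no PGs)
  ceph pg ls-by-osd 149
  # pools and the crush rule each of them references
  ceph osd pool ls detail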

[ceph-users] Re: bluestore osd nearfull but no pgs on it

2023-11-17 Thread Debian
the crush rule for that pool. You can paste the outputs here. Quoting Debian: Hi, after a massive rebalance (tunables) my small SSD OSDs are getting full. I changed my crush rules so there are actually no PGs/pools on them, but the disks stay full: ceph version 14.2.21
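A hedged sketch of the outputs being asked for here; <pool-name> and <rule-name> are placeholders, not values from the thread:

  # which crush rule a pool currently uses
  ceph osd pool get <pool-name> crush_rule
  # the full definition of that rule
  ceph osd crush rule dump <rule-name>
  # the crush tree including device-class shadow hierarchies
  ceph osd crush tree --show-shadow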

[ceph-users] Re: bluestore osd nearfull but no pgs on it

2023-11-19 Thread Debian
t;: "1048576",     "bluefs_allocator": "hybrid",     "bluefs_buffered_io": "false",     "bluefs_check_for_zeros": "false",     "bluefs_compact_log_sync": "false",     "bluefs_log_compact_min_ratio": "5.

[ceph-users] Re: bluestore osd nearfull but no pgs on it

2023-11-20 Thread Debian
up on them" in this mailing list. Maybe you encounter the same problem as me. Michal On 11/20/23 08:56, Debian wrote: Hi, the block.db size ist default and not custom configured: current: bluefs.db_used_bytes: 9602859008 bluefs.db_used_bytes: 469434368 ceph daemon osd.149 c

[ceph-users] Re: bluestore osd nearfull but no pgs on it

2023-11-20 Thread Debian
? It's not really clear what you're doing without the necessary context. You can just add the 'ceph daemon osd.{OSD} perf dump' output here or in some pastebin. Quoting Debian: Hi, the block.db size is default and not custom configured: current: bluefs.db_used_bytes: 9
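A minimal way to capture that output for a pastebin, assuming the command is run on the node hosting osd.149:

  # save the full perf dump to a file
  ceph daemon osd.149 perf dump > osd.149-perf.json
  # quick check that the JSON is complete before sharing it
  python3 -m json.tool osd.149-perf.json | head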

[ceph-users] bluestore osd nearfull but no pgs on it

2023-11-27 Thread Debian
Hi, after a massive rebalance (tunables) my small SSD OSDs are getting full. I changed my crush rules so there are actually no PGs/pools on them, but the disks stay full: ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable) ID CLASS WEIGHT REWEIGHT SIZE    R

[ceph-users] ceph and raid 1 replication

2024-04-03 Thread Roberto Maggi @ Debian
Hi everyone, I'm new to ceph and I'm still studying it. In my company we decided to test ceph for possible further implementations. Although I understood its capabilities, I'm still doubtful about how to set up replication. Once implemented in production I can accept a slight lack of perf

[ceph-users] Re: ceph and raid 1 replication

2024-04-03 Thread Roberto Maggi @ Debian
Thanks for the considerations. On 4/3/24 13:08, Janne Johansson wrote: Hi everyone, I'm new to ceph and I'm still studying it. In my company we decided to test ceph for possible further implementations. Although I understood its capabilities, I'm still doubtful about how to set up replication. Defa
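A small sketch of RAID-1-like replication at the pool level (pool name and PG count are made up for the example):

  # create a replicated pool and set the replica counts explicitly
  ceph osd pool create testpool 64 64 replicated
  ceph osd pool set testpool size 3       # three copies is ceph's default
  ceph osd pool set testpool min_size 2   # still serve I/O with one copy down
  ceph osd pool get testpool size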

[ceph-users] ceph recipe for nfs exports

2024-04-24 Thread Roberto Maggi @ Debian
Hi you all, I'm almost new to ceph and I'm understanding, day by day, why the official support is so expensive :) I'm setting up a ceph nfs network cluster whose recipe can be found below. ### --> cluster creation cephadm bootstrap --mon-ip 10.20.20.81 --cluster-ne
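A hedged sketch of the first steps of such a recipe; the /24 cluster network, the NFS cluster name and the host list are assumptions, since the original command is truncated after --cluster-ne:

  # bootstrap with an explicit cluster network (subnet assumed)
  cephadm bootstrap --mon-ip 10.20.20.81 --cluster-network 10.20.20.0/24
  # deploy an nfs-ganesha cluster once the base cluster is up
  ceph nfs cluster create mynfs "2 cephstage01 cephstage02"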

[ceph-users] Re: ceph recipe for nfs exports

2024-04-25 Thread Roberto Maggi @ Debian
port is mounted on. It's optional and defaults to "/" (so the export you made is mounted at the root of the fs). I think that's the one that really matters. The pseudo-path seems to just act like a user-facing name for the path. On Wed, Apr 24, 2024 at 3:40 AM Roberto Maggi @ Deb
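A sketch of an export created with an explicit path, with flag spelling as in recent releases; cluster-id, fs name and paths are placeholders:

  # --path is the CephFS directory actually exported (defaults to "/"),
  # --pseudo-path is the name clients will see and mount
  ceph nfs export create cephfs --cluster-id mynfs --fsname myfs \
      --pseudo-path /data --path /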

[ceph-users] Re: ceph recipe for nfs exports

2024-04-29 Thread Roberto Maggi @ Debian
't understand the concept of "pseudo path". I don't know at a low level either, but it seems to just be the path nfs-ganesha will present to the user. There is another argument to `ceph nfs export create` which is just "path" rather than pseudo-path that marks what actua
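From the client side it is the pseudo-path that gets mounted, not the CephFS path behind it; the ganesha address and paths below are placeholders:

  # mount the export via its pseudo-path over NFSv4.1
  mount -t nfs -o nfsvers=4.1,proto=tcp 10.20.20.81:/data /mnt/data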

[ceph-users] service:mgr [ERROR] "Failed to apply:

2024-05-02 Thread Roberto Maggi @ Debian
Hi you all, I have been facing this problem for a couple of days. Although I already destroyed the cluster a couple of times, I continuously get this error. I instruct ceph to place 3 daemons: ceph orch apply mgr 3 --placement="cephstage01:10.20.20.81,cephstage02:10.20.20.82,cephstage03:10.20.20.83
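A hedged alternative spelling of that placement, in case the error comes from mixing the bare count with --placement (that is an assumption; only the truncated command above is from the original post):

  # count plus plain hostnames in a single placement spec
  ceph orch apply mgr --placement="3 cephstage01 cephstage02 cephstage03"
  # hosts referenced in a placement must already be known to the orchestrator
  ceph orch host ls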

[ceph-users] Re: Preferred distro for Ceph

2024-09-05 Thread Roberto Maggi @ Debian
Hi, I have never tried anything other than Debian. On 9/5/24 12:33 PM, Boris wrote: Didn't you already get the answer from the reddit thread? https://www.reddit.com/r/ceph/comments/1f88u6m/prefered_distro_for_ceph/ I always point here: https://docs.ceph.com/en/latest/start/os-recommendations/ a