[ceph-users] service:mgr [ERROR] "Failed to apply:

2024-05-02 Thread Roberto Maggi @ Debian
Hi you all, I have been facing this problem for a couple of days. Although I have already destroyed the cluster a couple of times, I continuously get these errors. I instruct ceph to place 3 daemons: ceph orch apply mgr 3
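
A minimal sketch of pinning those three mgr daemons to explicit hosts via a spec file; the host names node01..node03 are hypothetical, not taken from the thread:

  cat > mgr.yaml <<'EOF'
  service_type: mgr
  placement:
    count: 3
    hosts:
      - node01
      - node02
      - node03
  EOF
  ceph orch apply -i mgr.yaml        # same intent as "ceph orch apply mgr 3", but with fixed hosts
  ceph orch ps --daemon-type mgr     # verify where the daemons actually landed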

[ceph-users] Re: ceph recipe for nfs exports

2024-04-29 Thread Roberto Maggi @ Debian
I don't know at a low level either, but it seems to just be the path nfs-ganesha will present to the user. There is another argument to `ceph nfs export create` which is just "path" rather than "pseudo-path", and that marks what actual path within the cephfs the export is mounted on. It's optional
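
For reference, a hedged sketch of how the two arguments relate; the cluster id "mycluster", fs name "myfs", paths and server IP are made up, and the exact flag names/order differ between releases (see `ceph nfs export create cephfs --help`):

  # --pseudo-path is the name clients see, --path is the real directory inside the cephfs
  ceph nfs export create cephfs --cluster-id mycluster --fsname myfs \
      --pseudo-path /shares/data --path /data
  # clients mount the pseudo-path, not the cephfs path
  mount -t nfs -o vers=4.1 10.20.20.81:/shares/data /mnt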

[ceph-users] Re: ceph recipe for nfs exports

2024-04-25 Thread Roberto Maggi @ Debian
defaults to "/" (so the export you made is mounted at the root of the fs). I think that's the one that really matters. The pseudo-path seems to just act like a user-facing name for the path. On Wed, Apr 24, 2024 at 3:40 AM Roberto Maggi @ Debian wrote: Hi you all, I'm almost new
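
To see both fields on an export that already exists (cluster id and pseudo-path are hypothetical; older releases spell the second subcommand `get` instead of `info`):

  ceph nfs export ls mycluster
  ceph nfs export info mycluster /shares/data   # shows "path" (cephfs dir, defaulting to "/") next to "pseudo_path"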

[ceph-users] ceph recipe for nfs exports

2024-04-24 Thread Roberto Maggi @ Debian
Hi you all, I'm almost new to ceph and I'm understanding, day by day, why the official support is so expensive :) I'm setting up a ceph nfs network cluster whose recipe can be found here below. ### --> cluster creation cephadm bootstrap --mon-ip 10.20.20.81
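
A condensed sketch of such a recipe, assuming hypothetical host names and the fs/cluster names used in the examples above (the original post's full recipe is truncated here):

  cephadm bootstrap --mon-ip 10.20.20.81
  ceph orch host add node02 10.20.20.82
  ceph orch host add node03 10.20.20.83
  ceph fs volume create myfs                            # cephfs volume backing the exports
  ceph nfs cluster create mycluster "2 node02 node03"   # two nfs-ganesha daemons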

[ceph-users] Re: ceph and raid 1 replication

2024-04-03 Thread Roberto Maggi @ Debian
Thanks for the considerations. On 4/3/24 13:08, Janne Johansson wrote: Hi everyone, I'm new to ceph and I'm still studying it. In my company we decided to test ceph for possible further implementations. Although I understood its capabilities, I'm still doubtful about how to set up replication.

[ceph-users] ceph and raid 1 replication

2024-04-03 Thread Roberto Maggi @ Debian
Hi everyone, I'm new to ceph and I'm still studying it. In my company we decided to test ceph for possible further implementations. Although I understood its capabilities, I'm still doubtful about how to set up replication. Once implemented in production I can accept a little lack of
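
Replication in ceph is configured per pool rather than per disk pair, so the closest analogue to RAID 1 is a replicated pool; a minimal sketch with a hypothetical pool name:

  ceph osd pool create testpool 128        # 128 placement groups
  ceph osd pool set testpool size 3        # keep three copies of every object
  ceph osd pool set testpool min_size 2    # stay writable while two copies are available
  ceph osd pool get testpool size          # confirm the replication factor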

[ceph-users] bluestore osd nearfull but no pgs on it

2023-11-27 Thread Debian
Hi, after a massive rebalance (tunables) my small SSD-OSDs are getting full. I changed my crush rules so there are actually no pgs/pools on them, but the disks stay full: ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable) ID CLASS WEIGHT REWEIGHT SIZE
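
Two commands that help separate "space used by pgs" from "space the OSD reports as used" (osd.149 is the id that appears later in the thread):

  ceph osd df tree            # per-OSD utilisation, including the PGS column
  ceph pg ls-by-osd osd.149   # should list nothing if no pgs map to that OSD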

[ceph-users] Re: bluestore osd nearfull but no pgs on it

2023-11-20 Thread Debian
osd? It's not really clear what you're doing without the necessary context. You can just add the 'ceph daemon osd.{OSD} perf dump' output here or in some pastebin. Quoting Debian: Hi, the block.db size is default and not custom configured: current: bluefs.db_used_bytes: 9602859008 bluefs.db_
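
Those bluefs counters can be pulled out of the perf dump directly on the OSD host (osd.149 as in the thread; jq is optional):

  ceph daemon osd.149 perf dump | jq '.bluefs'   # db_total_bytes, db_used_bytes, slow_used_bytes, ...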

[ceph-users] Re: bluestore osd nearfull but no pgs on it

2023-11-20 Thread Debian
group on them" in this mailing list. Maybe you encountered the same problem as me. Michal On 11/20/23 08:56, Debian wrote: Hi, the block.db size is default and not custom configured: current: bluefs.db_used_bytes: 9602859008 bluefs.db_used_bytes: 469434368 ceph daemon osd.149 c

[ceph-users] Re: bluestore osd nearfull but no pgs on it

2023-11-19 Thread Debian
": "1048576", "bluefs_allocator": "hybrid", "bluefs_buffered_io": "false", "bluefs_check_for_zeros": "false", "bluefs_compact_log_sync": "false", "bluefs_log_compact_min_ratio": "5.
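
For context, a dump like the one quoted above is typically produced on the OSD host along these lines:

  ceph daemon osd.149 config show | grep bluefs   # running values of the bluefs_* options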

[ceph-users] Re: bluestore osd nearfull but no pgs on it

2023-11-17 Thread Debian
the crush rule for that pool. You can paste the outputs here. Quoting Debian: Hi, after a massive rebalance (tunables) my small SSD-OSDs are getting full. I changed my crush rules so there are actually no pgs/pools on them, but the disks stay full: ceph version 14.2.21
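
The outputs being asked for here are usually gathered with commands such as:

  ceph osd pool ls detail     # shows which crush_rule each pool uses
  ceph osd crush rule dump    # the rules themselves, including device-class restrictions
  ceph osd df tree            # confirms whether the SSD OSDs still hold data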

[ceph-users] bluestore osd nearfull but no pgs on it

2023-11-17 Thread Debian
Hi, after a massive rebalance (tunables) my small SSD-OSDs are getting full. I changed my crush rules so there are actually no pgs/pools on them, but the disks stay full: ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus (stable) ID CLASS WEIGHT REWEIGHT SIZE