Hi you all,
I've been facing this problem for a couple of days.
Although I have already destroyed the cluster a couple of times, I
keep getting these errors.
I instruct ceph to place 3 mgr daemons:
ceph orch apply mgr 3
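For what it's worth, the same count can also be pinned to specific hosts
with a placement spec, along these lines (the host names here are just
placeholders, not from my setup):

ceph orch apply mgr --placement="3 node01 node02 node03"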
I don't know at a low level either, but it seems to just be the path
nfs-ganesha will present to the user. There is another argument to
`ceph nfs export create` which is just "path" rather than pseudo-path
that marks what actual path within the cephfs the export is mounted
on. It's optional and defaults to "/" (so the export you made is
mounted at the root of the fs). I think that's the one that really
matters. The pseudo-path seems to just act like a user-facing name for
the path.
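For what it's worth, on recent releases the two arguments look roughly
like this (cluster id, pseudo-path, fs name and path below are just
placeholder values):

ceph nfs export create cephfs --cluster-id mynfs --pseudo-path /data \
    --fsname myfs --path /volumes/data

i.e. clients see the export as /data, while on the CephFS side it is
backed by /volumes/data; leaving --path out falls back to "/".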
On Wed, Apr 24, 2024 at 3:40 AM Roberto Maggi @ Debian
wrote:
Hi you all,
I'm almost new to ceph and I'm understanding, day by day, why the
official support is so expensive :)
I'm setting up a ceph nfs network cluster; the recipe can be found
below.
###
--> cluster creation
cephadm bootstrap --mon-ip 10.20.20.81
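--> a typical continuation of such a recipe (host names and IPs below
are only illustrative placeholders, not the actual ones) would be

ceph orch host add cephnode02 10.20.20.82
ceph orch host add cephnode03 10.20.20.83

--> nfs cluster creation
ceph nfs cluster create mynfs "2 cephnode02 cephnode03"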
Thanks for your consideration.
On 4/3/24 13:08, Janne Johansson wrote:
Hi every one,
I'm new to ceph and I'm still studying it.
In my company we decided to test ceph for possible further implementations.
Although I understood its capabilities, I'm still doubtful about how to
set up replication.
Once implemented in production I can accept a little lacking of
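(For reference, per-pool replication is usually what controls this; a
minimal sketch, with pool name and PG count as placeholders:)

ceph osd pool create testpool 128
ceph osd pool set testpool size 3      # keep 3 copies of every object
ceph osd pool set testpool min_size 2  # serve I/O while at least 2 copies exist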
Hi,
after a massive rebalance (tunables) my small SSD-OSDs are getting full;
I changed my crush rules so there are actually no PGs/pools on them, but
the disks stay full:
ceph version 14.2.21 (5ef401921d7a88aea18ec7558f7f9374ebd8f5a6) nautilus
(stable)
ID CLASS WEIGHT REWEIGHT SIZE
It's not really clear what you're doing without the necessary context.
You can just add the 'ceph daemon osd.{OSD} perf dump' output here or
in some pastebin.
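For example (the OSD id is just an example, and 'ceph daemon' has to be
run on the node hosting that OSD):

ceph daemon osd.12 perf dump | jq .bluefs   # bluefs/db usage counters
ceph osd df tree                            # per-OSD utilisation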
Quoting Debian:
Hi,
the block.db size is default and not custom configured:
current:
bluefs.db_used_bytes: 9602859008
bluefs.db_
group on them" in this mailing list. Maybe you
encounter the same problem as me.
Michal
On 11/20/23 08:56, Debian wrote:
bluefs.db_used_bytes: 469434368
ceph daemon osd.149 c
t;: "1048576",
"bluefs_allocator": "hybrid",
"bluefs_buffered_io": "false",
"bluefs_check_for_zeros": "false",
"bluefs_compact_log_sync": "false",
"bluefs_log_compact_min_ratio": "5.
the crush rule for that pool. You can
paste the outputs here.
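Something like this should produce them (pool name is a placeholder):

ceph osd pool get mypool crush_rule   # which crush rule the pool uses
ceph osd crush rule dump              # full definitions of all rules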
Quoting Debian:
Hi,
after a massive rebalance (tunables) my small SSD-OSDs are getting
full; I changed my crush rules so there are actually no PGs/pools on
them, but the disks stay full:
ceph version 14.2.21