From "Jan Pekař - Imatic"
To m...@tuxis.nl; ceph-users@ceph.io
Date 2/25/2023 4:14:54 PM
Subject Re: [ceph-users] OSD upgrade problem nautilus->octopus - snap_mapper upgrade stuck
Hi,
I tried upgrading to Pacific now. The same result - the OSD does not start, stuck at 1500 keys.
JP
On 23/02/2023 00.16, Jan Pekař - Imatic wrote:
Hi,
I enabled debug and got the same result - 1500 keys is where it ends. I also enabled
debug_filestore and ...
2023-02-23T00:02:34.876+0100 7f8ef26d1700
-- Original Message --
From "Jan Pekař - Imatic"
To ceph-users@ceph.io
Date 1/12/2023 5:53:02 PM
Subject [ceph-users] OSD upgrade problem nautilus->octopus - snap_mapper upgrade stuck
Hi all,
I have a problem upgrading nautilus to octopus on my OSD.
Upgrading mon and mgr went OK, but the first OSD got stuck on
2023-01-12T09:25:54.122+0100 7f49ff3eae00 1 osd.0 126556 init upgrade snap_mapper (first start as octopus)
and there was no activity after that for more than 48 hours. No disk
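For anyone reproducing this, one way to see where the snap_mapper conversion stalls is to raise the OSD debug levels before the first octopus start. A minimal ceph.conf sketch - the verbosity value 20 is a common debugging choice, not something prescribed in this thread:

```ini
# ceph.conf fragment (sketch): verbose OSD logging for the next start
[osd]
    debug osd = 20
    # debug_filestore is relevant because the thread mentions enabling
    # it as well, which suggests a FileStore-backed OSD
    debug filestore = 20
```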
Hi all,
we have a problem on our production cluster running nautilus (14.2.22).
The cluster is almost full, and a few months ago we noticed issues with slow peering - when we restart any OSD (or host), it takes hours to finish
the peering process instead of minutes.
We noticed that some pool contains 90k
Hi all,
I would like to "pair" a MonSession with its TCP connection to find the real process that is using that session. I need it to identify processes with
old ceph features.
A MonSession looks like
MonSession(client.84324148 [..IP...]:0/3096235764 is open allow *, features 0x27018fb86aa42ada
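As a starting point, the session dump line can be parsed mechanically and its client address matched against live TCP connections on the client host. A minimal Python sketch; the regex, the helper name, and the IP 10.0.0.5 are my own assumptions for illustration, not a format guaranteed by this thread:

```python
import re

# Sketch (my own helper, not a Ceph API): pull the client entity,
# address, nonce, and feature bits out of a MonSession dump line, so
# the address can be matched against TCP connections reported by
# e.g. `ss -tnp` on the client host. The IP below is a made-up example.
SESSION_RE = re.compile(
    r"MonSession\((?P<entity>\S+)\s+"
    r"(?P<addr>\[?[0-9a-fA-F:.]+\]?):(?P<port>\d+)/(?P<nonce>\d+)"
    r".*?features\s+(?P<features>0x[0-9a-f]+)"
)

def parse_session(line):
    """Return the session's fields as a dict, or None if it doesn't match."""
    m = SESSION_RE.search(line)
    return m.groupdict() if m else None

example = ("MonSession(client.84324148 10.0.0.5:0/3096235764 is open "
           "allow *, features 0x27018fb86aa42ada")
print(parse_session(example))
```

Note that client MonSessions show port 0 (only the nonce varies), so the pairing with `ss -tnp` output is by IP only - on a host running several Ceph clients you would still need to check each candidate PID.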
Hi Ben,
we are not using an EC pool on that cluster.
The OSD-out behavior almost stopped when we solved the memory issues (less memory
allocated to the OSDs).
We are not working on that cluster anymore, so we have no other info about
that problem.
Jan
On 20/07/2020 07.59, Benoît Knecht wrote:
Hi Jan,
AM, Jan Pekař - Imatic wrote:
Each node has 64 GB RAM, so it should be enough (12 OSDs = 48 GB used).
On 21/03/2020 13.14, XuYun wrote:
BlueStore requires more than 4 GB of memory per OSD - do you have enough memory?
On 21 March 2020 at 8:09 PM, Jan Pekař - Imatic wrote:
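For context, a back-of-envelope check of the memory estimate in this exchange, assuming BlueStore's default osd_memory_target of 4 GiB per OSD:

```python
# Rough per-node memory budget for the setup described in the thread.
# osd_memory_target defaults to 4 GiB per BlueStore OSD; real usage can
# overshoot the target, so the computed headroom is optimistic.
osds_per_node = 12
target_gib = 4
node_ram_gib = 64

osd_budget_gib = osds_per_node * target_gib   # 48 GiB for the OSDs
headroom_gib = node_ram_gib - osd_budget_gib  # 16 GiB for everything else
print(osd_budget_gib, headroom_gib)
```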
Hello,
I have ceph cluster version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8)
nautilus (stable)
4 nodes - each node has 11 HDDs, 1 SSD, 10 Gbit network.
The cluster was empty, a fresh install. We filled it with data (small blocks)
using RGW.
The cluster is now used for testing, so no client was
eue_op 0x561362c80bc0 finish
On 03/02/2020 15.48, Jan Pekař - Imatic wrote:
Hi all,
I have a small cluster, and yesterday I tried to mount an older RBD snapshot to recover data. (I have approx. 230 daily snapshots of one RBD
image on my small ceph.)
After I did the mount and ls operations, the cluster was stuck, and I noticed that 2 of my OSDs ate CPU and rose in memory usage (more