px-c-metadata 19 1.0 128
px-d-data 0 1.0 512
px-d-metadata 0 1.0 128
So the total number of pgs for all pools is currently 2592 which is
far from 22148 pgs?
Any ideas?
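As a side note, one way to cross-check the per-pool sum (a sketch assuming the ceph CLI and jq are available; not from the original thread):

```shell
# Sum pg_num over all pools and compare with the autoscaler's targets.
ceph osd pool ls detail -f json | jq '[.[].pg_num] | add'
ceph osd pool autoscale-status   # PG_NUM vs. the autoscaler's NEW PG_NUM per pool
```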
Thanks Rainer
--
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1
56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312
PGP: http://www.uni-koblenz.de/~krienke/mypgp.html, Fax: +49261287 1001312
d in. I had expected that one OSD should be down now, but
it wasn't.
Even stranger, the problems with slow ops from osd.77 are also
gone for the moment and the cluster is completely healthy again.
Thanks for your help
Rainer
ing in my setup? Is the "LIFE
EXPECTANCY" perhaps only populated if the local predictor predicts a
failure, or should I find something like "good" there if the disk is OK
for the moment? Recently I even had a disk that died, but I did not see
anything in "ceph device ls" for the died
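For reference, these are the commands one would typically use to inspect the devicehealth prediction state (a sketch; `<devid>` is a placeholder taken from the `ceph device ls` output):

```shell
ceph device ls                          # lists devices with the LIFE EXPECTANCY column
ceph device get-health-metrics <devid>  # raw SMART data collected for one device
ceph config get mgr mgr/devicehealth/enable_monitoring   # check monitoring is enabled
```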
4c-81bd-f52a3309161f". Of course this PV no longer
exists, because I pulled the old disk and inserted a new one.
Rainer
On 25.04.22 at 18:26, Josh Baergen wrote:
On Mon, Apr 25, 2022 at 10:22 AM Rainer Krienke wrote:
Hello,
Hi!
--> RuntimeError: The osd ID 49 is already
<3.64t 0
/dev/sdr ceph-7b28ddc2-8500-487b-a693-51d711d26d40 lvm2 a--
<3.64t 0
So how should I proceed? It seems LVM somehow has a dangling PV
left over from the old disk. How can I solve this issue?
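A possible cleanup, assuming the stale PV's disk is physically gone (the VG name is a placeholder, not the elided one from the message above):

```shell
pvs                                 # the stale PV shows its device as missing/[unknown]
vgreduce --removemissing <vg-name>  # drop missing PVs from the volume group
# If the VG belonged entirely to the dead OSD, removing it outright may be cleaner:
# vgremove <vg-name>
```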
Thanks
Rainer
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Hello Dan, hello Stefan,
thank you both very much for the information you provided.
Have a nice day
Rainer
On 24.09.21 at 16:44, Stefan Kooman wrote:
On 9/24/21 08:33, Rainer Krienke wrote:
Hello Dan,
I am also running a production 14.2.22 cluster with 144 HDD-OSDs and
I am wondering whether I
s upgraded cleanly as expected.
* One minor surprise was that the mgrs respawned themselves moments
after the leader restarted into octopus:
Hello Janne,
thank you very much for answering my questions.
Rainer
On 27.08.21 at 12:51, Janne Johansson wrote:
On Fri, 27 Aug 2021 at 12:43, Rainer Krienke wrote:
Hello,
recently I thought about erasure coding and how to set k+m in a useful
way also taking into account the number of
level). So data would not be lost, but the
cluster might stop serving data, be unable to repair itself, and
thus also be unable to become healthy again?
Right or wrong?
Thanks a lot
Rainer
y to check if trimming works?
Thanks for hints
Rainer
is not the only way to see these symptoms.
Is it possible to dump these defaults (so you can dump them before and after an
upgrade and compare/archive them)?
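One way to do this is via the admin socket, which dumps every option's effective value, defaults included (a sketch; `osd.0` is illustrative):

```shell
ceph daemon osd.0 config show > config-before-upgrade.json
# ... upgrade ...
ceph daemon osd.0 config show > config-after-upgrade.json
diff config-before-upgrade.json config-after-upgrade.json
```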
, since PG removal is the only workload of this type in a
typical RGW system.
Josh
On Fri, Jul 16, 2021 at 6:58 AM Rainer Krienke wrote:
Hello,
Today I upgraded a Ceph (HDD) cluster consisting of 9 hosts, each with
16 OSDs (a total of 144)
Rainer
14.2.21.tar.gz
* For packages, see https://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 5ef401921d7a88aea18ec7558f7f9374ebd8f5a6
Could you set debug_mgr = 4/5 then check the mgr log for something like this?
mgr[progress] Updated progress to -0. (Rebalancing
after osd... marked in)
Cheers, Dan
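Dan's suggestion spelled out as commands (the log path may differ per distribution):

```shell
ceph config set mgr debug_mgr 4/5
grep "Updated progress" /var/log/ceph/ceph-mgr.*.log
```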
On Tue, May 4, 2021 at 4:10 PM Rainer Krienke wrote:
Hello,
I am playing around with a test ceph 14.2.20 clus
on 20" before starting the experiment
this bug does not show up. I also tried the very same procedure on the
same cluster upgraded to 15.2.11, but I was unable to reproduce the bug
on that version.
Thanks
Rainer
ceph health detail for this
situation.
Thanks very much to everyone who answered my request to help out.
What is left now is to replace the disks and then bring the two OSDs up
again.
Have a nice day
Rainer
On 30.03.21 at 13:32, Burkhard Linke wrote:
Hi,
On 30.03.21 13:05, Rainer Krienke
n
you check if this option is present and set to true? If it is not working as
intended, a tracker ticket might be in order.
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
don't forget to
increase min_size back to 5 when the recovery has finished, that's very
important!
Regards,
Eugen
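Eugen's advice as commands (the pool name and the temporary value are illustrative; the thread only fixes the final value at 5):

```shell
ceph osd pool set <pool> min_size 4   # temporarily, so the pool keeps serving I/O
# ... wait for recovery/backfill to finish ...
ceph osd pool set <pool> min_size 5   # restore; don't forget this step
```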
Quoting Rainer Krienke:
Hello,
I run a Ceph Nautilus cluster with 9 hosts and 144 OSDs. Last night we
lost two disks, so two OSDs (67, 90) are down. The two di
MiB/s, 12 objects/s
Thanks a lot
Rainer
re I could replace the broken disk?
Any comments on this?
Thanks
Rainer
ing
cephfs. Reading the message, I started asking myself whether I should
scrub CephFS regularly, whether Ceph does this on its own, or whether
scrubbing is only needed in case CephFS has been damaged.
Does anyone know more about this?
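For reference: CephFS forward scrub is started on demand via the MDS rather than running periodically by default (a sketch; `<fsname>` is a placeholder):

```shell
ceph tell mds.<fsname>:0 scrub start / recursive
ceph tell mds.<fsname>:0 scrub status
```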
Thanks for your help
Rainer
://github.com/ceph/ceph/commit/0253205ef36acc6759a3a9687c5eb1b27aa901bf
>
> So at the moment your PGs are merging.
>
> If you want to stop that change, then set autoscale_mode to off or
> warn for the relevant pools, then set the pg_num back to the current
> (1024).
>
> -- Dan
>
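Dan's suggestion as commands (the pool name is a placeholder):

```shell
ceph osd pool set <pool> pg_autoscale_mode warn   # or 'off'
ceph osd pool set <pool> pg_num 1024              # pin back to the current value
```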
856 31856
> # diff <(osdmaptool --print 31853) <(osdmaptool --print 31856)
>
> -- dan
>
>
>
> On Thu, Mar 5, 2020 at 10:05 AM Rainer Krienke wrote:
>>
>> Hello,
>>
>> before I ran the update to 14.2.8 I checked that the state was healthy
>>
in` also before the upgrade?
> If an osd was out, then you upgraded/restarted and it went back in, it
> would trigger data movement.
> (I usually set noin before an upgrade).
>
> -- dan
>
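The `noin` flag Dan mentions prevents restarted OSDs from being automatically marked in again, avoiding surprise data movement during maintenance:

```shell
ceph osd set noin
# ... upgrade and restart OSDs ...
ceph osd unset noin
```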
> On Thu, Mar 5, 2020 at 9:46 AM Rainer Krienke wrote:
>>
>> I found some inform
122 (osd.122) 634 : cluster [DBG]
36.3f0s0 starting backfill to osd.25(3) from (0'0,0'0] MAX to 31854'314030
2020-03-05 07:24:41.595369 osd.126 (osd.126) 559 : cluster [DBG]
36.3e7s0 starting backfill to osd.7(3) from (0'0,0'0] MAX to 31854'275181
2020-03-05 07:24:41.59
+undersized+degraded+remapped+backfill_wait
5 active+undersized+degraded+remapped+backfilling
io:
client: 33 MiB/s rd, 7.2 MiB/s wr, 91 op/s rd, 186 op/s wr
recovery: 83 MiB/s, 20 objects/s
in "ceph auth caps ..."
call from above everything works.
Any idea how I can get this setup to work?
Thanks
Rainer
ld go down at the same time?
Is my understanding correct?
Thank you very much for your help
Rainer