Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Martin Buss
Sent: 14 December 2022 19:32
To: ceph-users@ceph.io
Subject: [ceph-users] Re: New pool created with 2048 pg_num not executed
will do, that will take another day or so.
Can this have to do anything with osd_pg_bits?
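As far as I know, osd_pg_bits only sizes the PG count of the pools created at cluster bootstrap, so it is unlikely to cap an explicit pg_num on a new pool. A quick way to check the value it currently has, assuming the standard 'ceph config' CLI:

    ceph config get osd osd_pg_bits    # default is 6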
I can't find

ceph osd df tree
ceph status

anywhere. I thought you posted it, but well. Could you please post the output
of these commands?
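For anyone reproducing this, the requested output can be captured to files and attached; the file names here are only illustrative:

    ceph osd df tree > osd-df-tree.txt    # per-OSD utilisation plus the CRUSH tree
    ceph status      > ceph-status.txt    # overall health and PG states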
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
________________________________________
From: Martin Buss
Sent: 14 December 2022 22:02:43
To: Frank Schilder
[...]

On 14.12.22, Eugen Block wrote:
Then I'd suggest waiting until the backfilling is done and then reporting
back if the PGs are still not created. I don't have information about
the ML admin, sorry.
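Backfill progress can be watched rather than waited out blind; a minimal sketch using standard CLI commands:

    ceph pg stat               # one-line PG state summary, e.g. counts of active+remapped+backfilling
    ceph -s                    # full status; recent releases also show a progress section
    watch -n 10 ceph pg stat   # poll every 10 seconds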
Quoting Martin Buss:
[...] that cephfs_data has been autoscaling while filling; the mismatched
numbers are a result of that:

[...] object_hash rjenkins pg_num 187 pgp_num 59 autoscale_mode off
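For context, autoscale_mode is a per-pool setting. Disabling it looks like this (pool name taken from this thread; the second command is the usual way to change the default for pools created later, included here as an assumption about intent):

    ceph osd pool set cephfs_data pg_autoscale_mode off
    ceph config set global osd_pool_default_pg_autoscale_mode off   # default for new pools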
Does disabling the autoscaler in the middle of scaling leave it like that?
What is the current 'ceph status'?
Quoting Martin Buss:
Hi Eugen,
thanks, sure, below:
pg_num stuck at 1152 and pgp_num stuck at 1024
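Since Nautilus, the mgr raises pg_num/pgp_num toward the target gradually (throttled by the misplaced-object ratio), which is one plausible reading of values stuck below 2048. A hedged sketch for re-asserting the target and checking progress, using the pool name from this thread:

    ceph osd pool set cfs_data pg_num 2048
    ceph osd pool set cfs_data pgp_num 2048
    ceph osd pool get cfs_data pg_num     # compare against the target
    ceph osd pool get cfs_data pgp_num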
Hi list admins, I accidentally posted my private address; can you please
delete that post?
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/JMFG73QMB3MJKHDMNPIKZHQOUUCJPJJN/
Thanks,
Martin
On 14.12.22 15:18, Eugen Block wrote:
Hi,
I haven't been dealing with ceph-volume too much [...]
[...] hashpspool stripe_width 0 pg_num_min 2048
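pg_num_min pins a floor below which the autoscaler will not shrink a pool. If it was not given at creation time, it can be set afterwards, e.g. (pool name assumed from this thread):

    ceph osd pool set cfs_data pg_num_min 2048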
On 14.12.22 15:10, Eugen Block wrote:
Hi,
are there already existing pools in the cluster? Can you share your
'ceph osd df tree' as well as 'ceph osd pool ls detail'? It sounds like
ceph is trying to stay within the limit of mon_max_pg_per_osd.
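That limit is worth checking against the cluster's numbers. With 71 OSDs and an assumed replicated size of 3, the new pool alone would add roughly 2048 x 3 / 71 ≈ 87 PG instances per OSD, which fits under the default of 250 by itself, but existing pools count toward the same limit. A quick check:

    ceph config get mon mon_max_pg_per_osd   # default 250
    ceph osd df tree                         # the PGS column shows the current per-OSD count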
Hi,
on quincy, I created a new pool:

ceph osd pool create cfs_data 2048 2048

6 hosts, 71 OSDs.

The autoscaler is off; I find it kind of strange that the pool is created
with pg_num 1152 and pgp_num 1024, mentioning 2048 as the new target. I
cannot manage to actually make this pool contain 2048 PGs.
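A way to see whether the cluster is still working toward the requested size is to compare the live values with the targets; a minimal sketch, assuming the quincy CLI:

    ceph osd pool ls detail | grep cfs_data   # while scaling, recent releases also print pg_num_target / pgp_num_target
    ceph osd pool get cfs_data pg_num
    ceph osd pool get cfs_data pgp_num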