Hallo Dan,
I am running a slightly outdated Nautilus, version 14.2.16, and I
don't remember ever playing with upmaps in the past.
Following your suggestion, I removed a bunch of upmaps (the "longer"
lines) and after a while I verified that all PGs are properly mapped.
Thanks!
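(For reference, checks along these lines can confirm such a cleanup; this is a sketch, not the exact commands used above.)

  # list any upmap exceptions still present in the osdmap
  ceph osd dump | grep pg_upmap_items
  # confirm no PGs are left remapped or otherwise unclean
  ceph pg ls remapped
  ceph health detail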
Hi Fulvio,
I suggest removing only the upmaps which are clearly incorrect, and
then see if the upmap balancer re-creates them.
Perhaps they were created when they were not incorrect, when you had a
different crush rule?
Or perhaps you're running an old version of ceph which had a buggy
balancer.
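(For reference, a single upmap exception can be dropped by hand with rm-pg-upmap-items; the PG id below is only an example, taken from later in the thread.)

  # remove the upmap exception for one PG and let CRUSH place it again
  ceph osd rm-pg-upmap-items 116.453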
Hallo Dan, Nathan, thanks for your replies and apologies for my silence.
Sorry, I had made a typo... the rule is really 6+4. And to reply to
Nathan's message, the rule was built like this in anticipation of
getting additional servers, at which point I will relax the "2
chunks per host" constraint.
Hold on: 8+4 needs 12 osds but you only show 10 there. Shouldn't you choose
6 type host and then chooseleaf 2 type osd?
.. Dan
On Thu, May 20, 2021, 1:30 PM Fulvio Galeazzi wrote:
> Hallo Dan, Bryan,
> I have a rule similar to yours, for an 8+4 pool, with only
> difference that I
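(In rule form, the 6-hosts-times-2-OSDs layout Dan suggests above would look roughly like the following; the rule name, id, min/max_size and root name are assumptions, not taken from the cluster.)

  rule ec_8_4_two_per_host {
      id 7
      type erasure
      min_size 3
      max_size 12
      step set_chooseleaf_tries 5
      step set_choose_tries 100
      step take default
      step choose indep 6 type host
      step chooseleaf indep 2 type osd
      step emit
  }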
The obvious thing to do is to set 4+2 instead - is that not an option?
On Wed, May 12, 2021 at 11:58 AM Bryan Stillwell wrote:
>
> I'm trying to figure out a CRUSH rule that will spread data out across my
> cluster as much as possible, but not more than 2 chunks per host.
>
> If I use the
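(Switching to 4+2 as suggested above would mean a new erasure-code profile and a new pool, since k and m cannot be changed on an existing pool; a sketch, with the profile and pool names made up.)

  ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
  ceph osd pool create cephfs_data_ec42 64 64 erasure ec42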
Hi Fulvio,
That's strange... It doesn't seem right to me.
Are there any upmaps for that PG?
ceph osd dump | grep upmap | grep 116.453
Cheers, Dan
On Thu, May 20, 2021, 1:30 PM Fulvio Galeazzi wrote:
> Hallo Dan, Bryan,
> I have a rule similar to yours, for an 8+4 pool, with only
Hallo Dan, Bryan,
I have a rule similar to yours, for an 8+4 pool, with the only
difference that I replaced the second "choose" with "chooseleaf", which
I understand should make no difference:
rule default.rgw.buckets.data {
id 6
type erasure
min_size 3
Hi Bryan,
I had to do something similar, and never found a rule to place "up to"
2 chunks per host, so I stayed with the placement of *exactly* 2
chunks per host.
But I did this slightly differently to what you wrote earlier: my rule
chooses exactly 4 hosts, then chooses exactly 2 osds on each:
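(The rule itself is cut off in the archive; as rule steps, that placement of exactly 4 hosts with 2 OSDs each would read roughly like the sketch below, with the root name assumed.)

  step take default
  step choose indep 4 type host
  step chooseleaf indep 2 type osd
  step emit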
Actually both our solutions don't work very well. Frequently the same OSD was
chosen for multiple chunks:
8.72    9751    0    0    0    40895512576    0    0    1302    active+clean    2h    224790'12801    225410:49810    [13,1,14,11,18,2,19,13]p13
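(A quick way to spot PGs like that one, where the same OSD appears more than once in the up set; the pool name is the one used elsewhere in the thread.)

  ceph pg ls-by-pool cephfs_data_ec62 -f json | \
      jq -r '.pg_stats[] | select((.up | length) != (.up | unique | length)) | .pgid'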
This works better than my solution. It allows the cluster to put more PGs on
the systems with more space on them:
# for pg in $(ceph pg ls-by-pool cephfs_data_ec62 -f json | jq -r
'.pg_stats[].pgid'); do
> echo $pg
> for osd in $(ceph pg map $pg -f json | jq -r '.up[]'); do
> ceph osd
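(The last command is cut off above; a complete version of that per-host tally might look like the following, where the "ceph osd find" / jq tail is an assumption about the missing part.)

  for pg in $(ceph pg ls-by-pool cephfs_data_ec62 -f json | jq -r '.pg_stats[].pgid'); do
      echo "$pg"
      # count how many chunks of this PG land on each host
      for osd in $(ceph pg map $pg -f json | jq -r '.up[]'); do
          ceph osd find "$osd" -f json | jq -r '.crush_location.host'
      done | sort | uniq -c
  done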
Would something like this work?
step take default
step choose indep 4 type host
step chooseleaf indep 1 type osd
step emit
step take default
step choose indep 0 type host
step chooseleaf indep 1 type osd
step emit
J.
‐‐‐ Original Message ‐‐‐
On Wednesday, May 12th, 2021 at 17:58, Bryan Stillwell wrote:
I was able to figure out the solution with this rule:
step take default
step choose indep 0 type host
step chooseleaf indep 1 type osd
step emit
step take default
step choose indep 0 type host
step chooseleaf indep 1 type osd
step emit
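(A rule change like this can be tested offline before injecting it into the cluster; the rule id and chunk count below are assumptions, chosen to match a 6+2 pool.)

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt        # edit the rule here
  crushtool -c crushmap.txt -o crushmap-new.bin
  # show where each PG would be placed, and flag any bad mappings
  crushtool -i crushmap-new.bin --test --rule 6 --num-rep 8 --show-mappings
  crushtool -i crushmap-new.bin --test --rule 6 --num-rep 8 --show-bad-mappings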