At 08:12, Sridhar Seshasayee wrote:
> Hello Joffrey,
> You could be hitting the slow backfill/recovery issue with
> mclock_scheduler.
> To confirm the above could you please provide the output of the following
> commands?
>
> 1. ceph versions
> 2. ceph config show o
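For reference, a sketch of how those diagnostics might be run (osd.0 is an arbitrary example daemon, not one named in the thread):

  # Report the Ceph release each daemon is running
  ceph versions
  # Show the op queue scheduler and mclock settings in effect on one OSD
  ceph config show osd.0 osd_op_queue
  ceph config show osd.0 | grep mclock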
rgw.buckets.data, with 32 PGs. I would definitely turn off
> autoscale and increase pg_num/pgp_num. Someone with more experience than I
> can chime in, but I would think something like 2048 would be much better.
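A sketch of that suggestion, assuming the pool is rgw.buckets.data as in the truncated line above (2048 is the number floated in the message, not a verified sizing for this cluster):

  # Stop the autoscaler from fighting the manual change
  ceph osd pool set rgw.buckets.data pg_autoscale_mode off
  # Raise the placement group counts (Nautilus and later ramp pgp_num to match)
  ceph osd pool set rgw.buckets.data pg_num 2048
  ceph osd pool set rgw.buckets.data pgp_num 2048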
>
> On Thu, Mar 2, 2023 at 6:12 PM Joffrey wrote:
>
>> root@hbgt-c
> What does ceph df and
> ceph osd dump | grep pool return?
>
> Are you using auto scaling? 289 PGs with 272 TB of data and 60 OSDs, that
> seems like 3-4 PGs per OSD at almost 1 TB each. Unless I'm thinking of this
> wrong.
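One way to check both points, the per-pool PG counts and whether the autoscaler is managing them (standard commands, not output quoted from this cluster):

  # Per-pool settings, including pg_num and the autoscale mode
  ceph osd dump | grep pool
  # What the autoscaler thinks each pool's PG count should be
  ceph osd pool autoscale-status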
>
> On Thu, Mar 2, 2023, 17:37 Joffrey wrote:
>
>> My Ceph Version
>
> That it’s remapped makes me think that what you’re seeing is the balancer
> doing its job.
>
> As far as the scrubbing, do you limit the times when scrubbing can happen?
> Are these HDDs? EC?
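A sketch of how to check both guesses, the balancer and any scrub time window (the option names are the standard ones, not values quoted from this thread):

  # Is the balancer active, and does it have work queued?
  ceph balancer status
  # A restricted scrub window would show up in these options
  ceph config get osd osd_scrub_begin_hour
  ceph config get osd osd_scrub_end_hour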
>
> > On Mar 2, 2023, at 07:20, Joffrey wrote:
> >
> > Hi,
>
Hi,
I have many 'not {deep-}scrubbed in time' and 1 PG remapped+backfilling
and I don't understand why this backfilling is taking so long.
root@hbgt-ceph1-mon3:/# ceph -s
cluster:
id: c300532c-51fa-11ec-9a41-0050569c3b55
health: HEALTH_WARN
15 pgs not deep-scrubbed in time
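To see which PG is backfilling and what is throttling it, a hedged starting point (the PG id itself has to come from the actual listing):

  # Brief per-PG states; the remapped+backfilling PG shows up here
  ceph pg dump pgs_brief | grep backfill
  # The current backfill concurrency limit on the OSDs
  ceph config get osd osd_max_backfills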
OK, but I don't really have SSDs. My SSDs are only for the DB, not for data.
Jof
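Since this reply is in the mclock thread: on a Quincy cluster with HDD data devices, the relevant knobs would be along these lines (osd.0 is again an arbitrary example):

  # Which mclock profile the OSD is running
  ceph config show osd.0 osd_mclock_profile
  # The IOPS capacity mclock assumes for an HDD-backed OSD
  ceph config show osd.0 osd_mclock_max_capacity_iops_hdd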
On Tue, Jul 5, 2022 at 18:01, Tatjana Dehler wrote:
> Hi,
>
> On 7/5/22 13:17, Joffrey wrote:
> > Hi,
> >
> > I upgraded from 16.2.4 to 17.2.0
> >
> > Now, I have a CephImbalance alert
> al CRUSH rules, complex
> topology?
>
> > On Jul 5, 2022, at 4:17 AM, Joffrey wrote:
> >
> > Hi,
> >
> > I upgraded from 16.2.4 to 17.2.0
> >
> > Now, I have a CephImbalance alert with many errors on my OSD "deviates by
> > more than 30%".
Hi,
I upgraded from 16.2.4 to 17.2.0
Now, I have a CephImbalance alert with many errors on my OSD "deviates by
more than 30%".
What can I do?
Does it come from the change in the scale-up/scale-down configuration?
Thank you
Jof
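A sketch of how one might investigate the deviation alert (enabling upmap balancing is a common remedy, not necessarily the fix for this cluster):

  # Per-OSD utilization; the alert fires on the outliers here
  ceph osd df tree
  # Check, and if needed enable, the balancer in upmap mode
  ceph balancer status
  ceph balancer mode upmap
  ceph balancer on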
OK, restarting my OSD 0 fixed the problem! Thank you
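For the record, a sketch of the restart itself, depending on deployment (the fsid placeholder must be the cluster's own; with cephadm the orchestrator form is simpler):

  # Plain systemd, using cephadm's unit naming
  systemctl restart ceph-<fsid>@osd.0.service
  # Or via the orchestrator
  ceph orch daemon restart osd.0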
On Tue, Nov 16, 2021 at 13:32, Stefan Kooman wrote:
> On 11/16/21 13:17, Joffrey wrote:
>
> > "peer_info": [
> > {
> > "peer": "0",
> >
"log_size": 782,
"ondisk_log_size": 782,
"stats_invalid": false,
"dirty_stats_invalid": false,
"omap_stats_invalid": false,
"hitset_stats_invalid": false,
"hitset_bytes_stats_invalid":
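That per-peer stats block is the sort of output a direct PG query produces; a sketch (the PG id 2.0 is an arbitrary example, not one from this thread):

  # Full peering/recovery state for one PG, including peer_info
  ceph pg 2.0 query
  # Shorter view: list PGs stuck in a non-clean state
  ceph pg dump_stuck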
Hi,
I don't understand why my Global Recovery Event never finishes...
I have 3 hosts; all OSDs and hosts are up. My pools are replica 3.
# ceph status
cluster:
id: 0a77af8a-414c-11ec-908a-005056b4f234
health: HEALTH_WARN
Reduced data availability: 1 pg inactive
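Two safe first steps for a recovery event that never finishes (standard commands, not ones quoted from this thread):

  # Which PG is inactive, and what it is waiting on
  ceph health detail
  ceph pg dump_stuck inactive
  # The progress-module events behind "Global Recovery Event"
  ceph progress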