[ceph-users] Re: deep scrub and long backfilling

2023-03-05 Thread Alessandro Bolgia
Yes, thanks.
I used those commands.
Will see how it goes.
Best regards

On Sun, 5 Mar 2023 at 19:34, Wesley Dillingham wrote:

> In general it is safe, and I enable it during long-running remapping and
> backfill situations. You can enable it with:
>
>  "ceph config set osd osd_scrub_during_recovery true"
>
> If you have any problems you think are caused by the change, undo it:
>
> Stop scrubs ASAP:
> "ceph osd set nodeep-scrub"
> "ceph osd set noscrub"
>
> Reinstate the previous value:
>  "ceph config set osd osd_scrub_during_recovery false"
>
> Once things stabilize, unset the no-scrub flags to resume normal scrub
> operations:
>
> "ceph osd unset nodeep-scrub"
> "ceph osd unset noscrub"
>
>
>
> Respectfully,
>
> *Wes Dillingham*
> w...@wesdillingham.com
> LinkedIn 
>
>
> On Sat, Mar 4, 2023 at 3:07 AM Janne Johansson wrote:
>
>> On Sat, 4 Mar 2023 at 08:08, wrote:
>> > ceph 16.2.11,
>> > is it safe to enable scrub and deep scrub during backfilling?
>> > I have long recovery/backfilling due to a new crushmap; backfilling is
>> > going slowly and the deep scrub interval has expired, so I have many PGs
>> > not deep-scrubbed in time.
>>
>> It is safe to have it enabled; scrubs will skip the PGs currently
>> being backfilled.
>> It will put some extra load on the cluster, but for most clusters,
>> scrubs are always on by default.
>>
>> --
>> May the most significant bit of your life be positive.
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: deep scrub and long backfilling

2023-03-05 Thread Wesley Dillingham
In general it is safe, and I enable it during long-running remapping and
backfill situations. You can enable it with:

 "ceph config set osd osd_scrub_during_recovery true"

If you have any problems you think are caused by the change, undo it:

Stop scrubs ASAP:
"ceph osd set nodeep-scrub"
"ceph osd set noscrub"

Reinstate the previous value:
 "ceph config set osd osd_scrub_during_recovery false"

Once things stabilize, unset the no-scrub flags to resume normal scrub
operations:

"ceph osd unset nodeep-scrub"
"ceph osd unset noscrub"



Respectfully,

*Wes Dillingham*
w...@wesdillingham.com
LinkedIn 


On Sat, Mar 4, 2023 at 3:07 AM Janne Johansson  wrote:

> On Sat, 4 Mar 2023 at 08:08, wrote:
> > ceph 16.2.11,
> > is it safe to enable scrub and deep scrub during backfilling?
> > I have long recovery/backfilling due to a new crushmap; backfilling is
> > going slowly and the deep scrub interval has expired, so I have many PGs
> > not deep-scrubbed in time.
>
> It is safe to have it enabled; scrubs will skip the PGs currently
> being backfilled.
> It will put some extra load on the cluster, but for most clusters,
> scrubs are always on by default.
>
> --
> May the most significant bit of your life be positive.
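
To see this in practice you can watch the PG states while the backfill runs.
A minimal sketch (standard PG state names; the exact output columns differ
between releases):

 "ceph pg stat"
 "ceph pg dump pgs_brief | grep -E 'backfill|scrub'"

PGs that are backfilling or backfill_wait are skipped by the scrubber, so any
scrubbing or deep-scrubbing states you see belong to PGs that are not being
moved.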
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Issue upgrading 17.2.0 to 17.2.5

2023-03-05 Thread Eugen Block

Hi,

can you paste the exact command of your upgrade attempt? It looks like
„stop“ is supposed to be the image name. An upgrade usually starts
with the MGR, then the MONs and then the OSDs; does ‚ceph versions‘ reflect
that some of the OSDs were upgraded successfully? Do you have logs
from the failing OSDs? For example, the cephadm.log on the host where
an OSD upgrade failed, and the log of the active MGR at that time, could
help figure this out.

Also, what is the current ceph status? And please add the output of ‚ceph orch upgrade status‘.
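
For completeness, the outputs being asked for above can be gathered with (a
minimal sketch, assuming a cephadm-managed cluster):

 "ceph -s"
 "ceph versions"
 "ceph orch upgrade status"

and the cephadm log on the affected host is normally /var/log/ceph/cephadm.log.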

Regards
Eugen

Quoting aella...@gmail.com:

I initially ran the upgrade fine, but it failed at around 40/100 on an
OSD, so after waiting for a long time I thought I'd try restarting
it and then restarting the upgrade.
I am stuck with the debug error below. I have tested docker pull
from other servers and they don't fail for the ceph images, but on
ceph it does. If I even try to redeploy or add or remove MON daemons,
for example, it comes up with the same error related to the images.


The error that ceph is giving me is:
2023-03-02T07:22:45.063976-0700 mgr.mgr-node.idvkbw [DBG] _run_cephadm : args = []
2023-03-02T07:22:45.070342-0700 mgr.mgr-node.idvkbw [DBG] args: --image stop --no-container-init pull
2023-03-02T07:22:45.081086-0700 mgr.mgr-node.idvkbw [DBG] Running command: which python3
2023-03-02T07:22:45.180052-0700 mgr.mgr-node.idvkbw [DBG] Running command: /usr/bin/python3 /var/lib/ceph/5058e342-dac7-11ec-ada3-01065e90228d/cephadm.059bfc99f5cf36ed881f2494b104711faf4cbf5fc86a9594423cc105cafd9b4e --image stop --no-container-init pull

2023-03-02T07:22:46.500561-0700 mgr.mgr-node.idvkbw [DBG] code: 1
2023-03-02T07:22:46.500787-0700 mgr.mgr-node.idvkbw [DBG] err: Pulling container image stop...

Non-zero exit code 1 from /usr/bin/docker pull stop
/usr/bin/docker: stdout Using default tag: latest
/usr/bin/docker: stderr Error response from daemon: pull access denied for stop, repository does not exist or may require 'docker login': denied: requested access to the resource is denied

ERROR: Failed command: /usr/bin/docker pull stop
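
The log above shows cephadm being asked to pull a container image literally
named "stop", which the registry cannot resolve. A minimal sketch of how to
check the upgrade target and point it at a proper image (assuming the intended
target is 17.2.5; adjust registry and tag to your environment):

 "ceph orch upgrade status"
 "ceph config dump | grep container_image"
 "ceph orch upgrade stop"
 "ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.5"

"ceph orch upgrade start --ceph-version 17.2.5" is the shorthand if the
default registry is reachable.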



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io