Hello!
I want to increase the interval between deep scrubs on all OSDs.
I tried this, but nothing was configured:
ceph config set osd.* osd_deep_scrub_interval 1209600
I have 50 OSDs.. do I have to configure every OSD individually?
thanks for the support!
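A minimal sketch of what likely works here, assuming the goal is one setting for all OSDs: ceph config set takes a section name such as global, mon or osd rather than a glob, so the whole osd section can be targeted at once:

  ceph config set osd osd_deep_scrub_interval 1209600   # 1209600 s = 14 days, for every OSD
  ceph config get osd.0 osd_deep_scrub_interval         # verify on one OSD (osd.0 is a placeholder)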
___
Hello,
Thank you.
I think the steps are the following (a command sketch follows below):
1. Mark the failed OSD out
2. Wait for the data to rebalance and for the cluster to return to an OK status
3. Mark the OSD down
4. Delete the OSD
5. Replace the device with a new one
6. Add the new OSD
Is that correct?
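A minimal command sketch of those steps, assuming a classic (non-cephadm) deployment; the OSD id 12 and the device /dev/sdX are placeholders:

  ceph osd out 12                            # step 1: mark the OSD out
  ceph -s                                    # step 2: wait until the cluster is healthy again
  systemctl stop ceph-osd@12                 # step 3: stop the daemon so it is marked down
  ceph osd purge 12 --yes-i-really-mean-it   # step 4: remove it from CRUSH, auth and the OSD map
  # step 5: physically replace the drive, then:
  ceph-volume lvm create --data /dev/sdX     # step 6: create the replacement OSD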
___
Hello,
First, sorry for my English...
For a few weeks now I have been receiving daily notifications with HEALTH_ERR in my
ceph cluster. The notifications are related to inconsistent PGs and they are always on the same OSD.
I ran a smartctl test on the disk assigned to that OSD and the result is "passed".
Should I replace the disk with a new one?
"namespace": ""
},
"need": "49128'125646582",
"have": "0'0",
"flags": "none",
"clean_regions": "clean_offsets: [], clean_omap: 0, new_object
___
From: Stefan Kooman
Sent: Monday, June 26, 2023 11:27
To: Jorge JP ; ceph-users@ceph.io
Subject: Re: [ceph-users] Possible data damage: 1 pg recovery_unfound, 1 pg
inconsistent
On 6/26/23 08:38, Jorge JP wrote:
> Hello,
>
> After a deep scrub, my cluster shows this error:
>
Hello,
After a deep scrub, my cluster shows this error:
HEALTH_ERR 1/38578006 objects unfound (0.000%); 1 scrub errors; Possible data
damage: 1 pg recovery_unfound, 1 pg inconsistent; Degraded data redundancy:
2/77158878 objects degraded (0.000%), 1 pg degraded
[WRN] OBJECT_UNFOUND: 1/38578006 obj
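A minimal sketch for inspecting and resolving the unfound object, assuming the standard workflow; the PG id 2.18 is a placeholder, and mark_unfound_lost discards data, so it is a last resort:

  ceph pg 2.18 list_unfound              # show which object is unfound and why
  ceph pg 2.18 query                     # see which OSDs were probed for it
  ceph pg 2.18 mark_unfound_lost revert  # or "delete" if no previous version exists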
Lacunza
Sent: Thursday, May 19, 2022 16:34
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Best way to change disk in controller disk without
affect cluster
Hi Jorge,
On 19/5/22 at 9:36, Jorge JP wrote:
> Hello Anthony,
>
> I need to do this because I can't add new SS
tly in the machine and the OSDs are created as BlueStore. They are
used for RBD. We use Proxmox to create KVM machines.
Regards.
From: Anthony D'Atri
Sent: Wednesday, May 18, 2022 19:17
To: Jorge JP
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Best way to change disk in controller disk without affect cluster
Hello,
Do I have to set the same global flags for this operation?
Thanks!
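A minimal sketch, assuming the flags in question are the usual maintenance flags set before briefly taking OSDs offline:

  ceph osd set noout            # stop CRUSH from marking stopped OSDs out
  ceph osd set norebalance      # optionally, also suppress rebalancing
  # ... move the disks and bring the OSDs back up ...
  ceph osd unset norebalance
  ceph osd unset noout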
From: Stefan Kooman
Sent: Wednesday, May 18, 2022 14:13
To: Jorge JP
Subject: Re: [ceph-users] Best way to change disk in controller disk without
affect cluster
On 5/18/22 13:06, Jorge
Hello!
I have a Ceph cluster with 6 nodes and 6 HDD disks in each one. The status of
my cluster is OK and the pool is 45.25% used (95.55 TB of 211.14 TB). I don't
have any problems.
I want to change the position of various disks in the disk controllers of some
nodes and I don't know the right way to do it.
rebooted. But today the server is not connected to the ceph cluster. It only
has the public and private IPs in the same network, but the ports are not
configured.
From: Marc
Sent: Wednesday, October 13, 2021 12:49
To: Jorge JP ; ceph-users@ceph.io
Subject: RE: Cluster down
Hello,
We currently have a ceph cluster in Proxmox, with 5 ceph nodes with the public
and private network correctly configured and without problems. The state of
ceph was optimal.
We had prepared a new server to add to the ceph cluster. We did the first step
of installing Proxmox with the same
From: Robert Sander
Sent: Monday, August 9, 2021 13:40
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Size of cluster
Hi,
On 09.08.21 at 12:56, Jorge JP wrote:
> 15 x 12TB = 180TB
> 8 x 18TB = 144TB
How are these distributed across your nodes and what is the failure
domain?
Hello,
I have a ceph cluster with 5 nodes. I have 23 OSDs with the hdd class
distributed across them. The disk sizes are:
15 x 12TB = 180TB
8 x 18TB = 144TB
Result of executing the "ceph df" command:
--- RAW STORAGE ---
CLASS    SIZE     AVAIL    USED    RAW USED  %RAW USED
hdd      295 TiB  163 TiB  131 T
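For reference, a quick sanity check of those numbers, assuming RAW SIZE is simply the sum of the disks: 15 x 12 TB + 8 x 18 TB = 324 TB, and 324 * 10^12 bytes ≈ 294.7 TiB, which matches the 295 TiB reported above.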
Hello,
I have a ceph cluster with 5 nodes (1 HDD in each node). I want to add 5 more
drives (HDD) to expand my cluster. What is the best strategy for this?
I will add one drive to each node, but is it a good strategy to add one drive,
wait for the data to rebalance onto the new OSD, and only then add the next
one? Or maybe..
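A minimal sketch of one common alternative, assuming you would rather pay the rebalance cost only once: create all the new OSDs with data movement suppressed, then release it:

  ceph osd set norebalance          # suppress rebalancing while adding OSDs
  # ... create the new OSD on each node, e.g. with ceph-volume lvm create ...
  ceph osd unset norebalance
  ceph -s                           # watch recovery until the cluster is healthy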