[ceph-users] How to increment osd_deep_scrub_interval

2024-01-04 Thread Jorge JP
Hello!

I want to increase the deep scrub interval on all OSDs.

I tried this, but the setting was not applied:

ceph config set osd.* osd_deep_scrub_interval 1209600

I have 50 OSDs. Do I have to configure every one of them individually?
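For reference, this is my understanding of what should apply to every OSD (assuming
the centralized config database; please correct me if I am wrong): ceph config set
takes a daemon type or a single daemon id, not a wildcard.

  ceph config set osd osd_deep_scrub_interval 1209600   # applies to all OSD daemons
  ceph config get osd.0 osd_deep_scrub_interval         # spot-check what one OSD actually uses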

Thanks for the support!
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: pgs inconsistent every day on the same OSD

2023-09-26 Thread Jorge JP
Hello,

Thank you.

I think the steps are:


  1.  Mark the OSD out
  2.  Wait for the data to rebalance and the cluster to return to an OK status
  3.  Stop the OSD (it will be marked down)
  4.  Delete the OSD
  5.  Replace the device with the new one
  6.  Add the new OSD

Is that correct? (A command sketch follows below.)
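Something like this is what I have in mind, assuming a non-containerized deployment
and using 12 as a placeholder OSD id:

  ceph osd out 12                            # 1. mark the OSD out
  ceph -s                                    # 2. wait for rebalancing and an OK status
  systemctl stop ceph-osd@12                 # 3. stop the daemon (it will be reported down)
  ceph osd purge 12 --yes-i-really-mean-it   # 4. remove it from the CRUSH map, OSD map and auth
                                             # 5. physically replace the device
  ceph-volume lvm create --data /dev/sdX     # 6. create the new OSD on the replacement disk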
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] pgs inconsistent every day on the same OSD

2023-09-26 Thread Jorge JP
Hello,

First, sorry for my English...

For a few weeks now, I have been receiving daily HEALTH_ERR notifications from my Ceph
cluster. They are related to inconsistent PGs, and they always involve the same OSD.

I ran a smartctl test on the disk assigned to that OSD and the result was "PASSED".

Should I replace the disk with a new one?
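For reference, these are the checks I can run and share; 32.15c below is only a
placeholder for the PG id reported by ceph health detail:

  ceph health detail                                        # which PG is inconsistent and on which OSDs
  rados list-inconsistent-obj 32.15c --format=json-pretty   # what the scrubber found (read errors, checksum mismatches, ...)
  smartctl -a /dev/sdX                                      # full SMART attributes; "PASSED" alone can hide pending/reallocated sectors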

Regards!

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Possible data damage: 1 pg recovery_unfound, 1 pg inconsistent

2023-06-26 Thread Jorge JP
Hello Frank,

Thank you. I ran the following command: ceph pg 32.15c list_unfound

I located the object, but I don't know how to solve the problem.

{
    "num_missing": 1,
    "num_unfound": 1,
    "objects": [
        {
            "oid": {
                "oid": "rbd_data.aedf52e8a44410.021f",
                "key": "",
                "snapid": -2,
                "hash": 358991196,
                "max": 0,
                "pool": 32,
                "namespace": ""
            },
            "need": "49128'125646582",
            "have": "0'0",
            "flags": "none",
            "clean_regions": "clean_offsets: [], clean_omap: 0, new_object: 1",
            "locations": []
        }
    ],
    "more": false
}


Thank you.


From: Frank Schilder
Sent: Monday, June 26, 2023 11:43
To: Jorge JP; Stefan Kooman; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Possible data damage: 1 pg recovery_unfound, 1 pg
inconsistent

I don't think pg repair will work. It looks like a 2(1) replicated pool where 
both OSDs seem to have accepted writes while the other was down and now the PG 
can't decide what is the true latest version.

Using size 2 min-size 1 comes with manual labor. As far as I can tell, you will 
need to figure out what files/objects are affected and either update the 
missing copy or delete the object manually.
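For completeness, a sketch of the usual way out once it is clear the missing copy
cannot be recovered. This is destructive for that object, so only after mapping the
rbd_data prefix back to an image (rbd info shows the block_name_prefix) and accepting
the loss:

  ceph pg 32.15c mark_unfound_lost revert   # roll back to the last version on the surviving copy
  # or, if no previous version exists:
  # ceph pg 32.15c mark_unfound_lost delete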

Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14


From: Jorge JP 
Sent: Monday, June 26, 2023 11:34 AM
To: Stefan Kooman; ceph-users@ceph.io
Subject: [ceph-users] Re: Possible data damage: 1 pg recovery_unfound, 1 pg 
inconsistent

Hello Stefan,

I ran this command yesterday, but the status has not changed. Other PGs with an
"inconsistent" status were repaired after a day, but in this case it does not work.

instructing pg 32.15c on osd.49 to repair

Normally the PG status would change to "repair", but it did not.


From: Stefan Kooman
Sent: Monday, June 26, 2023 11:27
To: Jorge JP; ceph-users@ceph.io
Subject: Re: [ceph-users] Possible data damage: 1 pg recovery_unfound, 1 pg
inconsistent

On 6/26/23 08:38, Jorge JP wrote:
> Hello,
>
> After a deep scrub, my cluster showed this error:
>
> HEALTH_ERR 1/38578006 objects unfound (0.000%); 1 scrub errors; Possible data 
> damage: 1 pg recovery_unfound, 1 pg inconsistent; Degraded data redundancy: 
> 2/77158878 objects degraded (0.000%), 1 pg degraded
> [WRN] OBJECT_UNFOUND: 1/38578006 objects unfound (0.000%)
>  pg 32.15c has 1 unfound objects
> [ERR] OSD_SCRUB_ERRORS: 1 scrub errors
> [ERR] PG_DAMAGED: Possible data damage: 1 pg recovery_unfound, 1 pg 
> inconsistent
>  pg 32.15c is active+recovery_unfound+degraded+inconsistent, acting 
> [49,47], 1 unfound
> [WRN] PG_DEGRADED: Degraded data redundancy: 2/77158878 objects degraded 
> (0.000%), 1 pg degraded
>  pg 32.15c is active+recovery_unfound+degraded+inconsistent, acting 
> [49,47], 1 unfound
>
>
> I have been searching the internet for how to solve this, but I'm confused.
>
> Can anyone help me?

Does "ceph pg repair 32.15c" work for you?

Gr. Stefan
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Possible data damage: 1 pg recovery_unfound, 1 pg inconsistent

2023-06-26 Thread Jorge JP
Hello Stefan,

I ran this command yesterday, but the status has not changed. Other PGs with an
"inconsistent" status were repaired after a day, but in this case it does not work.

instructing pg 32.15c on osd.49 to repair

Normally the PG status would change to "repair", but it did not.
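For reference, this is what I can check to see whether the repair actually ran and
what is blocking it (a sketch):

  ceph pg 32.15c query > pg-32.15c.json   # full PG state, including recovery and scrub details
  ceph pg 32.15c list_unfound             # unfound objects can block recovery/repair until handled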


From: Stefan Kooman
Sent: Monday, June 26, 2023 11:27
To: Jorge JP; ceph-users@ceph.io
Subject: Re: [ceph-users] Possible data damage: 1 pg recovery_unfound, 1 pg
inconsistent

On 6/26/23 08:38, Jorge JP wrote:
> Hello,
>
> After a deep scrub, my cluster showed this error:
>
> HEALTH_ERR 1/38578006 objects unfound (0.000%); 1 scrub errors; Possible data 
> damage: 1 pg recovery_unfound, 1 pg inconsistent; Degraded data redundancy: 
> 2/77158878 objects degraded (0.000%), 1 pg degraded
> [WRN] OBJECT_UNFOUND: 1/38578006 objects unfound (0.000%)
>  pg 32.15c has 1 unfound objects
> [ERR] OSD_SCRUB_ERRORS: 1 scrub errors
> [ERR] PG_DAMAGED: Possible data damage: 1 pg recovery_unfound, 1 pg 
> inconsistent
>  pg 32.15c is active+recovery_unfound+degraded+inconsistent, acting 
> [49,47], 1 unfound
> [WRN] PG_DEGRADED: Degraded data redundancy: 2/77158878 objects degraded 
> (0.000%), 1 pg degraded
>  pg 32.15c is active+recovery_unfound+degraded+inconsistent, acting 
> [49,47], 1 unfound
>
>
> I have been searching the internet for how to solve this, but I'm confused.
>
> Can anyone help me?

Does "ceph pg repair 32.15c" work for you?

Gr. Stefan
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Possible data damage: 1 pg recovery_unfound, 1 pg inconsistent

2023-06-26 Thread Jorge JP
Hello,

After a deep scrub, my cluster showed this error:

HEALTH_ERR 1/38578006 objects unfound (0.000%); 1 scrub errors; Possible data 
damage: 1 pg recovery_unfound, 1 pg inconsistent; Degraded data redundancy: 
2/77158878 objects degraded (0.000%), 1 pg degraded
[WRN] OBJECT_UNFOUND: 1/38578006 objects unfound (0.000%)
pg 32.15c has 1 unfound objects
[ERR] OSD_SCRUB_ERRORS: 1 scrub errors
[ERR] PG_DAMAGED: Possible data damage: 1 pg recovery_unfound, 1 pg inconsistent
pg 32.15c is active+recovery_unfound+degraded+inconsistent, acting [49,47], 
1 unfound
[WRN] PG_DEGRADED: Degraded data redundancy: 2/77158878 objects degraded 
(0.000%), 1 pg degraded
pg 32.15c is active+recovery_unfound+degraded+inconsistent, acting [49,47], 
1 unfound


I have been searching the internet for how to solve this, but I'm confused.

Can anyone help me?

Thank you! (Sorry for my English)
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Best way to change a disk in the disk controller without affecting the cluster

2022-05-19 Thread Jorge JP
Hello,

The problem is simply that this disk model is not detected by the disk controller. I
tested it in other nodes and the model is not detected there either. So I need to change
the position of a SATA disk to free a slot for the SSD. It is not a passthrough or
configuration problem. Thanks!

Cheers

From: Eneko Lacunza
Sent: Thursday, May 19, 2022 16:34
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Best way to change a disk in the disk controller without
affecting the cluster

Hi Jorge,

On 19/5/22 at 9:36, Jorge JP wrote:
> Hello Anthony,
>
> I need to do this because I can't add new SSD disks to the node: they are
> not detected by the disk controller. We have two disk controllers so we can
> have 12 disks.
>
> My idea is to change one drive and test it. If it doesn't work, I only lose 1 drive.
>
> Ceph is installed directly on the machine and the OSDs are created as BlueStore.
> They are used for RBD; we use Proxmox to create KVM machines.

So one of the controllers does not detect SSD disks, but the other does?

This might be a config problem, you may need to mark disk as passthrough
in controller BIOS/CLI utility. Some controllers can't do this; if
that's the case you may have to create a RAID0 on that SSD disk.

What controller(s) model(s)?
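In the meantime it may be worth checking whether the OS sees the drive at all (a
sketch; device names are only placeholders):

  lsblk -o NAME,MODEL,SIZE,TRAN    # does the SSD appear as a block device?
  smartctl --scan                  # devices visible to smartctl, including some behind RAID controllers
  dmesg | grep -iE 'ata|sas|sd'    # detection or link errors logged during hotplug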

Cheers

>
> A greeting.
> 
> From: Anthony D'Atri
> Sent: Wednesday, May 18, 2022 19:17
> To: Jorge JP
> Cc: ceph-users@ceph.io
> Subject: Re: [ceph-users] Re: Best way to change a disk in the disk controller
> without affecting the cluster
>
>
> First question:  why do you want to do this?
>
> There are some deployment scenarios in which moving the drives will Just 
> Work, and others in which it won’t.  If you try, I suggest shutting the 
> system down all the way, exchanging just two drives, then powering back on — 
> and see if all is well before doing all.
>
> On which Ceph release were these OSDs deployed? Containerized? Are you using 
> ceph-disk or ceph-volume? LVM?  Colocated journal/DB/WAL, or on a separate 
> device?
>
> Try `ls -l /var/lib/ceph/someosd` or whatever you have, look for symlinks 
> that reference device paths that may be stale if drives are swapped.
>
>> Hello,
>>
>> Do I have to set any global flag for this operation?
>>
>> Thanks!
>> 
>> From: Stefan Kooman
>> Sent: Wednesday, May 18, 2022 14:13
>> To: Jorge JP
>> Subject: Re: [ceph-users] Best way to change a disk in the disk controller
>> without affecting the cluster
>>
>> On 5/18/22 13:06, Jorge JP wrote:
>>> Hello!
>>>
>>> I have a Ceph cluster with 6 nodes and 6 HDD disks in each one. The status
>>> of my cluster is OK and the pool is at 45.25% (95.55 TB of 211.14 TB). I don't
>>> have any problems.
>>>
>>> I want to change the position of various disks in the disk controller of
>>> some nodes, and I don't know the best way to do it.
>>>
>>>   - Stop the OSD and move the disk to its new position (hotplug).
>>>
>>>   - Reweight the OSD to 0, let the PGs move to other OSDs, then stop the OSD
>>> and change its position.
>>>
>>> I think the first option is OK: the data is not deleted, and when I move the
>>> disk the server will recognize it again and I will be able to start the OSD
>>> without problems.
>> Order of the disks should not matter. First option is fine.
>>
>> Gr. Stefan

Eneko Lacunza
Zuzendari teknikoa | Director técnico
Binovo IT Human Project

Tel. +34 943 569 206 |https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun

https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Best way to change a disk in the disk controller without affecting the cluster

2022-05-19 Thread Jorge JP
Hello Anthony,

I need to do this because I can't add new SSD disks to the node: they are not
detected by the disk controller. We have two disk controllers so we can have 12
disks.

My idea is to change one drive and test it. If it doesn't work, I only lose 1 drive.

Ceph is installed directly on the machine and the OSDs are created as BlueStore. They
are used for RBD; we use Proxmox to create KVM machines.

A greeting.

From: Anthony D'Atri
Sent: Wednesday, May 18, 2022 19:17
To: Jorge JP
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Best way to change a disk in the disk controller without
affecting the cluster


First question:  why do you want to do this?

There are some deployment scenarios in which moving the drives will Just Work, 
and others in which it won’t.  If you try, I suggest shutting the system down 
all the way, exchanging just two drives, then powering back on — and see if all 
is well before doing all.

On which Ceph release were these OSDs deployed? Containerized? Are you using 
ceph-disk or ceph-volume? LVM?  Colocated journal/DB/WAL, or on a separate 
device?

Try `ls -l /var/lib/ceph/someosd` or whatever you have, look for symlinks that 
reference device paths that may be stale if drives are swapped.
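As a concrete example of that check on a ceph-volume/LVM deployment (a sketch; the
exact VG/LV names will differ), the block symlink should point at an LVM logical
volume rather than a raw /dev/sdX path, in which case swapped drive letters after the
move should not matter:

  ls -l /var/lib/ceph/osd/ceph-*/block   # should resolve to LVM paths, not /dev/sdX
  ceph-volume lvm list                   # maps each OSD id to its underlying device(s)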

>
> Hello,
>
> Do I have to set any global flag for this operation?
>
> Thanks!
> 
> From: Stefan Kooman
> Sent: Wednesday, May 18, 2022 14:13
> To: Jorge JP
> Subject: Re: [ceph-users] Best way to change a disk in the disk controller
> without affecting the cluster
>
> On 5/18/22 13:06, Jorge JP wrote:
>> Hello!
>>
>> I have a Ceph cluster with 6 nodes and 6 HDD disks in each one. The status
>> of my cluster is OK and the pool is at 45.25% (95.55 TB of 211.14 TB). I don't
>> have any problems.
>>
>> I want to change the position of various disks in the disk controller of some
>> nodes, and I don't know the best way to do it.
>>
>>  - Stop the OSD and move the disk to its new position (hotplug).
>>
>>  - Reweight the OSD to 0, let the PGs move to other OSDs, then stop the OSD
>> and change its position.
>>
>> I think the first option is OK: the data is not deleted, and when I move the
>> disk the server will recognize it again and I will be able to start the OSD
>> without problems.
>
> Order of the disks should not matter. First option is fine.
>
> Gr. Stefan

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Best way to change a disk in the disk controller without affecting the cluster

2022-05-18 Thread Jorge JP
Hello,

Do I have to set any global flag for this operation?

Thanks!

From: Stefan Kooman
Sent: Wednesday, May 18, 2022 14:13
To: Jorge JP
Subject: Re: [ceph-users] Best way to change a disk in the disk controller without
affecting the cluster

On 5/18/22 13:06, Jorge JP wrote:
> Hello!
>
> I have a Ceph cluster with 6 nodes and 6 HDD disks in each one. The status
> of my cluster is OK and the pool is at 45.25% (95.55 TB of 211.14 TB). I don't
> have any problems.
>
> I want to change the position of various disks in the disk controller of some
> nodes, and I don't know the best way to do it.
>
>   - Stop the OSD and move the disk to its new position (hotplug).
>
>   - Reweight the OSD to 0, let the PGs move to other OSDs, then stop the OSD
> and change its position.
>
> I think the first option is OK: the data is not deleted, and when I move the
> disk the server will recognize it again and I will be able to start the OSD
> without problems.

Order of the disks should not matter. First option is fine.

Gr. Stefan
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Best way to change a disk in the disk controller without affecting the cluster

2022-05-18 Thread Jorge JP
Hello!

I have a Ceph cluster with 6 nodes and 6 HDD disks in each one. The status of my
cluster is OK and the pool is at 45.25% (95.55 TB of 211.14 TB). I don't have any
problems.

I want to change the position of various disks in the disk controller of some nodes,
and I don't know the best way to do it.

 - Stop the OSD and move the disk to its new position (hotplug).

 - Reweight the OSD to 0, let the PGs move to other OSDs, then stop the OSD and change
its position.

I think the first option is OK: the data is not deleted, and when I move the disk the
server will recognize it again and I will be able to start the OSD without problems.

Which option would you choose? Are there any problems with either? (A command sketch
for option 1 is below.)
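For option 1, this is roughly the sequence I have in mind (using 12 as a placeholder
OSD id):

  ceph osd set noout            # avoid rebalancing while the OSD is briefly down
  systemctl stop ceph-osd@12    # stop the daemon, then move the disk to its new slot
  systemctl start ceph-osd@12   # start it again once the disk is reconnected
  ceph osd unset noout          # restore normal recovery behaviour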

Best regards!!
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Cluster down

2021-10-13 Thread Jorge JP
Hello Marc,

To add a node to a Ceph cluster with Proxmox, I first have to install Proxmox, hehe;
that is not the problem.

The configuration file has been reviewed and is correct. I understand your point, but
it is not a configuration problem.

I can understand that the cluster can have problems if a server or the switch ports
are not configured correctly. But this server never became a member of the cluster.

I extracted a part of the log file from when Ceph went down.

A few weeks ago I had a problem with a port configuration: removing MTU 9216 caused
several hypervisors of the Proxmox cluster to reboot. But today's server is not
related to the Ceph cluster. It only has public and private IPs in the same networks,
and the switch ports are not configured.


From: Marc
Sent: Wednesday, October 13, 2021 12:49
To: Jorge JP; ceph-users@ceph.io
Subject: RE: Cluster down

>
> We currently have a ceph cluster in Proxmox, with 5 ceph nodes with the
> public and private network correctly configured and without problems.
> The state of ceph was optimal.
>
> We had prepared a new server to add to the ceph cluster. We did the
> first step of installing Proxmox with the same version. I was at the
> point where I was setting up the network.

I am not using proxmox, just libvirt. But I would say the most important part 
is your ceph cluster. So before doing anything I would make sure to add the 
ceph node first and then install other things.

> For this step, I did was connect by SSH to the new server and copy the
> network configuration of one of the ceph nodes to this new one. Of
> course, changing the ip addresses.

I would not copy at all. Just change the files manually; if you did not edit one file
correctly, or the server reboots before you change the IP addresses, you can get into
all kinds of problems.

> What happened when restarting the network service is that I lost access
> to the cluster. I couldn't access any of the 5 servers that are part of
> the  ceph cluster. Also, 2 of 3 hypervisors
> that we have in the proxmox cluster were restarted directly.

So now you know: you first have to configure networking, then Ceph and then Proxmox.
Take your time adding a server. I guess the main reason you are in the current
situation is that you tried to do it too quickly.

> Why has this happened if the new server is not yet inside the ceph
> cluster on the proxmox cluster and I don't even have the ports
> configured on my switch?

Without logs nobody is able to tell.

> Do you have any idea?
>
> I do not understand, if now I go and take any server and configure an IP
> of the cluster network and even if the ports are not even configured,
> will the cluster knock me down?

Nothing should happen if you install an OS and use ip addresses in the same 
space as your cluster/client network. Do this first.

> I recovered the cluster by phisically removing the cables from the new
> server.

So wipe it, and start over.

> Thanks a lot and sorry for my english...

No worries, your english is much better than my spanish ;)

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Cluster down

2021-10-13 Thread Jorge JP
Hello,

We currently have a ceph cluster in Proxmox, with 5 ceph nodes with the public 
and private network correctly configured and without problems. The state of 
ceph was optimal.

We had prepared a new server to add to the ceph cluster. We did the first step 
of installing Proxmox with the same version. I was at the point where I was 
setting up the network.

For this step, what I did was connect by SSH to the new server and copy the network
configuration of one of the Ceph nodes to this new one, changing the IP addresses, of
course.

On each ceph node, I have 2 ports configured on two different switches 
configured with bond.

In the new Ceph node I did not yet have the ports configured with a bond on the switch
(Cisco). My intention was, once the configuration file was saved, to restart the
network service on the server and then go configure the ports on the switch. I have to
configure the ports as the last step, because the server is configured without a bond,
and if I configure the ports with a bond first I lose access to the server.

What happened when restarting the network service is that I lost access to the 
cluster. I couldn't access any of the 5 servers that are part of the  ceph 
cluster. Also, 2 of 3 hypervisors
that we have in the proxmox cluster were restarted directly.

Why has this happened if the new server is not yet inside the ceph cluster on 
the proxmox cluster and I don't even have the ports configured on my switch?

Do you have any idea?

I do not understand: if I now take any server and configure an IP on the cluster
network, even if the switch ports are not configured, will it knock the cluster down?

I recovered the cluster by physically removing the cables from the new server.

Thanks a lot and sorry for my english...
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Size of cluster

2021-08-09 Thread Jorge JP
Hello, this is my osd tree:

ID   CLASS  WEIGHT     TYPE NAME
 -1         312.14557  root default
 -3          68.97755      host pveceph01
  3    hdd   10.91409          osd.3
 14    hdd   16.37109          osd.14
 15    hdd   16.37109          osd.15
 20    hdd   10.91409          osd.20
 23    hdd   10.91409          osd.23
  0    ssd    3.49309          osd.0
 -5          68.97755      host pveceph02
  4    hdd   10.91409          osd.4
 13    hdd   16.37109          osd.13
 16    hdd   16.37109          osd.16
 21    hdd   10.91409          osd.21
 24    hdd   10.91409          osd.24
  1    ssd    3.49309          osd.1
 -7          68.97755      host pveceph03
  6    hdd   10.91409          osd.6
 12    hdd   16.37109          osd.12
 17    hdd   16.37109          osd.17
 22    hdd   10.91409          osd.22
 25    hdd   10.91409          osd.25
  2    ssd    3.49309          osd.2
-13          52.60646      host pveceph04
  9    hdd   10.91409          osd.9
 11    hdd   16.37109          osd.11
 18    hdd   10.91409          osd.18
 26    hdd   10.91409          osd.26
  5    ssd    3.49309          osd.5
-16          52.60646      host pveceph05
  8    hdd   10.91409          osd.8
 10    hdd   16.37109          osd.10
 19    hdd   10.91409          osd.19
 27    hdd   10.91409          osd.27
  7    ssd    3.49309          osd.7

Sorry, but how do I check the failure domain? I seem to remember that my failure
domain is host.
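For reference, I think this is how to check it, using the POOL-HDD pool from the
ceph df output in this thread:

  ceph osd pool get POOL-HDD crush_rule   # which CRUSH rule the pool uses
  ceph osd crush rule dump                # in that rule, the chooseleaf step's "type" (e.g. "host") is the failure domain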

Regards.


From: Robert Sander
Sent: Monday, August 9, 2021 13:40
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Size of cluster

Hi,

Am 09.08.21 um 12:56 schrieb Jorge JP:

> 15 x 12TB = 180TB
> 8 x 18TB = 144TB

How are these distributed across your nodes and what is the failure
domain? I.e. how will Ceph distribute data among them?

> The raw size of this cluster (HDD) should be 295TB after format but the size 
> of my "primary" pool (2/1) in this moment is:

A pool with a size of 2 and a min_size of 1 will lead to data loss.

> 53.50% (65.49 TiB of 122.41 TiB)
>
> 122,41TiB multiplied by replication of 2 is 244TiB, not 295TiB.
>
> How can use all size of the class?

If you have 3 nodes with each 5x 12TB (60TB) and 2 nodes with each 4x
18TB (72TB) the maximum usable capacity will not be the sum of all
disks. Remember that Ceph tries to evenly distribute the data.
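A rough worked example with the numbers from this thread (approximate, since MAX
AVAIL also depends on the exact CRUSH distribution and the full ratios):

  HDD raw per host (from the osd tree, TiB):  3 x ~65.5  +  2 x ~49.1      ≈ 295
  Theoretical ceiling with size 2:            295 / 2                      ≈ 147
  Pool capacity actually reported:            STORED + MAX AVAIL = 65.49 + 57 ≈ 122

MAX AVAIL is projected from the fullest OSD (and the full ratio), so an uneven
distribution across hosts of different sizes pushes the usable figure below raw/2.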

Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Size of cluster

2021-08-09 Thread Jorge JP
Hello,

I have a Ceph cluster with 5 nodes, with 23 OSDs of the hdd class distributed across
them. The disk sizes are:

15 x 12TB = 180TB
8 x 18TB = 144TB

Result of executing the "ceph df" command:

--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    295 TiB  163 TiB  131 TiB   131 TiB      44.55
ssd     17 TiB   17 TiB  316 GiB   324 GiB       1.81
TOTAL  312 TiB  181 TiB  131 TiB   132 TiB      42.16

--- POOLS ---
POOL                            ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics            1    1   13 MiB        5   39 MiB      0     40 TiB
.rgw.root                        4    4  1.5 KiB        4  768 KiB      0     38 TiB
default.rgw.meta                 6    4  4.7 KiB       12  1.9 MiB      0     38 TiB
rbd                              8  512  1.4 KiB        4  384 KiB      0     38 TiB
default.rgw.buckets.data        12   32   10 GiB    2.61k   31 GiB   0.03     38 TiB
default.rgw.log                 13  128   35 KiB      207    6 MiB      0     38 TiB
default.rgw.control             14    4      0 B        8      0 B      0     38 TiB
default.rgw.buckets.non-ec      15  128     27 B        1  192 KiB      0     38 TiB
default.rgw.buckets.index       18    4  1.1 MiB        2  3.3 MiB      0    5.4 TiB
default.rgw.buckets.ssd.index   21    8      0 B        0      0 B      0    5.4 TiB
default.rgw.buckets.ssd.data    22    8      0 B        0      0 B      0    5.4 TiB
default.rgw.buckets.ssd.non-ec  23    8      0 B        0      0 B      0    5.4 TiB
POOL-HDD                        32  512   65 TiB   17.28M  131 TiB  53.51     57 TiB
POOL_SSD_2_1                    34   32  157 GiB  296.94k  316 GiB   1.86    8.1 TiB

The raw size of this cluster (HDD) should be 295 TiB after formatting, but the size of
my "primary" pool (2/1) at this moment is:

53.50% (65.49 TiB of 122.41 TiB)

122.41 TiB multiplied by a replication factor of 2 is about 245 TiB, not 295 TiB.

How can I use the full capacity of this class?

Thanks a lot.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Strategy for adding new OSDs

2021-06-15 Thread Jorge JP
Hello,

I have a Ceph cluster with 5 nodes (1 HDD in each node). I want to add 5 more drives
(HDD) to expand my cluster. What is the best strategy for this?

I will add one drive to each node, but is it a good strategy to add one drive, wait
for the data to rebalance onto the new OSD, and only then add the next one? Or should
I add all 5 drives without waiting for rebalancing, and let Ceph rebalance the data
onto all the new OSDs at once? (See the sketch below.)
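What I have in mind for the "all at once" approach is something like this (assuming
OSDs created with ceph-volume; device paths are placeholders):

  ceph osd set norebalance                  # hold data movement while the new OSDs are created
  ceph config set osd osd_max_backfills 1   # keep backfill gentle once it starts
  ceph-volume lvm create --data /dev/sdX    # run on each node for its new drive
  ceph osd unset norebalance                # then let Ceph rebalance onto all new OSDs in one pass
  ceph -s                                   # watch backfill progress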

Thank you.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io