[ceph-users] Re: What is the best way to use disks with different sizes

2023-07-04 Thread Anthony D'Atri
You were very clear. Create one pool containing all drives. You can deploy more than one OSD on an NVMe drive, each using a fraction of its capacity. Not all drives have to have the same number of OSDs. If you deploy 2x OSDs on the 7.6TB drives and 1x OSD on the 3.8TB drives, you will have 15 OSDs total, each 3.8T
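One way to carve two OSDs out of a single NVMe device is ceph-volume's batch mode with --osds-per-device. A minimal sketch, assuming /dev/nvme3n1 is the 7.6TB device and /dev/nvme0n1 through /dev/nvme2n1 are the 3.8TB devices (device paths will differ on your hardware, and Proxmox's own pveceph tooling may wrap this differently):

    # Split the large NVMe into two OSDs, each ~3.8TB
    ceph-volume lvm batch --osds-per-device 2 /dev/nvme3n1

    # The 3.8TB devices get one OSD each (the default)
    ceph-volume lvm batch /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1

Run per node; the result is five roughly equal-sized OSDs per host feeding the single pool.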

[ceph-users] Re: What is the best way to use disks with different sizes

2023-07-04 Thread wodel youchi
Hi and thanks, Maybe I was not able to express myself correctly. I have 3 nodes, and I will be using 3 replicas for the data, which will be VM disks. *Each node has 04 disks*: - 03 NVMe disks of 3.8TB - and 01 NVMe disk of 7.6TB. All three nodes are equivalent. As mentioned above, one pool

[ceph-users] Re: What is the best way to use disks with different sizes

2023-07-04 Thread Anthony D'Atri
There aren’t enough drives to split into multiple pools. Deploy 1 OSD on each of the 3.8T devices and 2 OSDs on each of the 7.6s. Or, alternatively, 2 and 4. > On Jul 4, 2023, at 3:44 AM, Eneko Lacunza wrote: > > Hi, > > On 3/7/23 at 17:27, wodel youchi wrote: >> I will be deploying a Pr
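For clusters managed by cephadm (Proxmox uses its own tooling, so this may not apply directly there), the same layout can be expressed declaratively with OSD service specs using size filters and osds_per_device. A sketch under the assumption that the size boundaries below correctly separate the 3.8TB and 7.6TB devices; the service_id names are illustrative only:

    service_type: osd
    service_id: big-nvme
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        size: '7TB:'        # match only the 7.6TB devices
      osds_per_device: 2    # two ~3.8TB OSDs per device
    ---
    service_type: osd
    service_id: small-nvme
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        size: ':4TB'        # match the 3.8TB devices
      osds_per_device: 1

Either way, all resulting OSDs end up the same weight, so a single replicated pool spreads evenly across them.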

[ceph-users] Re: What is the best way to use disks with different sizes

2023-07-04 Thread Eneko Lacunza
Hi, On 3/7/23 at 17:27, wodel youchi wrote: I will be deploying a Proxmox HCI cluster with 3 nodes. Each node has 3 NVMe disks of 3.8TB each and a 4th NVMe disk of 7.6TB. Technically I need one pool. Is it good practice to use all disks to create the one pool I need, or is it better to cr