[ceph-users] Performance impact of Heterogeneous environment

2024-01-17 Thread Tino Todino
Hi folks. I had a quick search but found nothing concrete on this so thought I would ask. We currently have a 4-host CEPH cluster with an NVMe pool (1 OSD per host) and an HDD pool (1 OSD per host). Both OSDs use a separate NVMe for DB/WAL. These machines are identical (homogeneous) and are
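
For a mixed NVMe/HDD cluster like the one described, the usual way to keep the two pools out of each other's way is to pin each pool to a CRUSH rule that filters on device class. A minimal sketch under that assumption; the pool and rule names below are placeholders, not taken from the original post:

    # List the device classes Ceph has auto-detected for the OSDs
    ceph osd crush class ls
    ceph osd df tree

    # One replicated CRUSH rule per device class, host as the failure domain
    ceph osd crush rule create-replicated nvme_rule default host nvme
    ceph osd crush rule create-replicated hdd_rule default host hdd

    # Bind each pool to its rule so NVMe and HDD data never share OSDs
    ceph osd pool set nvme_pool crush_rule nvme_rule
    ceph osd pool set hdd_pool crush_rule hdd_rule

With rules like these in place, a slower host or disk added later only affects the pool whose device class it carries.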

[ceph-users] CEPH Version choice

2023-05-15 Thread Tino Todino
Hi all, I've been reading through this email list for a while now, but one thing that I'm curious about is why a lot of installations out there aren't upgraded to the latest version of CEPH (Quincy). What are the main reasons for not upgrading to the latest and greatest? Thanks. Tino This
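
For clusters deployed with cephadm, the jump to a newer release is largely automated by the orchestrator; a hedged sketch of the usual sequence, with the target version chosen purely for illustration:

    # See which release each daemon is currently running
    ceph versions

    # Ask the orchestrator whether the target release is reachable
    ceph orch upgrade check --ceph-version 17.2.6

    # Start the rolling upgrade and follow its progress
    ceph orch upgrade start --ceph-version 17.2.6
    ceph orch upgrade status
    ceph -s

Clusters deployed by other means (distro packages, Rook, Proxmox's own tooling) follow their platform's upgrade path instead.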

[ceph-users] 5 host setup with NVMe's and HDDs

2023-03-29 Thread Tino Todino
Hi folks. Just looking for some up-to-date advice please from the collective on how best to set up CEPH on 5 Proxmox hosts, each configured with the following: AMD Ryzen 7 5800X CPU, 64GB RAM, 2x SSD (as ZFS boot disk for Proxmox), 1x 500GB NVMe for DB/WAL, 1x 1TB NVMe as an OSD, 1x 16TB SATA HDD as
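
Placing the HDD OSD's DB/WAL on the 500GB NVMe happens when the OSD is created. A rough sketch under the assumption that Proxmox's pveceph wrapper (or plain ceph-volume) is used; the device paths are placeholders:

    # Proxmox wrapper: HDD OSD with its RocksDB/WAL carved out of the NVMe
    pveceph osd create /dev/sdb --db_dev /dev/nvme0n1

    # The NVMe OSD keeps its DB/WAL on the same device, so no extra flag
    pveceph osd create /dev/nvme1n1

    # Equivalent plain Ceph tooling, bypassing the Proxmox wrapper
    ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1

The separate DB/WAL device mainly pays off for the HDD OSD; moving the NVMe OSD's DB onto another NVMe of similar speed gains little.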

[ceph-users] Re: Inherited CEPH nightmare

2022-10-11 Thread Tino Todino
TOTAL: 11 TiB SIZE, 7.2 TiB RAW USE, 7.2 TiB DATA, 196 MiB OMAP, 28 GiB META, 3.7 TiB AVAIL, 66.00 %USE; MIN/MAX VAR: 0.94/1.06, STDDEV: 2.80. -Original Message- From: Janne Johansson Sent: 10 October 2022 07:52 To: Tino Todino Cc: ceph-users@ceph.io Subject: Re: [ceph-users] Inherited CEPH nightmare
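
Those figures are the summary row of ceph osd df: roughly 7.2 TiB of an 11 TiB raw capacity in use (66% full), with per-OSD utilisation varying between 0.94x and 1.06x of the mean. A few read-only commands commonly used to dig further into that kind of usage picture, listed only as a sketch:

    # Per-OSD utilisation laid out along the CRUSH tree
    ceph osd df tree

    # Per-pool stored vs. available capacity
    ceph df detail

    # Whether the automatic balancer is enabled and what it last did
    ceph balancer status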

[ceph-users] Inherited CEPH nightmare

2022-10-07 Thread Tino Todino
(as there is only about 7 TB's worth) and then set up CEPH from scratch following some kind of best practice guide. Anyway, any help would be gratefully received. Thanks for reading. Kind regards Tino Todino
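
Before choosing between tuning the inherited cluster and rebuilding it, a quick read-only inventory usually helps; a short sketch of checks that are safe to run on a live cluster:

    ceph -s                  # overall health, mon/mgr/osd counts, recovery activity
    ceph health detail       # expands any warnings into actionable detail
    ceph osd tree            # CRUSH layout, device classes, down or out OSDs
    ceph osd pool ls detail  # replica size, pg_num and flags for each pool
    ceph versions            # confirms all daemons run the same release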