Hi folks.
I had a quick search but found nothing concrete on this, so I thought I would ask.
We currently have a 4-host Ceph cluster with an NVMe pool (1 OSD per host) and
an HDD pool (1 OSD per host). Both OSDs use a separate NVMe for DB/WAL. These
machines are identical (homogeneous) and are
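For anyone replicating this layout, a rough sketch of how a BlueStore OSD with its DB/WAL carved off onto a shared NVMe is typically created; the device paths below are placeholders, not the poster's actual devices:

```shell
# Create a BlueStore OSD on the HDD, with RocksDB + WAL on a partition
# of the shared NVMe (device names are illustrative placeholders).
# Requires a running Ceph cluster and admin keyring on this host.
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
```

When --block.db is given and no separate --block.wal is specified, the WAL lives on the DB device, which is the usual arrangement for a single fast device per OSD.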
Hi all,
I've been reading this mailing list for a while now, and one thing I'm
curious about is why so many installations out there haven't been upgraded to
the latest version of Ceph (Quincy).
What are the main reasons for not upgrading to the latest and greatest?
Thanks.
Tino
Hi folks.
Just looking for some up-to-date advice, please, from the collective on how
best to set up Ceph on 5 Proxmox hosts, each configured with the following:
AMD Ryzen 7 5800X CPU
64GB RAM
2x SSD (as ZFS boot disk for Proxmox)
1x 500GB NVMe for DB/WAL
1x 1TB NVMe as an OSD
1x 16TB SATA HDD as
TOTAL  11 TiB  7.2 TiB  7.2 TiB  196 MiB  28 GiB  3.7 TiB  66.00
MIN/MAX VAR: 0.94/1.06  STDDEV: 2.80
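For reference, the MIN/MAX VAR and STDDEV figures in that `ceph osd df` summary are derived from per-OSD utilization: VAR is each OSD's utilization divided by the cluster-wide mean, and STDDEV is the standard deviation of the utilization percentages. A minimal sketch with hypothetical per-OSD numbers (not the actual values from this cluster):

```python
# Hypothetical per-OSD utilization percentages, chosen for illustration only.
utils = [62.0, 64.0, 66.0, 68.0, 70.0]

mean = sum(utils) / len(utils)

# VAR column in `ceph osd df`: each OSD's utilization relative to the mean.
variances = [u / mean for u in utils]
print(f"MIN/MAX VAR: {min(variances):.2f}/{max(variances):.2f}")

# STDDEV: population standard deviation of the utilization percentages.
stddev = (sum((u - mean) ** 2 for u in utils) / len(utils)) ** 0.5
print(f"STDDEV: {stddev:.2f}")
```

With these illustrative numbers the VAR ratios happen to come out at 0.94/1.06, matching the summary above, though the STDDEV differs; a cluster with VAR close to 1.00 on every OSD is evenly balanced.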
-----Original Message-----
From: Janne Johansson
Sent: 10 October 2022 07:52
To: Tino Todino
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Inherited CEPH nightmare
(as there is only about 7 TB worth)
and then set up Ceph from scratch following some kind of best-practice guide.
Anyway, any help would be gratefully received.
Thanks for reading.
Kind regards
Tino Todino
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io