[ceph-users] Re: is it possible to remove the db+wal from an external device (nvme)

2021-09-30 Thread Victor Hooi
Hi, I'm curious - how did you tell that the separate WAL+DB volume was slowing things down? I assume you did some benchmarking - is there any chance you'd be willing to share results? (Or anybody else that's been in a similar situation). What sorts of devices are you using for the WAL+DB, versus
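
For reference, the thread title's underlying question - moving the DB/WAL off the external NVMe and back onto the OSD's primary device - can be done on recent releases with ceph-volume's migrate subcommand. A rough sketch only (the OSD id, fsid, and target VG/LV below are placeholders for your own cluster, and you should confirm your release actually carries "lvm migrate" before relying on it):

    # stop the OSD, then fold the external db+wal back into the block device
    systemctl stop ceph-osd@2
    ceph-volume lvm migrate --osd-id 2 --osd-fsid <osd-fsid> --from db wal --target <ceph-vg>/<block-lv>
    systemctl start ceph-osd@2

    # a crude before/after comparison for the benchmarking question (pool name is a placeholder)
    rados bench -p testpool 60 write --no-cleanup
    rados bench -p testpool 60 seq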

[ceph-users] Re: [Ceph-maintainers] v16.2.0 Pacific released

2021-04-01 Thread Victor Hooi
Hi, This is awesome news! =). I did hear mention before about Crimson and Pacific - does anybody know what the current state of things is? I see there's a doc page for it here - https://docs.ceph.com/en/latest/dev/crimson/crimson/ Are we able to use Crimson yet in Pacific? (As in, do we need
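
For anyone wanting to experiment, the developer page linked above describes running crimson-osd in a vstart development cluster built from source; something along these lines, noting that the --crimson flag and its prerequisites depend on the branch you have checked out:

    # from the build directory of a Ceph source checkout (flags per the crimson dev doc)
    MGR=1 MON=1 OSD=3 MDS=0 RGW=0 ../src/vstart.sh -n -x --without-dashboard --crimson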

[ceph-users] Re: Advice on sizing WAL/DB cluster for Optane and SATA SSD disks.

2020-03-15 Thread Victor Hooi
depend on the size of the data partition :-) > On 14 March 2020 at 22:50:37 GMT+03:00, Victor Hooi wrote: >> Hi, >> I'm building a 4-node Proxmox cluster, with Ceph for the VM disk storage. >> On each node, I have:
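
To make the sizing point concrete with the numbers from the original post (assuming the Optane is split evenly across the six OSDs on each node):

    960 GB Optane / 6 OSDs  ≈ 160 GB of DB+WAL per OSD
    160 GB / 1.92 TB        ≈ 8.3% of each OSD's capacity

which sits comfortably above the roughly 4% of block size that the BlueStore docs have suggested as a lower bound for block.db.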

[ceph-users] Advice on sizing WAL/DB cluster for Optane and SATA SSD disks.

2020-03-14 Thread Victor Hooi
Hi, I'm building a 4-node Proxmox cluster, with Ceph for the VM disk storage. On each node, I have: - 1 x 512GB M.2 SSD (for Proxmox/boot volume) - 1 x 960GB Intel Optane 905P (for Ceph WAL/DB) - 6 x 1.92TB Intel S4610 SATA SSD (for Ceph OSD) I'm using the Proxmox "pveceph" command
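
A sketch of how the OSDs might be created in this layout with pveceph, putting each OSD's DB on the Optane (device paths and the size value are placeholders, and the exact option names and units should be checked against "pveceph help osd create" for your Proxmox version):

    # one OSD per SATA SSD, with a DB slice carved from the Optane
    # db_size is assumed to be in GiB here - verify against the pveceph man page
    pveceph osd create /dev/sdb -db_dev /dev/nvme0n1 -db_size 160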