Hi,
I'm curious - how did you tell that the separate WAL+DB volume was slowing
things down? I assume you did some benchmarking - is there any chance you'd
be willing to share the results? (Or from anybody else who's been in a
similar situation.)
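For what it's worth, one common way to compare device classes for WAL/DB duty (not necessarily what was done here) is a single-job 4 KiB synchronous random-write test with fio, since that approximates BlueStore's WAL commit pattern. This is only a sketch; the device paths are placeholders, and running it against a device is destructive to any data on it:

```shell
# Hypothetical example: measure sync 4k write latency/IOPS on a candidate
# WAL/DB device. Repeat with --filename pointed at an OSD-class device
# to compare. DESTRUCTIVE: the target device must not hold live data.
fio --name=waltest --filename=/dev/nvme0n1 \
    --direct=1 --sync=1 --rw=randwrite --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting
```

If the sync-write latency of the "fast" WAL/DB device isn't clearly better than the OSD devices themselves, a separate WAL/DB volume can end up being a bottleneck rather than a benefit.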
What sorts of devices are you using for the WAL+DB, versus
Hi,
This is awesome news! =).
I've heard Crimson mentioned before in connection with Pacific - does
anybody know what the current state of things is?
I see there's a doc page for it here -
https://docs.ceph.com/en/latest/dev/crimson/crimson/
Are we able to use Crimson yet in Pacific? (As in, do we need
depend on the size of the data partition :-)
>
> On 14 March 2020 at 22:50:37 GMT+03:00, Victor Hooi
> wrote:
Hi,
I'm building a 4-node Proxmox cluster, with Ceph for the VM disk storage.
On each node, I have:
- 1 x 512GB M.2 SSD (for Proxmox/boot volume)
- 1 x 960GB Intel Optane 905P (for Ceph WAL/DB)
- 6 x 1.92TB Intel S4610 SATA SSD (for Ceph OSD)
I'm using the Proxmox "pveceph" command
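[The message breaks off here. For context, OSD creation with a shared DB device is typically done per disk; the following is only a sketch, with placeholder device paths and an illustrative size, not the exact commands from this cluster:]

```shell
# Hypothetical sketch: create one OSD per SATA SSD, placing its
# RocksDB DB (which also carries the WAL) on the shared Optane device.
# /dev/sdb and /dev/nvme0n1 are placeholders; db_size is in GiB.
pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_size 148

# Repeated for each of the six S4610 SSDs, the 960GB Optane device
# is split into six DB partitions of roughly this size.
```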