From: Paul Emmerich [mailto:paul.emmer...@croit.io]
Sent: Thursday, January 16, 2020 3:23 PM
To: Bastiaan Visser
Cc: Dominic Hilsbos; Ceph Users
Subject: Re: [ceph-users] [External Email] RE: Beginner questions
Discussing DB size requirements without knowing the exact cluster
requirements doesn't work.
Here are some real-world examples:
cluster1: CephFS, mostly large files, replicated x3
0.2% used for metadata
cluster2: radosgw, a mix of replicated and erasure-coded pools, mixed file sizes
(lots of tiny files,
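
To turn ratios like those into a capacity plan, here is a rough back-of-the-envelope
sketch in Python (not from the thread; the function name and the example inputs are
illustrative assumptions only):

    # Rough estimate of DB space for one OSD, given a measured metadata
    # fraction like the real-world ratios above. Illustrative only.
    def estimated_db_gib(osd_data_tib, metadata_fraction):
        # 1 TiB = 1024 GiB; DB needs are roughly data_size * metadata_fraction
        return osd_data_tib * 1024 * metadata_fraction

    # Example: a 12 TiB OSD on a CephFS cluster with mostly large files (~0.2%)
    print(estimated_db_gib(12, 0.002))   # ~24.6 GiB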
Dave made a good point: WAL + DB might end up a little over 60 GB, so I would
probably go with ~70 GB partitions/LVs per OSD in your case (if the NVMe
drive is smart enough to spread the writes over all available capacity; most
recent NVMe's are). I have not yet seen a WAL larger than, or even close to,
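
If you want to see how big the DB and WAL actually get on an existing cluster
before settling on a partition size, something like the sketch below can help.
It assumes you run it on an OSD host with access to the admin sockets, and that
the bluefs section of `perf dump` exposes the *_used_bytes / *_total_bytes
counters (as in Nautilus-era releases); the script itself is illustrative, not
a Ceph tool.

    #!/usr/bin/env python3
    # Illustrative sketch: report BlueFS DB/WAL usage for the OSDs on this host.
    import glob
    import json
    import subprocess

    GiB = 1024 ** 3

    for sock in sorted(glob.glob("/var/run/ceph/ceph-osd.*.asok")):
        osd_id = sock.split(".")[-2]          # ".../ceph-osd.3.asok" -> "3"
        out = subprocess.check_output(
            ["ceph", "daemon", "osd." + osd_id, "perf", "dump"])
        bluefs = json.loads(out)["bluefs"]
        print("osd.%s: DB %.1f/%.1f GiB used, WAL %.1f/%.1f GiB used" % (
            osd_id,
            bluefs["db_used_bytes"] / GiB, bluefs["db_total_bytes"] / GiB,
            bluefs["wal_used_bytes"] / GiB, bluefs["wal_total_bytes"] / GiB))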
Dave;
I don't like reading inline responses, so...
I have zero experience with EC pools, so I won't pretend to give advice in that
area.
I would think that a small NVMe for the DB would be better than nothing, but I
don't know.
Once I got the hang of building clusters, it was relatively easy to wip
Dominic,
We ended up with a 1.6TB PCIe NVMe in each node. For 8 drives this
worked out to a DB size of something like 163GB per OSD. Allowing for
expansion to 12 drives brings it down to 124GB. So maybe just put the
WALs on NVMe and leave the DBs on the platters?
Understood that we will wan
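
For what it's worth, the even-split arithmetic above looks like this (a rough
sketch; the usable-capacity figure and any headroom left unallocated are
assumptions, which is why exact per-OSD numbers can come out a bit differently):

    # Splitting one shared NVMe device evenly into per-OSD DB partitions.
    def db_partition_gib(nvme_usable_gib, osds_per_node):
        return nvme_usable_gib / osds_per_node

    nvme_gib = 1.6e12 / 2 ** 30            # a 1.6 TB (decimal) device is ~1490 GiB
    print(db_partition_gib(nvme_gib, 8))   # ~186 GiB with 8 OSDs
    print(db_partition_gib(nvme_gib, 12))  # ~124 GiB after expanding to 12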
Paul, Bastiaan,
Thank you for your responses and for alleviating my concerns about
Nautilus. The good news is that I can still easily move up to Debian
10. BTW, I assume that this is still with the 4.19 kernel?
Also, I'd like to inject additional customizations into my Debian
configs via c