On 1/31/24 18:53, Gregory Farnum wrote:
The docs recommend a fast SSD pool for the CephFS *metadata*, but the
default data pool can be more flexible. The backtraces are relatively
small: each one is an encoded version of the path an inode is located
at, plus the RADOS hobject, which probably accounts for more of the
space usage. So it should fit fine in your SSD pool, but if all the
CephFS file data is living in the hard drive pool I'd just set it up there.

Right, I wrote *ssd* because our replicated pool is on SSDs, and the docs say:

"....If erasure-coded pools are planned for file system data, it is best to configure the default as a *replicated pool* to improve small-object write and read performance when updating backtraces..."

so of course a replicated HDD pool would also fit there, I assume.
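
(Side note, mostly to check my own understanding: the backtrace seems to live as a small "parent" xattr on a file's first object in the default data pool, so it should be possible to look at one directly, roughly like this; the pool name and the object/inode name here are just made-up examples:

    rados -p cephfs_default_data getxattr 10000000000.00000000 parent > bt.bin
    ceph-dencoder type inode_backtrace_t import bt.bin decode dump_json

If that is right, then for a file whose data lives in another pool the footprint in the default pool is essentially an empty object plus that xattr.)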

Well, we plan to use the 85 TiB replicated SSD pool for user homes (as the default pool) and the 3 PiB EC HDD pool for data, so most of the data will eventually end up on the EC pool. I was therefore wondering whether storing all the inode backtrace information in the default pool will substantially fill up the pool where the user homes are also going to live.
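
For reference, what we have in mind is roughly the following; the pool names and the path are just placeholders, not our actual setup:

    # replicated SSD pool as the default data pool at creation time
    ceph fs new cephfs cephfs_metadata ssd_rep_pool

    # the EC pool needs overwrites enabled before it can hold CephFS data
    ceph osd pool set hdd_ec_pool allow_ec_overwrites true
    ceph fs add_data_pool cephfs hdd_ec_pool

    # user homes stay on the default (SSD) pool; bulk data is directed
    # to the EC pool via a directory layout
    setfattr -n ceph.dir.layout.pool -v hdd_ec_pool /mnt/cephfs/data

If I understand you correctly, every file whose data ends up in hdd_ec_pool would still leave a small backtrace object behind in ssd_rep_pool, and that is the space I am wondering about.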

Thanks
  Dietmar


On Tue, Jan 30, 2024 at 2:03 AM Dietmar Rieder
<dietmar.rie...@i-med.ac.at> wrote:

Hello,

I have a question regarding the default data pool of a CephFS file system.

According to the docs it is recommended to use a fast replicated SSD
pool as the default pool for CephFS. My question is: what are the space
requirements for storing the inode backtrace information?

Let's say I have an 85 TiB replicated SSD pool (hot data) and a 3 PiB EC
data pool (cold data).

Does it make sense to create a third pool as the default pool that only
holds the inode backtrace information (and if so, what would be a good
size), or is it OK to use the SSD pool as the default pool?
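
To make the first option concrete, I was picturing something like this (pool names and the PG count are only placeholders):

    # small dedicated replicated pool used purely as the default data pool
    ceph osd pool create cephfs_default_rep 32 32 replicated
    ceph fs new cephfs cephfs_metadata cephfs_default_rep

with the SSD and EC pools then attached via "ceph fs add_data_pool" and all directories pointed at them through file layouts, so that the default pool would end up holding nothing but the backtrace objects.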

Thanks
     Dietmar


_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
