[ceph-users] Re: cephadm automatic sizing of WAL/DB on SSD

2023-03-29 Thread Calhoun, Patrick
I think that the backported fix for this issue made it into ceph v16.2.11.

https://ceph.io/en/news/blog/2023/v16-2-11-pacific-released/


"ceph-volume: Pacific backports (pr#47413, Guillaume Abrioux, Zack Cerza, 
Arthur Outhenin-Chalandre)"

https://github.com/ceph/ceph/pull/47413/commits/4252cc44211f0ccebf388374744eaa26b32854d3
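For anyone wanting to pick up the fix, a rough sketch with cephadm (assuming a 
cephadm-managed cluster; adjust the target version as needed):

  # check which releases the daemons are currently running
  ceph versions

  # upgrade to the release carrying the backport, then watch progress
  ceph orch upgrade start --ceph-version 16.2.11
  ceph orch upgrade status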

-Patrick

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: How to make CephFS a tiered file system?

2021-07-20 Thread Calhoun, Patrick
At a glance, it looks to me like cache-pools ( 
https://docs.ceph.com/en/latest/dev/cache-pool/ ) can be somewhat HSM-like, but 
on an object level rather than a file level.

Side-question:
Is that cache pool approach different from the seldom-recommended cache-tiering 
( https://docs.ceph.com/en/latest/rados/operations/cache-tiering/ ), with 
respect to the "words of caution" and limited number of known-good workloads?
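(For reference, the cache-tiering setup I have in mind is the classic RADOS 
overlay arrangement, roughly as sketched below; the pool names are hypothetical:)

  # 'hot-pool' (SSD) fronting 'cold-pool' (HDD/EC)
  ceph osd tier add cold-pool hot-pool
  ceph osd tier cache-mode hot-pool writeback
  ceph osd tier set-overlay cold-pool hot-pool
  ceph osd pool set hot-pool hit_set_type bloom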

-Patrick

From: Brady Deetz 
Sent: Tuesday, July 20, 2021 11:32 AM
To: huxia...@horebdata.cn 
Cc: ceph-users 
Subject: [ceph-users] Re: How to make CephFS a tiered file system?

What you are proposing is called a hierarchical storage manager (HSM).
CephFS does not have a built-in HSM. It would be amazing if it did, though.
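The closest built-in building blocks today are multiple data pools plus 
directory layouts; a minimal sketch (the pool, profile, filesystem, and 
mount-point names below are hypothetical, and an erasure-code profile is 
assumed to already exist):

  # add an EC pool as an extra CephFS data pool (EC needs overwrites enabled)
  ceph osd pool create cephfs_data_ec 128 128 erasure myprofile
  ceph osd pool set cephfs_data_ec allow_ec_overwrites true
  ceph fs add_data_pool myfs cephfs_data_ec

  # new files created under this directory land in the EC pool
  setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs/archive

Moving files that already exist still means rewriting them, which is exactly 
the part an HSM would automate.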

On Mon, Jul 19, 2021, 4:28 PM huxia...@horebdata.cn 
wrote:

> Dear Cephers,
>
> I have a requirement to use CephFS as a tiered file system, i.e. the data
> would first be stored in an all-flash pool (using SSD OSDs), and then
> automatically moved to an EC-coded pool (using HDD OSDs) based on a
> threshold on file creation time (or access time). The reason for such a
> file system is that files are created and most likely accessed within the
> first 6 months or 1 year; after that period, those files are much less
> likely to be accessed and could thus be moved to a slower, cheaper pool.
>
> Does CephFS already support such a tiering feature? If yes, how can it be
> implemented with an all-SSD pool and an EC-coded HDD pool?
>
> Any suggestions, ideas, or comments are highly appreciated,
>
> best regards,
>
> samuel
>
>
>
> huxia...@horebdata.cn
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: How to make CephFS a tiered file system?

2021-07-22 Thread Calhoun, Patrick
Do I understand correctly that to relocate a single file to a different pool, 
the process would be:

setfattr -n ceph.dir.layout.pool -v NewPool original_file_name
cp -a original_file_name .hidden_file_name && mv -f .hidden_file_name original_file_name
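
(Or, since ceph.dir.layout.pool applies to directories rather than files, 
perhaps the per-file variant looks more like the sketch below, assuming 
ceph.file.layout.pool can only be set while the target file is still empty:)

  # sketch only: pin the replacement file's layout before writing any data,
  # then copy the contents into it and swap it into place
  touch .hidden_file_name
  setfattr -n ceph.file.layout.pool -v NewPool .hidden_file_name
  cp -a original_file_name .hidden_file_name
  mv -f .hidden_file_name original_file_name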

-Patrick

From: Patrick Donnelly 
Sent: Thursday, July 22, 2021 5:03 PM
To: huxia...@horebdata.cn 
Cc: ceph-users 
Subject: [ceph-users] Re: How to make CephFS a tiered file system?

On Wed, Jul 21, 2021 at 1:49 PM huxia...@horebdata.cn
 wrote:
>
> Dear Patrick,
>
> Thanks a lot for pointing out the HSM ticket. We will see whether we have the
> resources to do something with the ticket.
>
> I am thinking of a temporary solution for HSM using CephFS client commands.
> The following command
>
>     setfattr -n ceph.dir.layout.pool -v NewPool Folder
>
> will direct data written under the folder Folder to NewPool.
>
> If I understand correctly, new files written to Folder will be directed to
> NewPool, but what about the old files that already existed in Folder before
> executing the above command?

Correct.

> Should I manually migrate those old files, and how?

Copy them.
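
For example, something along these lines (a rough, untested sketch; rewriting 
each file in place makes its data land under the directory's new layout):

  setfattr -n ceph.dir.layout.pool -v NewPool Folder
  # rewrite every existing regular file so its data moves to NewPool
  find Folder -type f -print0 | while IFS= read -r -d '' f; do
      cp -a "$f" "$f.migrating" && mv -f "$f.migrating" "$f"
  done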


--
Patrick Donnelly, Ph.D.
He / Him / His
Principal Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] cephadm automatic sizing of WAL/DB on SSD

2022-07-28 Thread Calhoun, Patrick
Hi,

I'd like to understand if the following behaviour is a bug.
I'm running ceph 16.2.9.

In a new OSD node with 24 hdd (16 TB each) and 2 ssd (1.44 TB each), I'd like 
to have "ceph orch" allocate WAL and DB on the ssd devices.

I use the following service spec:
spec:
  data_devices:
    rotational: 1
    size: '14T:'
  db_devices:
    rotational: 0
    size: '1T:'
  db_slots: 12

This results in each OSD having a 60GB volume for WAL/DB, which equates to 50% 
total usage in the VG on each ssd, and 50% free.
I honestly don't know what size to expect, but exactly 50% of capacity makes me 
suspect this is due to a bug:
https://tracker.ceph.com/issues/54541
(In fact, I had run into this bug when specifying block_db_size rather than 
db_slots)

Questions:
  Am I being bitten by that bug?
  Is there a better approach, in general, to my situation?
  Are DB sizes still governed by the rocksdb tiering? (I thought that this was 
mostly resolved by https://github.com/ceph/ceph/pull/29687 )
  If I provision a 61GB DB/WAL logical volume, is that effectively a 30GB 
database plus 30GB of extra room for compaction?
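
(For reference, the sizes that actually get allocated can be inspected on the 
OSD host; a sketch, assuming cephadm:)

  # from the host, via the cephadm container
  cephadm shell -- ceph-volume lvm list

  # or look at the logical volumes directly
  lvs -o lv_name,vg_name,lv_size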

Thanks,
Patrick
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephadm automatic sizing of WAL/DB on SSD

2022-07-29 Thread Calhoun, Patrick
Thanks, Arthur,

I think you are right about that bug looking very similar to what I've 
observed. I'll try to remember to update the list once the fix is merged and 
released and I get a chance to test it.

I'm hoping somebody can comment on Ceph's current best practices for sizing 
WAL/DB volumes, considering RocksDB levels and compaction.

-Patrick


From: Arthur Outhenin-Chalandre 
Sent: Friday, July 29, 2022 2:11 AM
To: ceph-users@ceph.io 
Subject: [ceph-users] Re: cephadm automatic sizing of WAL/DB on SSD

Hi Patrick,

On 7/28/22 16:22, Calhoun, Patrick wrote:
> In a new OSD node with 24 hdd (16 TB each) and 2 ssd (1.44 TB each), I'd like 
> to have "ceph orch" allocate WAL and DB on the ssd devices.
>
> I use the following service spec:
> spec:
>   data_devices:
>     rotational: 1
>     size: '14T:'
>   db_devices:
>     rotational: 0
>     size: '1T:'
>   db_slots: 12
>
> This results in each OSD having a 60GB volume for WAL/DB, which equates to 
> 50% total usage in the VG on each ssd, and 50% free.
> I honestly don't know what size to expect, but exactly 50% of capacity makes 
> me suspect this is due to a bug:
> https://tracker.ceph.com/issues/54541
> (In fact, I had run into this bug when specifying block_db_size rather than 
> db_slots)
>
> Questions:
>   Am I being bitten by that bug?
>   Is there a better approach, in general, to my situation?
>   Are DB sizes still governed by the rocksdb tiering? (I thought that this 
> was mostly resolved by https://github.com/ceph/ceph/pull/29687 )
>   If I provision a 61GB DB/WAL logical volume, is that effectively a 30GB 
> database plus 30GB of extra room for compaction?

I don't use cephadm, but it may be related to this regression:
https://tracker.ceph.com/issues/56031. At least the symptoms look very
similar...

Cheers,

--
Arthur Outhenin-Chalandre
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io