Philip;

There isn't any documentation that shows specifically how to do that, though 
the below comes close.

Here's the Nautilus documentation on CRUSH operations:
https://docs.ceph.com/docs/nautilus/rados/operations/crush-map/

About a third of the way down the page is a discussion of "Device Classes."  In 
that section it talks about creating CRUSH rules that target certain device 
classes (hdd, ssd, and nvme, by default).
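
For example, to create a replicated rule that only targets OSDs with the "ssd" 
device class, something like this should work (the rule name "fast_ssd" is just 
an example; "default" is the usual CRUSH root and "host" the failure domain):

    ceph osd crush rule create-replicated fast_ssd default host ssd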

Once you have a rule, you can configure a pool to use the rule.
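
Something like this should do it (the pool name is just an example):

    ceph osd pool set rbd_fast crush_rule fast_ssd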

Thank you,

Dominic L. Hilsbos, MBA 
Director - Information Technology 
Perform Air International Inc.
dhils...@performair.com 
www.PerformAir.com


-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Philip 
Brown
Sent: Monday, December 16, 2019 3:43 PM
To: Nathan Fish
Cc: ceph-users
Subject: Re: [ceph-users] Separate disk sets for high IO?

Sounds very useful.

Is there any online example documentation for this?
I haven't found any so far.


----- Original Message -----
From: "Nathan Fish" <lordci...@gmail.com>
To: "Marc Roos" <m.r...@f1-outsourcing.eu>
Cc: "ceph-users" <ceph-users@lists.ceph.com>, "Philip Brown" <pbr...@medata.com>
Sent: Monday, December 16, 2019 2:07:44 PM
Subject: Re: [ceph-users] Separate disk sets for high IO?

Indeed, you can set the device class to pretty much any arbitrary string and
specify it in a rule. By default, 'hdd', 'ssd', and I think 'nvme' are
autodetected - though my Optanes showed up as 'ssd'.
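
If you want to override what was autodetected, something like this should work
(osd.7 here is just a placeholder; the existing class has to be removed first):

    ceph osd crush rm-device-class osd.7
    ceph osd crush set-device-class nvme osd.7

"ceph osd crush class ls" will show the classes currently in use.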

On Mon, Dec 16, 2019 at 4:58 PM Marc Roos <m.r...@f1-outsourcing.eu> wrote:
>
>
>
> You can classify OSDs, e.g. as ssd, and you can assign this class to a
> pool you create. This way you can have RBDs running on only SSDs. I
> think there is also a class for nvme, and you can create custom classes.
>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
