[ceph-users] Re: ceph orch and mixed SSD/rotating disks

2021-02-18 Thread Philip Brown
Yes, if I go through that VERY long guide and figure out that I want to create a
file, "howIreallywantOSDs.yml":
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0

and then remember to deploy with

ceph orch apply osd -i howIreallywantOSDs.yml --host=somenewhostname

then it seems to work quite nicely.
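For what it's worth, two sanity checks that should confirm the SSDs really got
used for DB/WAL (the --dry-run flag is documented for newer cephadm releases,
so treat that part as optional; run the second command on one of the OSD hosts):

# Preview which devices would become data vs. DB devices, without deploying:
ceph orch apply osd -i howIreallywantOSDs.yml --dry-run

# After the OSDs come up, check that each HDD-backed OSD lists an SSD-backed
# [db] volume in its LVM layout:
cephadm shell -- ceph-volume lvm list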

I just find it rather surprising that the official fancy new orchestration tool
doesn't do the obvious right thing out of the box.
(Whereas ceph-ansible does.)



- Original Message -
From: "Tony Liu" 
To: "Philip Brown" , "ceph-users" 
Sent: Thursday, February 18, 2021 9:48:11 AM
Subject: Re: ceph orch and mixed SSD/rotating disks

It may help if you could share how you added those OSDs.
This guide works for me.
https://docs.ceph.com/en/latest/cephadm/drivegroups/

Tony

From: Philip Brown 
Sent: February 17, 2021 09:30 PM
To: ceph-users
Subject: [ceph-users] ceph orch and mixed SSD/rotating disks

I'm coming back to trying mixed SSD+spinning disks after maybe a year.

It was my vague recollection that if you told Ceph "go auto-configure all the
disks", it would automatically carve up the SSDs into the appropriate
number of LVM segments and use them as WAL devices for each HDD-based OSD on
the system.

Was I wrong?
Because when I tried to bring up a brand new cluster (Octopus, cephadm
bootstrapped), with multiple nodes and multiple disks per node...
it seemed to bring up the SSDs as just another set of OSDs.

It clearly recognized them as SSDs; the output of "ceph orch device ls" showed
them as ssd, versus hdd for the others.
It just... didn't use them as I expected.

?
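(For reference, the hands-off path I had in mind is the one-liner below; as far
as I can tell it just turns every eligible device into a standalone OSD and does
no SSD carving for DB/WAL on its own:)

# Catch-all deployment: every available, eligible device becomes its own OSD.
ceph orch apply osd --all-available-devices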

Maybe I was thinking of ceph-ansible.
Is there not a nice way to do this with the new cephadm-based "ceph orch"?
I would rather not have to go write JSON files or whatever by hand, when a
computer should be perfectly capable of auto-generating this stuff itself.




--
Philip Brown| Sr. Linux System Administrator | Medata, Inc.
5 Peters Canyon Rd Suite 250
Irvine CA 92606
Office 714.918.1310| Fax 714.918.1325
pbr...@medata.com| www.medata.com
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

