Hi,
I am using cephadm and deployed all my OSDs using a spec file.
I just noticed that osd.ssd_osds has only one placement host instead of all 7.
How do I add the other placement hosts?
Re-running the spec file with host_pattern: "*" and --dry-run does not indicate
it will do anything.
Many thanks
Steven
This is the spec file:
# SSD OSDs
service_type: osd
service_id: ssd_osds
placement:
  host_pattern: "*"
crush_device_class: ssd_class
spec:
  data_devices:
    rotational: 0
    size: '6T:7T'
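For reference, the re-apply I am attempting is roughly the following (assuming the
spec above is saved as osd-ssd-config.yml, the file name used in the dry-run further
down):

# preview what cephadm would change for this spec, without applying it
ceph orch apply -i osd-ssd-config.yml --dry-run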
The relevant part of ceph orch ls:
NAME                       RUNNING  REFRESHED  AGE  PLACEMENT
osd.all-available-devices  4        8m ago     16h  *
osd.hdd_osds               74       8m ago     16h  *
osd.nvme_osds              25       8m ago     5w   *
osd.ssd_osds               84       8m ago     3w   ceph-host-1
The exported part of osd.ssd_osds:
---
service_type: osd
service_id: ssd_osds
service_name: osd.ssd_osds
placement:
  hosts:
  - ceph-host-1
spec:
  crush_device_class: ssd_class
  data_devices:
    rotational: 0
    size: 6T:7T
  filter_logic: AND
  objectstore: bluestore
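For completeness, that export was produced with something along these lines (the
exact invocation may differ slightly between releases):

# dump the stored OSD service specs as YAML
ceph orch ls osd --export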
The result of applying osd-ssd-config.yml (with --dry-run):
WARNING! Dry-Runs are snapshots of a certain point in time and are bound
to the current inventory setup. If any of these conditions change, the
preview will be invalid. Please make sure to have a minimal
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE |NAME |ADD_TO |REMOVE_FROM |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
+---------+------+------+------+----+-----+
|SERVICE |NAME |HOST |DATA |DB |WAL |
+---------+------+------+------+----+-----+
+---------+------+------+------+----+-----+