I've never used FQDNs this way, but there is an option for the cephadm
bootstrap command:

  --allow-fqdn-hostname
      allow hostname that is fully-qualified (contains ".")

Worth checking. I'm not sure what's behind it.
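
A minimal sketch of a bootstrap using that flag (the monitor IP here is just
a placeholder):

  # keep the fully-qualified hostname rather than the short name
  cephadm bootstrap --mon-ip 10.0.0.1 --allow-fqdn-hostname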
Thanks
On Wed, 8 Jan 2025 at 12:14, Piotr Pisz <[email protected]> wrote:
> Hi,
>
> We add hosts to the cluster by FQDN, manually (ceph orch host add), and
> everything works fine.
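> For reference, the manual form that works looks something like this (the
> --labels syntax is as in the cephadm docs; values are placeholders):
>
>   ceph orch host add ceph001.xx002.xx.xx.xx.com xx.xx.xx.xx --labels osd,rgw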
> However, if we use the spec file as below, the whole thing falls apart.
>
> ---
> service_type: host
> addr: xx.xx.xx.xx
> hostname: ceph001.xx002.xx.xx.xx.com
> location:
>   root: xx002
>   rack: rack01
> labels:
>   - osd
>   - rgw
> ---
> service_type: osd
> service_id: object_hdd
> service_name: osd.object_hdd
> placement:
>   host_pattern: ceph*
> crush_device_class: object_hdd
> spec:
>   data_devices:
>     rotational: 1
>   db_devices:
>     rotational: 0
>     size: '3000G:'
> ---
> service_type: osd
> service_id: index_nvme
> service_name: osd.index_nvme
> placement:
>   host_pattern: ceph*
> crush_device_class: index_nvme
> spec:
>   data_devices:
>     rotational: 0
>     size: ':900G'
>
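> We apply the spec roughly like this (the file name is ours; --dry-run
> previews the result first):
>
>   ceph orch apply -i host_and_osd_spec.yaml --dry-run
>   ceph orch apply -i host_and_osd_spec.yaml
>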
> Applying this spec results in two host entries, one with the FQDN and the
> other with the short name:
>
> root@mon001(xx002):~/cephadm# ceph osd df tree
> ID  CLASS       WEIGHT     REWEIGHT  SIZE     RAW USE  DATA     OMAP    META     AVAIL    %USE  VAR   PGS  STATUS  TYPE NAME
> -4              0          -         0 B      0 B      0 B      0 B     0 B      0 B      0     0     -            root dias002
> -3              0          -         0 B      0 B      0 B      0 B     0 B      0 B      0     0     -            rack rack01
> -2              0          -         0 B      0 B      0 B      0 B     0 B      0 B      0     0     -            host ceph001.xx002.xx.xx.xx.com
> -1              662.71497  -         663 TiB  7.0 TiB  102 MiB  37 KiB  1.7 GiB  656 TiB  1.05  1.00  -            root default
> -9              662.71497  -         663 TiB  7.0 TiB  102 MiB  37 KiB  1.7 GiB  656 TiB  1.05  1.00  -            host ceph001
> 36  index_nvme  0.87329    1.00000   894 GiB  33 MiB   2.7 MiB  1 KiB   30 MiB   894 GiB  0.00  0.00  0    up      osd.36
>  0  object_hdd  18.38449   1.00000   18 TiB   199 GiB  2.7 MiB  1 KiB   56 MiB   18 TiB   1.06  1.00  0    up      osd.0
>  1  object_hdd  18.38449   1.00000   18 TiB   199 GiB  2.7 MiB  1 KiB   74 MiB   18 TiB   1.06  1.00  0    up      osd.1
>  2  object_hdd  18.38449   1.00000   18 TiB   199 GiB  2.7 MiB  1 KiB   56 MiB   18 TiB   1.06  1.00  0    up      osd.2
>
> This looks like a bug, but I'm not sure. Has anyone encountered something
> similar?
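>
> For comparison, the host names the orchestrator itself has registered can
> be listed with:
>
>   ceph orch host ls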
>
> Regards,
> Piotr
--
Łukasz Borek
[email protected]
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]