Thanks
Ignazio

On Fri, 23 Jul 2021 at 18:29, Dimitri Savineau <dsavi...@redhat.com> wrote:

> It's probably better to create another thread for this instead of asking
> on an existing one.
>
> Anyway, even though the documentation says `cluster_network` [1], both
> options work fine (with and without the underscore).
> And I'm pretty sure this applies to all config options.
>
> [1]
> https://docs.ceph.com/en/latest/rados/configuration/network-config-ref/#id3
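>
> As a minimal sketch (the subnet is just a placeholder), either spelling
> sets the same option in ceph.conf, since Ceph treats spaces and
> underscores in option names as equivalent:
>
> [global]
> cluster network = 10.0.0.0/24
> # equivalent to:
> # cluster_network = 10.0.0.0/24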
>
> Regards,
>
> Dimitri
>
> On Fri, Jul 23, 2021 at 12:03 PM Ignazio Cassano <ignaziocass...@gmail.com>
> wrote:
>
>> Hello, I want to ask whether the correct setting in ceph.conf for the
>> cluster network is:
>> cluster network =
>>
>> Or
>>
>> cluster_network =
>>
>> Thanks
>>
>> On Fri, 23 Jul 2021 at 17:36, Dimitri Savineau <dsavi...@redhat.com> wrote:
>>
>>> Hi,
>>>
>>> This looks similar to https://tracker.ceph.com/issues/46687
>>>
>>> Since you want to use the hdd devices for bluestore data and the ssd
>>> devices for bluestore db, I would suggest using the rotational [1] filter
>>> instead of dealing with the size filter.
>>>
>>> ---
>>> service_type: osd
>>> service_id: osd_spec_default
>>> placement:
>>>   host_pattern: '*'
>>> data_devices:
>>>   rotational: 1
>>> db_devices:
>>>   rotational: 0
>>> ...
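>>>
>>> As a sketch of the workflow (using the same dry-run command you already
>>> ran), the spec can be previewed first:
>>>
>>> ceph orch apply osd -i /spec.yml --dry-run
>>>
>>> and then applied by re-running the command without --dry-run once the
>>> preview shows the expected data/db layout.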
>>>
>>> Could you give this a try?
>>>
>>> [1] https://docs.ceph.com/en/latest/cephadm/osd/#rotational
>>>
>>> Regards,
>>>
>>> Dimitri
>>>
>>> > On Fri, Jul 23, 2021 at 7:12 AM Gargano Andrea <andrea.garg...@dgsspa.com> wrote:
>>>
>>> > Hi all,
>>> > we are trying to install Ceph on Ubuntu 20.04 but we are not able to
>>> > create OSDs.
>>> > Entering the cephadm shell, we can see the following:
>>> >
>>> > root@tst2-ceph01:/# ceph -s
>>> >   cluster:
>>> >     id:     8b937a98-eb86-11eb-8509-c5c80111fd98
>>> >     health: HEALTH_ERR
>>> >             Module 'cephadm' has failed: No filters applied
>>> >             OSD count 0 < osd_pool_default_size 3
>>> >
>>> >   services:
>>> >     mon: 3 daemons, quorum tst2-ceph01,tst2-ceph03,tst2-ceph02 (age 2h)
>>> >     mgr: tst2-ceph01.kwyejx(active, since 3h), standbys: tst2-ceph02.qrpuzp
>>> >     osd: 0 osds: 0 up (since 115m), 0 in (since 105m)
>>> >
>>> >   data:
>>> >     pools:   0 pools, 0 pgs
>>> >     objects: 0 objects, 0 B
>>> >     usage:   0 B used, 0 B / 0 B avail
>>> >     pgs:
>>> >
>>> >
>>> > root@tst2-ceph01:/# ceph orch device ls
>>> > Hostname     Path      Type  Serial                            Size   Health   Ident  Fault  Available
>>> > tst2-ceph01  /dev/sdb  hdd   600508b1001c1960d834c222fb64f2ea  1200G  Unknown  N/A    N/A    Yes
>>> > tst2-ceph01  /dev/sdc  hdd   600508b1001c36e812fb5d14997f5f47  1200G  Unknown  N/A    N/A    Yes
>>> > tst2-ceph01  /dev/sdd  hdd   600508b1001c01a0297ac2c5e8039063  1200G  Unknown  N/A    N/A    Yes
>>> > tst2-ceph01  /dev/sde  hdd   600508b1001cf4520d0f0155d0dd31ad  1200G  Unknown  N/A    N/A    Yes
>>> > tst2-ceph01  /dev/sdf  hdd   600508b1001cc911d4f570eba568a8d0  1200G  Unknown  N/A    N/A    Yes
>>> > tst2-ceph01  /dev/sdg  hdd   600508b1001c410bd38e6c55807bea25  1200G  Unknown  N/A    N/A    Yes
>>> > tst2-ceph01  /dev/sdh  ssd   600508b1001cdb21499020552589eadb   400G  Unknown  N/A    N/A    Yes
>>> > tst2-ceph02  /dev/sdb  hdd   600508b1001ce1f33b63f8859aeac9b4  1200G  Unknown  N/A    N/A    Yes
>>> > tst2-ceph02  /dev/sdc  hdd   600508b1001c0b4dbfa794d2b38f328e  1200G  Unknown  N/A    N/A    Yes
>>> > tst2-ceph02  /dev/sdd  hdd   600508b1001c145b8de4e4e7cc9129d5  1200G  Unknown  N/A    N/A    Yes
>>> > tst2-ceph02  /dev/sde  hdd   600508b1001c1d81d0aaacfdfd20f5f1  1200G  Unknown  N/A    N/A    Yes
>>> > tst2-ceph02  /dev/sdf  hdd   600508b1001c28d2a2c261449ca1a3cc  1200G  Unknown  N/A    N/A    Yes
>>> > tst2-ceph02  /dev/sdg  hdd   600508b1001c1f9a964b1513f70b51b3  1200G  Unknown  N/A    N/A    Yes
>>> > tst2-ceph02  /dev/sdh  ssd   600508b1001c8040dd5cf17903940177   400G  Unknown  N/A    N/A    Yes
>>> > tst2-ceph03  /dev/sdb  hdd   600508b1001c900ef43d7745db17d5cc  1200G  Unknown  N/A    N/A    Yes
>>> > tst2-ceph03  /dev/sdc  hdd   600508b1001cf1b79f7dc2f79ab2c90b  1200G  Unknown  N/A    N/A    Yes
>>> > tst2-ceph03  /dev/sdd  hdd   600508b1001c83c09fe03eb17e555f5f  1200G  Unknown  N/A    N/A    Yes
>>> > tst2-ceph03  /dev/sde  hdd   600508b1001c9c4c5db12fabf54a4ff3  1200G  Unknown  N/A    N/A    Yes
>>> > tst2-ceph03  /dev/sdf  hdd   600508b1001cdaa7dc09d751262e2cc9  1200G  Unknown  N/A    N/A    Yes
>>> > tst2-ceph03  /dev/sdg  hdd   600508b1001c8f435a08b7eae4a1323e  1200G  Unknown  N/A    N/A    Yes
>>> > tst2-ceph03  /dev/sdh  ssd   600508b1001c5e24f822d6790a5df65b   400G  Unknown  N/A    N/A    Yes
>>> >
>>> >
>>> > we wrote the following spec file:
>>> >
>>> > service_type: osd
>>> > service_id: osd_spec_default
>>> > placement:
>>> >   host_pattern: '*'
>>> > data_devices:
>>> >   size: '1200GB'
>>> > db_devices:
>>> >   size: '400GB'
>>> >
>>> > but when we run it, the following appears:
>>> >
>>> > root@tst2-ceph01:/# ceph orch apply osd -i /spec.yml --dry-run
>>> > WARNING! Dry-Runs are snapshots of a certain point in time and are bound
>>> > to the current inventory setup. If any on these conditions changes, the
>>> > preview will be invalid. Please make sure to have a minimal
>>> > timeframe between planning and applying the specs.
>>> > ################
>>> > OSDSPEC PREVIEWS
>>> > ################
>>> > Preview data is being generated.. Please re-run this command in a bit.
>>> > root@tst2-ceph01:/# ceph orch apply osd -i /spec.yml --dry-run
>>> > WARNING! Dry-Runs are snapshots of a certain point in time and are bound
>>> > to the current inventory setup. If any on these conditions changes, the
>>> > preview will be invalid. Please make sure to have a minimal
>>> > timeframe between planning and applying the specs.
>>> > ################
>>> > OSDSPEC PREVIEWS
>>> > ################
>>> > Preview data is being generated.. Please re-run this command in a bit.
>>> >
>>> >
>>> > It seems that the yml file is not read.
>>> > Any help, please?
>>> >
>>> > Thank you,
>>> >
>>> > Andrea
>>> >
>>>
>>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
