Hi Adrian,

If you’re just using this for test/familiarity and performance isn’t an issue, 
then I’d create 3 x VMs on your host server and use them for Ceph.

It’ll work fine, just don’t expect Gb/s transfer speeds 😊
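
Something like this should get a test cluster going (a rough sketch;
hostnames and IPs are placeholders):

  # on the first VM: bootstrap the cluster with cephadm
  cephadm bootstrap --mon-ip 192.168.1.11
  # authorize the cluster's SSH key on the other two VMs
  ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph2
  ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph3
  # add them to the cluster and create OSDs on all unused disks
  ceph orch host add ceph2 192.168.1.12
  ceph orch host add ceph3 192.168.1.13
  ceph orch apply osd --all-available-devices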

A>

From: Adrian Sevcenco <adrian.sevce...@cern.ch>
Sent: 12 March 2021 11:22
To: ceph-users@ceph.io
Subject: [ceph-users] Re: ceph bootstrap initialization :: nvme drives not empty after >12h

On 3/12/21 12:31 PM, Eneko Lacunza wrote:
> Hi Adrian,
Hi!

> On 12/3/21 at 11:26, Adrian Sevcenco wrote:
>> Hi! Yesterday I bootstrapped (with cephadm) my first Ceph installation
>> and things looked somewhat OK... but today the OSDs are still not ready
>> and I have these warnings in the dashboard:
>> MDS_SLOW_METADATA_IO: 1 MDSs report slow metadata IOs
>> PG_AVAILABILITY: Reduced data availability: 64 pgs inactive
>> PG_DEGRADED: Degraded data redundancy: 2/14 objects degraded
>> (14.286%), 66 pgs undersized
>> TOO_FEW_OSDS: OSD count 2 < osd_pool_default_size 3
>
> This is the issue. You only have 2 OSDs, but the pool default size is 3.
It shouldn't be, as I changed the values:
ceph osd pool ls detail
pool 1 'NVME' replicated size 2 min_size 1 crush_rule 0 object_hash
rjenkins pg_num 128 pgp_num 1 pgp_num_target 128 autoscale_mode on
last_change 69 lfor 0/0/54 flags hashpspool,selfmanaged_snaps
stripe_width 0 pg_num_min 64 application cephfs,rbd
pool 2 'device_health_metrics' replicated size 2 min_size 1 crush_rule 0
object_hash rjenkins pg_num 2 pgp_num 1 pgp_num_target 2 autoscale_mode
on last_change 76 lfor 0/0/60 flags hashpspool stripe_width 0 pg_num_min
2 application mgr_devicehealth
pool 3 'cephfs.sev-ceph.meta' replicated size 2 min_size 1 crush_rule 0
object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change
77 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16
recovery_priority 5 application cephfs
pool 4 'cephfs.sev-ceph.data' replicated size 2 min_size 1 crush_rule 0
object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change
79 flags hashpspool stripe_width 0 application cephfs

>>
>> and in logs:
>> 3/12/21 12:18:19 PM
>> [INF]
>> OSD <1> is not empty yet. Waiting a bit more
>>
>> 3/12/21 12:18:19 PM
>> [INF]
>> OSD <0> is not empty yet. Waiting a bit more
>>
>> 3/12/21 12:18:19 PM
>> [INF]
>> Can't even stop one OSD. Cluster is probably busy. Retrying later..
>>
>> 3/12/21 12:18:19 PM
>> [ERR]
>> cmd: osd ok-to-stop failed with: 31 PGs are already too degraded,
>> would become too degraded or might become unavailable. (errno:-16)
>>
>> this is a single-node, whole-package Ceph install with 2 local NVMe
>> drives as OSDs (to be used 2x replicated, like a RAID1 array)
>>
>> So, can anyone tell me what is going on?
> I don't think you should use Ceph for this config. The bare minimum you
> should use is 3 nodes, because default failure domain is host.
Oooh... how can I change this to device?
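
From what I've found so far, it looks like it would be something like
creating a replicated CRUSH rule with "osd" as the failure domain and
pointing the pools at it (a rough sketch, with "replicated_osd" being
an arbitrary rule name), but I'm not sure:

  ceph osd crush rule create-replicated replicated_osd default osd
  ceph osd pool set NVME crush_rule replicated_osd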

> Maybe you can explain what your goal is, so people can recommend setups.
This is my first encounter with Ceph, so I just want a single-node
installation that lets me get familiar with both server administration
and with client RBD and MDS usage.
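
Something like this is the kind of client usage I mean (a rough sketch;
the image name is arbitrary, and the NVME pool above already has the
rbd application enabled):

  rbd create NVME/test --size 1024    # 1 GiB image in the NVME pool
  rbd map NVME/test                   # exposes it as e.g. /dev/rbd0
  mkfs.ext4 /dev/rbd0 && mount /dev/rbd0 /mnt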

Thank you!
Adrian
