Thanks Yury,

ceph-volume has always listed these devices as available, but ceph orch does
not; for ceph orch they simply do not seem to exist.
Adding them manually does not help either (I had tried that before and have now
tried it again):

root@terraformdemo:~# ceph orch daemon add osd 192.168.72.10:/dev/sdc
root@terraformdemo:~# ceph orch daemon add osd 192.168.72.10:/dev/sde
root@terraformdemo:~# ceph status
 cluster:
    id:     655a7a32-3bbf-11ec-920e-000c29da2e6a
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 1

  services:
    mon: 1 daemons, quorum terraformdemo (age 4m)
    mgr: terraformdemo.aylzbb(active, since 4m)
    osd: 0 osds: 0 up, 0 in (since 6d)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

Before doing that I had rebooted the VM. I've also tried the hostname instead
of the IP – no difference…
It is also quite irritating that there are no error messages…

--
Carsten


From: Yury Kirsanov <y.kirsa...@gmail.com>
Sent: Tuesday, 9 November 2021 14:05
To: Scharfenberg, Carsten <c.scharfenb...@francotyp.com>
Cc: Сергей Процун <proserge...@gmail.com>; Zach Heise <he...@ssc.wisc.edu>; 
ceph-users <ceph-users@ceph.io>
Subject: Re: [ceph-users] Re: fresh pacific installation does not detect 
available disks

Try to do:

ceph orch daemon add osd <host>:/dev/sdc

And then

ceph orch daemon add osd <host>:/dev/sde

This should succeed as sdc and sde are both marked as available at the moment. 
Hope this helps!
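
If that works, the new OSDs should also show up in the output of:

ceph osd tree
ceph orch ps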

Regards,
Yury.

On Wed, Nov 10, 2021 at 12:01 AM Yury Kirsanov <y.kirsa...@gmail.com> wrote:
By the way, /dev/sdc is now listed as available:

Device Path               Size         rotates available Model name
/dev/sdc                  20.00 GB     True    True      VMware Virtual S
/dev/sde                  20.00 GB     True    True      VMware Virtual S


On Tue, Nov 9, 2021 at 11:23 PM Scharfenberg, Carsten
<c.scharfenb...@francotyp.com> wrote:
Thanks for your support, guys.

Unfortunately I'm not familiar with the sgdisk tool, and it does not seem to be
available from the standard Debian package repository.
So I've tried Yury's approach of using dd… without success:

root@terraformdemo:~# dd if=/dev/zero of=/dev/sdc bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.05943 s, 1.0 GB/s
root@terraformdemo:~# dd if=/dev/zero of=/dev/sde bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.618606 s, 1.7 GB/s
root@terraformdemo:~# ceph-volume inventory

Device Path               Size         rotates available Model name
/dev/sdc                  20.00 GB     True    True      VMware Virtual S
/dev/sde                  20.00 GB     True    True      VMware Virtual S
/dev/sda                  20.00 GB     True    False     VMware Virtual S
/dev/sdb                  20.00 GB     True    False     VMware Virtual S
/dev/sdd                  20.00 GB     True    False     VMware Virtual S
root@terraformdemo:~# ceph orch device ls
root@terraformdemo:~#

Do you have any other ideas? Could it be that Ceph is not usable with this kind
of virtual hard disk?
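
Is there a way to force cephadm to rescan the disks? I've seen a refresh option
mentioned for the device listing, but I'm not sure it is the right approach:

ceph orch device ls --refresh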

--
Carsten

From: Сергей Процун <proserge...@gmail.com>
Sent: Thursday, 4 November 2021 21:36
To: Yury Kirsanov <y.kirsa...@gmail.com>
Cc: Zach Heise <he...@ssc.wisc.edu>; Scharfenberg, Carsten
<c.scharfenb...@francotyp.com>; ceph-users <ceph-users@ceph.io>
Subject: Re: [ceph-users] Re: fresh pacific installation does not detect 
available disks

Hello,

I agree with that point. When Ceph creates LVM volumes it adds LVM tags to
them; that's how Ceph detects that they are occupied by Ceph. So you should
remove the LVM volumes, and even better, clean all data on them. Usually it's
enough to clean just the head of the LVM partition, where the volume metadata
itself is stored.
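
For example, to check the tags and then clean a volume (the names below are
placeholders, adjust them to whatever lvs reports):

lvs -o lv_name,vg_name,lv_tags
lvremove -y <vg_name>/<lv_name>
dd if=/dev/zero of=/dev/<device> bs=1M count=10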

---
Sergey Protsun

On Thu, 4 Nov 2021, 22:29 Yury Kirsanov <y.kirsa...@gmail.com> wrote:
Hi,
You should erase any partitions or LVM groups on the disks and restart the OSD
hosts so Ceph is able to detect the drives. I usually just do 'dd
if=/dev/zero of=/dev/<sd*> bs=1M count=1024' and then reboot the host to make
sure it is definitely clean. Alternatively, you can zap the drives, remove the
LVM groups using pvremove, or remove the partitions using fdisk.
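
For example, any of these should do it (using sdc from your inventory; as far
as I remember the --destroy flag also wipes the LVM metadata):

sgdisk --zap-all /dev/sdc
ceph-volume lvm zap /dev/sdc --destroy
wipefs --all /dev/sdc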

Regards,
Yury.

On Fri, 5 Nov 2021, 07:24 Zach Heise <he...@ssc.wisc.edu> wrote:

> Hi Carsten,
>
> When I had problems on my physical hosts (recycled systems that we wanted
> to
> just use in a test cluster) I found that I needed to use sgdisk --zap-all
> /dev/sd{letter} to clean all partition maps off the disks before ceph would
> recognize them as available. Worth a shot in your case, even though as
> fresh
> virtual volumes they shouldn't have anything on them (yet) anyway.
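>
> If sgdisk isn't installed, on Debian it should come from the gdisk package,
> so something like this (with the device names from your inventory) ought to
> work:
>
> apt install gdisk
> sgdisk --zap-all /dev/sdc
> sgdisk --zap-all /dev/sde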
>
> -----Original Message-----
> From: Scharfenberg, Carsten <c.scharfenb...@francotyp.com>
> Sent: Thursday, November 4, 2021 12:59 PM
> To: ceph-users@ceph.io
> Subject: [ceph-users] fresh pacific installation does not detect available
> disks
>
> Hello everybody,
>
> As a Ceph newbie I've tried setting up Ceph Pacific according to the
> official documentation: https://docs.ceph.com/en/latest/cephadm/install/
> The intention was to set up a single-node "cluster" with radosgw to provide
> local S3 storage.
> This failed because my Ceph "cluster" would not detect any OSDs.
> I started from a Debian 11.1 (bullseye) VM hosted on VMware Workstation. Of
> course I've added some additional disk images to be used as OSDs.
> These are the steps I've performed:
>
> curl --silent --remote-name --location
> https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm
> chmod +x cephadm
> ./cephadm add-repo --release pacific
> ./cephadm install
>
> apt install -y cephadm
>
> cephadm bootstrap --mon-ip <my_ip>
>
> cephadm add-repo --release pacific
>
> cephadm install ceph-common
>
> ceph orch apply osd --all-available-devices
>
>
> The last command has no effect. Its sole output is:
>
>         Scheduled osd.all-available-devices update...
>
>
>
> Also ceph -s shows that no OSDs were added:
>
>   cluster:
>
>     id:     655a7a32-3bbf-11ec-920e-000c29da2e6a
>
>     health: HEALTH_WARN
>
>             OSD count 0 < osd_pool_default_size 1
>
>
>
>   services:
>
>     mon: 1 daemons, quorum terraformdemo (age 2d)
>
>     mgr: terraformdemo.aylzbb(active, since 2d)
>
>     osd: 0 osds: 0 up, 0 in (since 2d)
>
>
>
>   data:
>
>     pools:   0 pools, 0 pgs
>
>     objects: 0 objects, 0 B
>
>     usage:   0 B used, 0 B / 0 B avail
>
>     pgs:
>
>
> To find out what might be going wrong, I've also tried this:
>
>         cephadm install ceph-osd
>
>         ceph-volume inventory
> This results in a list that makes more sense:
>
> Device Path               Size         rotates available Model name
>
> /dev/sdc                  20.00 GB     True    True      VMware Virtual S
>
> /dev/sde                  20.00 GB     True    True      VMware Virtual S
>
> /dev/sda                  20.00 GB     True    False     VMware Virtual S
>
> /dev/sdb                  20.00 GB     True    False     VMware Virtual S
>
> /dev/sdd                  20.00 GB     True    False     VMware Virtual S
>
>
> So how can I convince cephadm to use the available devices?
>
> Regards,
> Carsten
>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
