Okay, I think I may have solved my own problem. I was using the Ceph Octopus 
packages distributed by Ubuntu. When I switched to the packages distributed by 
the Ceph project, the cephadm adopt run was more verbose and apparently set the 
requisite permissions. This time, after a reboot, the OSD Docker container ran 
as expected.
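
In case anyone else hits this, switching to the Ceph-distributed packages on 
Ubuntu 20.04 looks roughly like the following (the repo line and package names 
are my assumptions for focal/Octopus, so check the Ceph install docs before 
copying):

# Add the Ceph project's Octopus repository (focal = Ubuntu 20.04)
curl -fsSL https://download.ceph.com/keys/release.asc | sudo apt-key add -
sudo add-apt-repository 'deb https://download.ceph.com/debian-octopus/ focal main'
# Replace the Ubuntu builds with the Ceph builds
sudo apt update && sudo apt install --only-upgrade cephadm ceph-osd
# Re-run the adoption for the affected OSD
sudo cephadm adopt --style legacy --name osd.8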

~ Sean
On Jul 3, 2020, 9:02 AM -0500, Sean Johnson <s...@ttys0.net>, wrote:
> I have a situation where OSDs won’t work as Docker containers with Octopus on 
> an Ubuntu 20.04 host.
>
> The cephadm adopt --style legacy --name osd.8 command works as expected and 
> sets up the /var/lib/ceph/ directory correctly:
>
> root@balin:~# ll /var/lib/ceph/c3d06c94-bb66-4f84-bf78-470a2364b667/osd.8/
> total 64
> drwx------ 2 167 167 4096 Jul 3 08:39 ./
> drwx------ 8 167 167 4096 Jul 3 08:30 ../
> lrwxrwxrwx 1 167 167 93 Jul 3 08:39 block -> 
> /dev/ceph-590abe6f-abac-46b7-9455-80f69b63cf89/osd-block-5b229640-14e5-4e7f-9993-9368d172c30c
> -rw------- 1 167 167 37 Jul 3 08:39 ceph_fsid
> -rw------- 1 167 167 283 Jul 3 08:14 config
> -rw------- 1 167 167 37 Jul 3 08:39 fsid
> -rw------- 1 167 167 55 Jul 3 08:39 keyring
> -rw------- 1 167 167 6 Jul 3 08:39 ready
> -rw------- 1 167 167 3 May 14 21:41 require_osd_release
> -rw------- 1 167 167 10 Jul 3 08:39 type
> -rw------- 1 167 167 38 Jul 3 08:14 unit.configured
> -rw------- 1 167 167 48 May 14 21:41 unit.created
> -rw------- 1 167 167 28 Jul 3 08:14 unit.image
> -rw------- 1 167 167 825 Jul 3 08:14 unit.poststop
> -rw------- 1 167 167 2367 Jul 3 08:14 unit.run
> -rw------- 1 167 167 2 Jul 3 08:39 whoami
>
> However, the problem shows up when the container starts.
>
> Jul 03 08:39:40 balin systemd[1]: Started Ceph osd.8 for 
> c3d06c94-bb66-4f84-bf78-470a2364b667.
> Jul 03 08:39:41 balin bash[5412]: Running command: /usr/bin/chown -R 
> ceph:ceph /var/lib/ceph/osd/ceph-8
> Jul 03 08:39:41 balin bash[5412]: Running command: 
> /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev 
> /dev/ceph-590abe6f-abac-46b7-9455-80f69b63cf89/osd-blo>
> Jul 03 08:39:41 balin bash[5412]: Running command: /usr/bin/ln -snf 
> /dev/ceph-590abe6f-abac-46b7-9455-80f69b63cf89/osd-block-5b229640-14e5-4e7f-9993-9368d172c30c
>  /var/li>
> Jul 03 08:39:41 balin bash[5412]: Running command: /usr/bin/chown -h 
> ceph:ceph /var/lib/ceph/osd/ceph-8/block
> Jul 03 08:39:41 balin bash[5412]: Running command: /usr/bin/chown -R 
> ceph:ceph /dev/dm-1
> Jul 03 08:39:41 balin bash[5412]: Running command: /usr/bin/chown -R 
> ceph:ceph /var/lib/ceph/osd/ceph-8
> Jul 03 08:39:41 balin bash[5412]: --> ceph-volume lvm activate successful for 
> osd ID: 8
> Jul 03 08:39:41 balin bash[5482]: debug 2020-07-03T13:39:41.596+0000 
> 7f52b1e70f40  0 set uid:gid to 167:167 (ceph:ceph)
> Jul 03 08:39:41 balin bash[5482]: debug 2020-07-03T13:39:41.596+0000 
> 7f52b1e70f40  0 ceph version 15.2.4 
> (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable), pro>
> Jul 03 08:39:41 balin bash[5482]: debug 2020-07-03T13:39:41.596+0000 
> 7f52b1e70f40  0 pidfile_write: ignore empty --pid-file
> Jul 03 08:39:41 balin bash[5482]: debug 2020-07-03T13:39:41.600+0000 
> 7f52b1e70f40  1 bdev create path /var/lib/ceph/osd/ceph-8/block type kernel
> Jul 03 08:39:41 balin bash[5482]: debug 2020-07-03T13:39:41.600+0000 
> 7f52b1e70f40  1 bdev(0x55b992cd8000 /var/lib/ceph/osd/ceph-8/block) open path 
> /var/lib/ceph/osd/ceph>
> Jul 03 08:39:41 balin bash[5482]: debug 2020-07-03T13:39:41.600+0000 
> 7f52b1e70f40  1 bdev(0x55b992cd8000 /var/lib/ceph/osd/ceph-8/block) open size 
> 12000134430720 (0xae9f>
> Jul 03 08:39:41 balin bash[5482]: debug 2020-07-03T13:39:41.600+0000 
> 7f52b1e70f40  1 bluestore(/var/lib/ceph/osd/ceph-8) _set_cache_sizes 
> cache_size 1073741824 meta 0.4 >
> Jul 03 08:39:41 balin bash[5482]: debug 2020-07-03T13:39:41.600+0000 
> 7f52b1e70f40  1 bdev create path /var/lib/ceph/osd/ceph-8/block type kernel
> Jul 03 08:39:41 balin bash[5482]: debug 2020-07-03T13:39:41.600+0000 
> 7f52b1e70f40  1 bdev(0x55b992cd8700 /var/lib/ceph/osd/ceph-8/block) open path 
> /var/lib/ceph/osd/ceph>
> Jul 03 08:39:41 balin bash[5482]: debug 2020-07-03T13:39:41.600+0000 
> 7f52b1e70f40  1 bdev(0x55b992cd8700 /var/lib/ceph/osd/ceph-8/block) open size 
> 12000134430720 (0xae9f>
> Jul 03 08:39:41 balin bash[5482]: debug 2020-07-03T13:39:41.600+0000 
> 7f52b1e70f40  1 bluefs add_block_device bdev 1 path 
> /var/lib/ceph/osd/ceph-8/block size 11 TiB
> Jul 03 08:39:41 balin bash[5482]: debug 2020-07-03T13:39:41.600+0000 
> 7f52b1e70f40  1 bdev(0x55b992cd8700 /var/lib/ceph/osd/ceph-8/block) close
> Jul 03 08:39:41 balin bash[5482]: debug 2020-07-03T13:39:41.888+0000 
> 7f52b1e70f40  1 bdev(0x55b992cd8000 /var/lib/ceph/osd/ceph-8/block) close
> Jul 03 08:39:42 balin bash[5482]: debug 2020-07-03T13:39:42.141+0000 
> 7f52b1e70f40  0 starting osd.8 osd_data /var/lib/ceph/osd/ceph-8 
> /var/lib/ceph/osd/ceph-8/journal
> Jul 03 08:39:42 balin bash[5482]: debug 2020-07-03T13:39:42.153+0000 
> 7f52b1e70f40  0 load: jerasure load: lrc load: isa
> Jul 03 08:39:42 balin bash[5482]: debug 2020-07-03T13:39:42.153+0000 
> 7f52b1e70f40  1 bdev create path /var/lib/ceph/osd/ceph-8/block type kernel
> Jul 03 08:39:42 balin bash[5482]: debug 2020-07-03T13:39:42.153+0000 
> 7f52b1e70f40  1 bdev(0x55b992cd8000 /var/lib/ceph/osd/ceph-8/block) open path 
> /var/lib/ceph/osd/ceph>
> Jul 03 08:39:42 balin bash[5482]: debug 2020-07-03T13:39:42.153+0000 
> 7f52b1e70f40 -1 bdev(0x55b992cd8000 /var/lib/ceph/osd/ceph-8/block) open open 
> got: (13) Permission d>
> Jul 03 08:39:42 balin bash[5482]: debug 2020-07-03T13:39:42.157+0000 
> 7f52b1e70f40  0 osd.8:0.OSDShard using op scheduler 
> ClassedOpQueueScheduler(queue=WeightedPriorityQu>
> Jul 03 08:39:42 balin bash[5482]: debug 2020-07-03T13:39:42.157+0000 
> 7f52b1e70f40  0 osd.8:1.OSDShard using op scheduler 
> ClassedOpQueueScheduler(queue=WeightedPriorityQu>
> Jul 03 08:39:42 balin bash[5482]: debug 2020-07-03T13:39:42.157+0000 
> 7f52b1e70f40  0 osd.8:2.OSDShard using op scheduler 
> ClassedOpQueueScheduler(queue=WeightedPriorityQu>
> Jul 03 08:39:42 balin bash[5482]: debug 2020-07-03T13:39:42.157+0000 
> 7f52b1e70f40  0 osd.8:3.OSDShard using op scheduler 
> ClassedOpQueueScheduler(queue=WeightedPriorityQu>
> Jul 03 08:39:42 balin bash[5482]: debug 2020-07-03T13:39:42.157+0000 
> 7f52b1e70f40  0 osd.8:4.OSDShard using op scheduler 
> ClassedOpQueueScheduler(queue=WeightedPriorityQu>
> Jul 03 08:39:42 balin bash[5482]: debug 2020-07-03T13:39:42.157+0000 
> 7f52b1e70f40 -1 bluestore(/var/lib/ceph/osd/ceph-8/block) _read_bdev_label 
> failed to open /var/lib/c>
> Jul 03 08:39:42 balin bash[5482]: debug 2020-07-03T13:39:42.157+0000 
> 7f52b1e70f40  1 bluestore(/var/lib/ceph/osd/ceph-8) _mount path 
> /var/lib/ceph/osd/ceph-8
> Jul 03 08:39:42 balin bash[5482]: debug 2020-07-03T13:39:42.157+0000 
> 7f52b1e70f40 -1 bluestore(/var/lib/ceph/osd/ceph-8/block) _read_bdev_label 
> failed to open /var/lib/c>
> Jul 03 08:39:42 balin bash[5482]: debug 2020-07-03T13:39:42.157+0000 
> 7f52b1e70f40  1 bdev create path /var/lib/ceph/osd/ceph-8/block type kernel
> Jul 03 08:39:42 balin bash[5482]: debug 2020-07-03T13:39:42.157+0000 
> 7f52b1e70f40  1 bdev(0x55b992cd8000 /var/lib/ceph/osd/ceph-8/block) open path 
> /var/lib/ceph/osd/ceph>
> Jul 03 08:39:42 balin bash[5482]: debug 2020-07-03T13:39:42.157+0000 
> 7f52b1e70f40 -1 bdev(0x55b992cd8000 /var/lib/ceph/osd/ceph-8/block) open open 
> got: (13) Permission d>
> Jul 03 08:39:42 balin bash[5482]: debug 2020-07-03T13:39:42.157+0000 
> 7f52b1e70f40 -1 osd.8 0 OSD:init: unable to mount object store
> Jul 03 08:39:42 balin bash[5482]: debug 2020-07-03T13:39:42.157+0000 
> 7f52b1e70f40 -1  ** ERROR: osd init failed: (13) Permission denied
> Jul 03 08:39:42 balin systemd[1]: 
> ceph-c3d06c94-bb66-4f84-bf78-470a2364b667@osd.8.service: Main process exited, 
> code=exited, status=1/FAILURE
> Jul 03 08:39:42 balin systemd[1]: 
> ceph-c3d06c94-bb66-4f84-bf78-470a2364b667@osd.8.service: Failed with result 
> 'exit-code'.
> Jul 03 08:39:53 balin systemd[1]: 
> ceph-c3d06c94-bb66-4f84-bf78-470a2364b667@osd.8.service: Scheduled restart 
> job, restart counter is at 6.
> Jul 03 08:39:53 balin systemd[1]: Stopped Ceph osd.8 for 
> c3d06c94-bb66-4f84-bf78-470a2364b667.
> Jul 03 08:39:53 balin systemd[1]: 
> ceph-c3d06c94-bb66-4f84-bf78-470a2364b667@osd.8.service: Start request 
> repeated too quickly.
> Jul 03 08:39:53 balin systemd[1]: 
> ceph-c3d06c94-bb66-4f84-bf78-470a2364b667@osd.8.service: Failed with result 
> 'exit-code'.
> Jul 03 08:39:53 balin systemd[1]: Failed to start Ceph osd.8 for 
> c3d06c94-bb66-4f84-bf78-470a2364b667.
>
> There’s a permission-denied error when opening the block device, and I can’t 
> find what might be causing it. If I set up a fresh OSD, it works fine in the 
> Docker container until I reboot, and then I get the same permission-denied 
> error. If I use ceph-volume lvm activate instead, a standard ceph systemd 
> service is created and the OSD comes online without any problem, though it’s 
> not visible to ceph orch.
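>
> (For what it’s worth, the quickest check I’ve found after a reboot is to 
> compare ownership of the resolved block device against the container’s ceph 
> uid/gid of 167:167, something like:
>
> readlink -f /var/lib/ceph/osd/ceph-8/block
> ls -ln /dev/dm-1    # numeric uid:gid; the container’s ceph user is 167:167
>
> using the osd.8 paths from the log above.)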
