Re: [ceph-users] "no valid command found" when running "ceph-deploy osd create"

2018-09-02 Thread Alfredo Deza
There should be useful logs from ceph-volume in
/var/log/ceph/ceph-volume.log that might show a bit more here.

I would also try running the command that fails directly on the server (sans
ceph-deploy) to see what it is that is actually failing. It seems like
the ceph-deploy log output is a bit out of order (maybe some race
condition here).
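
For example, something along these lines on rockpro64-1 itself should
reproduce it and show the full output (the ceph-volume invocation is the one
from your ceph-deploy log below; adjust paths to your setup):

    # run the same ceph-volume command that ceph-deploy issued
    sudo ceph-volume --cluster ceph lvm create --bluestore --data /dev/storage/bluestore
    # then check the end of the ceph-volume log for the underlying error
    sudo tail -n 100 /var/log/ceph/ceph-volume.log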


On Sun, Sep 2, 2018 at 2:53 AM, David Wahler  wrote:
> Hi all,
>
> I'm attempting to get a small Mimic cluster running on ARM, starting
> with a single node. Since there don't seem to be any Debian ARM64
> packages in the official Ceph repository, I had to build from source,
> which was fairly straightforward.
>
> After installing the .deb packages that I built and following the
> quick-start guide
> (http://docs.ceph.com/docs/mimic/start/quick-ceph-deploy/), things
> seemed to be working fine at first, but I got this error when
> attempting to create an OSD:
>
> rock64@rockpro64-1:~/my-cluster$ ceph-deploy osd create --data
> /dev/storage/bluestore rockpro64-1
> [ceph_deploy.conf][DEBUG ] found configuration file at:
> /home/rock64/.cephdeploy.conf
> [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd
> create --data /dev/storage/bluestore rockpro64-1
> [ceph_deploy.cli][INFO  ] ceph-deploy options:
> [ceph_deploy.cli][INFO  ]  verbose   : False
> [ceph_deploy.cli][INFO  ]  bluestore : None
> [ceph_deploy.cli][INFO  ]  cd_conf   :
> 
> [ceph_deploy.cli][INFO  ]  cluster   : ceph
> [ceph_deploy.cli][INFO  ]  fs_type   : xfs
> [ceph_deploy.cli][INFO  ]  block_wal : None
> [ceph_deploy.cli][INFO  ]  default_release   : False
> [ceph_deploy.cli][INFO  ]  username  : None
> [ceph_deploy.cli][INFO  ]  journal   : None
> [ceph_deploy.cli][INFO  ]  subcommand: create
> [ceph_deploy.cli][INFO  ]  host  : rockpro64-1
> [ceph_deploy.cli][INFO  ]  filestore : None
> [ceph_deploy.cli][INFO  ]  func  : <function osd at 0x7fa9ca0c80>
> [ceph_deploy.cli][INFO  ]  ceph_conf : None
> [ceph_deploy.cli][INFO  ]  zap_disk  : False
> [ceph_deploy.cli][INFO  ]  data  :
> /dev/storage/bluestore
> [ceph_deploy.cli][INFO  ]  block_db  : None
> [ceph_deploy.cli][INFO  ]  dmcrypt   : False
> [ceph_deploy.cli][INFO  ]  overwrite_conf: False
> [ceph_deploy.cli][INFO  ]  dmcrypt_key_dir   :
> /etc/ceph/dmcrypt-keys
> [ceph_deploy.cli][INFO  ]  quiet : False
> [ceph_deploy.cli][INFO  ]  debug : False
> [ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data
> device /dev/storage/bluestore
> [rockpro64-1][DEBUG ] connection detected need for sudo
> [rockpro64-1][DEBUG ] connected to host: rockpro64-1
> [rockpro64-1][DEBUG ] detect platform information from remote host
> [rockpro64-1][DEBUG ] detect machine type
> [rockpro64-1][DEBUG ] find the location of an executable
> [ceph_deploy.osd][INFO  ] Distro info: debian buster/sid sid
> [ceph_deploy.osd][DEBUG ] Deploying osd to rockpro64-1
> [rockpro64-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
> [rockpro64-1][WARNIN] osd keyring does not exist yet, creating one
> [rockpro64-1][DEBUG ] create a keyring file
> [rockpro64-1][DEBUG ] find the location of an executable
> [rockpro64-1][INFO  ] Running command: sudo /usr/sbin/ceph-volume
> --cluster ceph lvm create --bluestore --data /dev/storage/bluestore
> [rockpro64-1][DEBUG ] Running command: /usr/bin/ceph-authtool --gen-print-key
> [rockpro64-1][WARNIN] -->  RuntimeError: command returned non-zero
> exit status: 22
> [rockpro64-1][DEBUG ] Running command: /usr/bin/ceph --cluster ceph
> --name client.bootstrap-osd --keyring
> /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
> 4903fff3-550c-4ce3-aa7d-97193627c6c0
> [rockpro64-1][DEBUG ] --> Was unable to complete a new OSD, will
> rollback changes
> [rockpro64-1][DEBUG ] Running command: /usr/bin/ceph --cluster ceph
> --name client.bootstrap-osd --keyring
> /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0
> --yes-i-really-mean-it
> [rockpro64-1][DEBUG ]  stderr: no valid command found; 10 closest matches:
> [rockpro64-1][DEBUG ] osd tier add-cache   
> [rockpro64-1][DEBUG ] osd tier remove-overlay 
> [rockpro64-1][DEBUG ] osd out  [...]
> [rockpro64-1][DEBUG ] osd in  [...]
> [rockpro64-1][DEBUG ] osd down  [...]
> [rockpro64-1][DEBUG ]  stderr: osd unset
> full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent|nosnaptrim
> [rockpro64-1][DEBUG ] osd require-osd-release luminous|mimic
> {--yes-i-really-mean-it}
> [rockpro64-1][DEBUG ] osd erasure-code-profile ls
> [rockpro64-1][DEBUG ] osd set
> full|pause|noup|nodown|noout|noin|nobackf

Re: [ceph-users] "no valid command found" when running "ceph-deploy osd create"

2018-09-02 Thread David Wahler
Ah, ceph-volume.log pointed out the actual problem:

RuntimeError: Cannot use device (/dev/storage/bluestore). A vg/lv path
or an existing device is needed

When I changed "--data /dev/storage/bluestore" to "--data
storage/bluestore", everything worked fine.

I agree that the ceph-deploy logs are a bit confusing. I submitted a
PR to add a brief note to the quick-start guide, in case anyone else
makes the same mistake: https://github.com/ceph/ceph/pull/23879

Thanks for the assistance!

-- David

On Sun, Sep 2, 2018 at 7:44 AM Alfredo Deza  wrote:
>
> There should be useful logs from ceph-volume in
> /var/log/ceph/ceph-volume.log that might show a bit more here.
>
> I would also try running the command that fails directly on the server (sans
> ceph-deploy) to see what it is that is actually failing. It seems like
> the ceph-deploy log output is a bit out of order (maybe some race
> condition here).
>
>
> On Sun, Sep 2, 2018 at 2:53 AM, David Wahler  wrote:
> > Hi all,
> >
> > I'm attempting to get a small Mimic cluster running on ARM, starting
> > with a single node. Since there don't seem to be any Debian ARM64
> > packages in the official Ceph repository, I had to build from source,
> > which was fairly straightforward.
> >
> > After installing the .deb packages that I built and following the
> > quick-start guide
> > (http://docs.ceph.com/docs/mimic/start/quick-ceph-deploy/), things
> > seemed to be working fine at first, but I got this error when
> > attempting to create an OSD:
> >
> > rock64@rockpro64-1:~/my-cluster$ ceph-deploy osd create --data
> > /dev/storage/bluestore rockpro64-1
> > [ceph_deploy.conf][DEBUG ] found configuration file at:
> > /home/rock64/.cephdeploy.conf
> > [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd
> > create --data /dev/storage/bluestore rockpro64-1
> > [ceph_deploy.cli][INFO  ] ceph-deploy options:
> > [ceph_deploy.cli][INFO  ]  verbose   : False
> > [ceph_deploy.cli][INFO  ]  bluestore : None
> > [ceph_deploy.cli][INFO  ]  cd_conf   :
> > 
> > [ceph_deploy.cli][INFO  ]  cluster   : ceph
> > [ceph_deploy.cli][INFO  ]  fs_type   : xfs
> > [ceph_deploy.cli][INFO  ]  block_wal : None
> > [ceph_deploy.cli][INFO  ]  default_release   : False
> > [ceph_deploy.cli][INFO  ]  username  : None
> > [ceph_deploy.cli][INFO  ]  journal   : None
> > [ceph_deploy.cli][INFO  ]  subcommand: create
> > [ceph_deploy.cli][INFO  ]  host  : rockpro64-1
> > [ceph_deploy.cli][INFO  ]  filestore : None
> > [ceph_deploy.cli][INFO  ]  func  : <function osd at 0x7fa9ca0c80>
> > [ceph_deploy.cli][INFO  ]  ceph_conf : None
> > [ceph_deploy.cli][INFO  ]  zap_disk  : False
> > [ceph_deploy.cli][INFO  ]  data  :
> > /dev/storage/bluestore
> > [ceph_deploy.cli][INFO  ]  block_db  : None
> > [ceph_deploy.cli][INFO  ]  dmcrypt   : False
> > [ceph_deploy.cli][INFO  ]  overwrite_conf: False
> > [ceph_deploy.cli][INFO  ]  dmcrypt_key_dir   :
> > /etc/ceph/dmcrypt-keys
> > [ceph_deploy.cli][INFO  ]  quiet : False
> > [ceph_deploy.cli][INFO  ]  debug : False
> > [ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data
> > device /dev/storage/bluestore
> > [rockpro64-1][DEBUG ] connection detected need for sudo
> > [rockpro64-1][DEBUG ] connected to host: rockpro64-1
> > [rockpro64-1][DEBUG ] detect platform information from remote host
> > [rockpro64-1][DEBUG ] detect machine type
> > [rockpro64-1][DEBUG ] find the location of an executable
> > [ceph_deploy.osd][INFO  ] Distro info: debian buster/sid sid
> > [ceph_deploy.osd][DEBUG ] Deploying osd to rockpro64-1
> > [rockpro64-1][DEBUG ] write cluster configuration to 
> > /etc/ceph/{cluster}.conf
> > [rockpro64-1][WARNIN] osd keyring does not exist yet, creating one
> > [rockpro64-1][DEBUG ] create a keyring file
> > [rockpro64-1][DEBUG ] find the location of an executable
> > [rockpro64-1][INFO  ] Running command: sudo /usr/sbin/ceph-volume
> > --cluster ceph lvm create --bluestore --data /dev/storage/bluestore
> > [rockpro64-1][DEBUG ] Running command: /usr/bin/ceph-authtool 
> > --gen-print-key
> > [rockpro64-1][WARNIN] -->  RuntimeError: command returned non-zero
> > exit status: 22
> > [rockpro64-1][DEBUG ] Running command: /usr/bin/ceph --cluster ceph
> > --name client.bootstrap-osd --keyring
> > /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
> > 4903fff3-550c-4ce3-aa7d-97193627c6c0
> > [rockpro64-1][DEBUG ] --> Was unable to complete a new OSD, will
> > rollback changes
> > [rockpro64-1][DEBUG ] Running command: /usr/bin/ceph --cluster ceph
> > --name client.bootstrap-osd --keyring
> > /var/lib/ceph/bootst

Re: [ceph-users] "no valid command found" when running "ceph-deploy osd create"

2018-09-02 Thread Alfredo Deza
On Sun, Sep 2, 2018 at 12:00 PM, David Wahler  wrote:
> Ah, ceph-volume.log pointed out the actual problem:
>
> RuntimeError: Cannot use device (/dev/storage/bluestore). A vg/lv path
> or an existing device is needed

That is odd. Is it possible that the error in the log wasn't the one that
matched what you saw on ceph-deploy's end?

Usually ceph-deploy will just receive whatever ceph-volume produced.
>
> When I changed "--data /dev/storage/bluestore" to "--data
> storage/bluestore", everything worked fine.
>
> I agree that the ceph-deploy logs are a bit confusing. I submitted a
> PR to add a brief note to the quick-start guide, in case anyone else
> makes the same mistake: https://github.com/ceph/ceph/pull/23879
>
Thanks for the PR!

> Thanks for the assistance!
>
> -- David
>
> On Sun, Sep 2, 2018 at 7:44 AM Alfredo Deza  wrote:
>>
>> There should be useful logs from ceph-volume in
>> /var/log/ceph/ceph-volume.log that might show a bit more here.
>>
>> I would also try running the command that fails directly on the server (sans
>> ceph-deploy) to see what it is that is actually failing. It seems like
>> the ceph-deploy log output is a bit out of order (maybe some race
>> condition here).
>>
>>
>> On Sun, Sep 2, 2018 at 2:53 AM, David Wahler  wrote:
>> > Hi all,
>> >
>> > I'm attempting to get a small Mimic cluster running on ARM, starting
>> > with a single node. Since there don't seem to be any Debian ARM64
>> > packages in the official Ceph repository, I had to build from source,
>> > which was fairly straightforward.
>> >
>> > After installing the .deb packages that I built and following the
>> > quick-start guide
>> > (http://docs.ceph.com/docs/mimic/start/quick-ceph-deploy/), things
>> > seemed to be working fine at first, but I got this error when
>> > attempting to create an OSD:
>> >
>> > rock64@rockpro64-1:~/my-cluster$ ceph-deploy osd create --data
>> > /dev/storage/bluestore rockpro64-1
>> > [ceph_deploy.conf][DEBUG ] found configuration file at:
>> > /home/rock64/.cephdeploy.conf
>> > [ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd
>> > create --data /dev/storage/bluestore rockpro64-1
>> > [ceph_deploy.cli][INFO  ] ceph-deploy options:
>> > [ceph_deploy.cli][INFO  ]  verbose   : False
>> > [ceph_deploy.cli][INFO  ]  bluestore : None
>> > [ceph_deploy.cli][INFO  ]  cd_conf   :
>> > 
>> > [ceph_deploy.cli][INFO  ]  cluster   : ceph
>> > [ceph_deploy.cli][INFO  ]  fs_type   : xfs
>> > [ceph_deploy.cli][INFO  ]  block_wal : None
>> > [ceph_deploy.cli][INFO  ]  default_release   : False
>> > [ceph_deploy.cli][INFO  ]  username  : None
>> > [ceph_deploy.cli][INFO  ]  journal   : None
>> > [ceph_deploy.cli][INFO  ]  subcommand: create
>> > [ceph_deploy.cli][INFO  ]  host  : rockpro64-1
>> > [ceph_deploy.cli][INFO  ]  filestore : None
>> > [ceph_deploy.cli][INFO  ]  func  : <function osd at 0x7fa9ca0c80>
>> > [ceph_deploy.cli][INFO  ]  ceph_conf : None
>> > [ceph_deploy.cli][INFO  ]  zap_disk  : False
>> > [ceph_deploy.cli][INFO  ]  data  :
>> > /dev/storage/bluestore
>> > [ceph_deploy.cli][INFO  ]  block_db  : None
>> > [ceph_deploy.cli][INFO  ]  dmcrypt   : False
>> > [ceph_deploy.cli][INFO  ]  overwrite_conf: False
>> > [ceph_deploy.cli][INFO  ]  dmcrypt_key_dir   :
>> > /etc/ceph/dmcrypt-keys
>> > [ceph_deploy.cli][INFO  ]  quiet : False
>> > [ceph_deploy.cli][INFO  ]  debug : False
>> > [ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data
>> > device /dev/storage/bluestore
>> > [rockpro64-1][DEBUG ] connection detected need for sudo
>> > [rockpro64-1][DEBUG ] connected to host: rockpro64-1
>> > [rockpro64-1][DEBUG ] detect platform information from remote host
>> > [rockpro64-1][DEBUG ] detect machine type
>> > [rockpro64-1][DEBUG ] find the location of an executable
>> > [ceph_deploy.osd][INFO  ] Distro info: debian buster/sid sid
>> > [ceph_deploy.osd][DEBUG ] Deploying osd to rockpro64-1
>> > [rockpro64-1][DEBUG ] write cluster configuration to 
>> > /etc/ceph/{cluster}.conf
>> > [rockpro64-1][WARNIN] osd keyring does not exist yet, creating one
>> > [rockpro64-1][DEBUG ] create a keyring file
>> > [rockpro64-1][DEBUG ] find the location of an executable
>> > [rockpro64-1][INFO  ] Running command: sudo /usr/sbin/ceph-volume
>> > --cluster ceph lvm create --bluestore --data /dev/storage/bluestore
>> > [rockpro64-1][DEBUG ] Running command: /usr/bin/ceph-authtool 
>> > --gen-print-key
>> > [rockpro64-1][WARNIN] -->  RuntimeError: command returned non-zero
>> > exit status: 22
>> > [rockpro64-1][DEBUG ] Running command: /usr/bin/ceph --cluster ceph
>> > --n

Re: [ceph-users] "no valid command found" when running "ceph-deploy osd create"

2018-09-02 Thread David Wahler
On Sun, Sep 2, 2018 at 1:31 PM Alfredo Deza  wrote:
>
> On Sun, Sep 2, 2018 at 12:00 PM, David Wahler  wrote:
> > Ah, ceph-volume.log pointed out the actual problem:
> >
> > RuntimeError: Cannot use device (/dev/storage/bluestore). A vg/lv path
> > or an existing device is needed
>
> That is odd. Is it possible that the error in the log wasn't the one that
> matched what you saw on ceph-deploy's end?
>
> Usually ceph-deploy will just receive whatever ceph-volume produced.

I tried again, running ceph-volume directly this time, just to see if
I had mixed anything up. It looks like ceph-deploy is correctly
reporting the output of ceph-volume. The problem is that ceph-volume
only writes the relevant error message to the log file, and not to its
stdout/stderr.
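
In case it helps anyone else, grepping the log for the ERROR entry is a quick
way to get at the real message (default log location as above; adjust if
yours differs):

    # pull the lvm prepare error plus the traceback that follows it
    sudo grep -A 15 'ERROR' /var/log/ceph/ceph-volume.log | tail -n 20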

Console output:

rock64@rockpro64-1:~/my-cluster$ sudo ceph-volume --cluster ceph lvm
create --bluestore --data /dev/storage/foobar
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name
client.bootstrap-osd --keyring
/var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
e7dd6d45-b556-461c-bad1-83d98a5a1afa
--> Was unable to complete a new OSD, will rollback changes
Running command: /usr/bin/ceph --cluster ceph --name
client.bootstrap-osd --keyring
/var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.1
--yes-i-really-mean-it
 stderr: no valid command found; 10 closest matches:
[...etc...]

ceph-volume.log:

[2018-09-02 18:49:21,415][ceph_volume.main][INFO  ] Running command:
ceph-volume --cluster ceph lvm create --bluestore --data
/dev/storage/foobar
[2018-09-02 18:49:21,423][ceph_volume.process][INFO  ] Running
command: /usr/bin/ceph-authtool --gen-print-key
[2018-09-02 18:49:26,664][ceph_volume.process][INFO  ] stdout
AQCxMIxb+SezJRAAGAP/HHtHLVbciSQnZ/c/qw==
[2018-09-02 18:49:26,668][ceph_volume.process][INFO  ] Running
command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd
--keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
e7dd6d45-b556-461c-bad1-83d98a5a1afa
[2018-09-02 18:49:27,685][ceph_volume.process][INFO  ] stdout 1
[2018-09-02 18:49:27,686][ceph_volume.process][INFO  ] Running
command: /bin/lsblk --nodeps -P -o
NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL
/dev/storage/foobar
[2018-09-02 18:49:27,707][ceph_volume.process][INFO  ] stdout
NAME="storage-foobar" KNAME="dm-1" MAJ:MIN="253:1" FSTYPE=""
MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="100G"
STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw"
ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="1" SCHED=""
TYPE="lvm" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0"
PKNAME="" PARTLABEL=""
[2018-09-02 18:49:27,708][ceph_volume.process][INFO  ] Running
command: /bin/lsblk --nodeps -P -o
NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL
/dev/storage/foobar
[2018-09-02 18:49:27,720][ceph_volume.process][INFO  ] stdout
NAME="storage-foobar" KNAME="dm-1" MAJ:MIN="253:1" FSTYPE=""
MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="100G"
STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw"
ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="1" SCHED=""
TYPE="lvm" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0"
PKNAME="" PARTLABEL=""
[2018-09-02 18:49:27,720][ceph_volume.devices.lvm.prepare][ERROR ] lvm
prepare was unable to complete
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py",
line 216, in safe_prepare
self.prepare(args)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/decorators.py",
line 16, in is_root
return func(*a, **kw)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py",
line 283, in prepare
block_lv = self.prepare_device(args.data, 'block', cluster_fsid, osd_fsid)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py",
line 206, in prepare_device
raise RuntimeError(' '.join(error))
RuntimeError: Cannot use device (/dev/storage/foobar). A vg/lv path or
an existing device is needed
[2018-09-02 18:49:27,722][ceph_volume.devices.lvm.prepare][INFO  ]
will rollback OSD ID creation
[2018-09-02 18:49:27,723][ceph_volume.process][INFO  ] Running
command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd
--keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.1
--yes-i-really-mean-it
[2018-09-02 18:49:28,425][ceph_volume.process][INFO  ] stderr no valid
command found; 10 closest matches:
[...etc...]

-- David

> >
> > When I changed "--data /dev/storage/bluestore" to "--data
> > storage/bluestore", everything worked fine.
> >
> > I agree that the ceph-deploy logs are a bit confusing. I submitted a
> > PR to add a brief note to the quick-start guide, in case

Re: [ceph-users] "no valid command found" when running "ceph-deploy osd create"

2018-09-04 Thread Alfredo Deza
On Sun, Sep 2, 2018 at 3:01 PM, David Wahler  wrote:
> On Sun, Sep 2, 2018 at 1:31 PM Alfredo Deza  wrote:
>>
>> On Sun, Sep 2, 2018 at 12:00 PM, David Wahler  wrote:
>> > Ah, ceph-volume.log pointed out the actual problem:
>> >
>> > RuntimeError: Cannot use device (/dev/storage/bluestore). A vg/lv path
>> > or an existing device is needed
>>
>> That is odd. Is it possible that the error in the log wasn't the one that
>> matched what you saw on ceph-deploy's end?
>>
>> Usually ceph-deploy will just receive whatever ceph-volume produced.
>
> I tried again, running ceph-volume directly this time, just to see if
> I had mixed anything up. It looks like ceph-deploy is correctly
> reporting the output of ceph-volume. The problem is that ceph-volume
> only writes the relevant error message to the log file, and not to its
> stdout/stderr.
>
> Console output:
>
> rock64@rockpro64-1:~/my-cluster$ sudo ceph-volume --cluster ceph lvm
> create --bluestore --data /dev/storage/foobar
> Running command: /usr/bin/ceph-authtool --gen-print-key
> Running command: /usr/bin/ceph --cluster ceph --name
> client.bootstrap-osd --keyring
> /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
> e7dd6d45-b556-461c-bad1-83d98a5a1afa
> --> Was unable to complete a new OSD, will rollback changes
> Running command: /usr/bin/ceph --cluster ceph --name
> client.bootstrap-osd --keyring
> /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.1
> --yes-i-really-mean-it
>  stderr: no valid command found; 10 closest matches:
> [...etc...]
>
> ceph-volume.log:
>
> [2018-09-02 18:49:21,415][ceph_volume.main][INFO  ] Running command:
> ceph-volume --cluster ceph lvm create --bluestore --data
> /dev/storage/foobar
> [2018-09-02 18:49:21,423][ceph_volume.process][INFO  ] Running
> command: /usr/bin/ceph-authtool --gen-print-key
> [2018-09-02 18:49:26,664][ceph_volume.process][INFO  ] stdout
> AQCxMIxb+SezJRAAGAP/HHtHLVbciSQnZ/c/qw==
> [2018-09-02 18:49:26,668][ceph_volume.process][INFO  ] Running
> command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd
> --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
> e7dd6d45-b556-461c-bad1-83d98a5a1afa
> [2018-09-02 18:49:27,685][ceph_volume.process][INFO  ] stdout 1
> [2018-09-02 18:49:27,686][ceph_volume.process][INFO  ] Running
> command: /bin/lsblk --nodeps -P -o
> NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL
> /dev/storage/foobar
> [2018-09-02 18:49:27,707][ceph_volume.process][INFO  ] stdout
> NAME="storage-foobar" KNAME="dm-1" MAJ:MIN="253:1" FSTYPE=""
> MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="100G"
> STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw"
> ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="1" SCHED=""
> TYPE="lvm" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0"
> PKNAME="" PARTLABEL=""
> [2018-09-02 18:49:27,708][ceph_volume.process][INFO  ] Running
> command: /bin/lsblk --nodeps -P -o
> NAME,KNAME,MAJ:MIN,FSTYPE,MOUNTPOINT,LABEL,UUID,RO,RM,MODEL,SIZE,STATE,OWNER,GROUP,MODE,ALIGNMENT,PHY-SEC,LOG-SEC,ROTA,SCHED,TYPE,DISC-ALN,DISC-GRAN,DISC-MAX,DISC-ZERO,PKNAME,PARTLABEL
> /dev/storage/foobar
> [2018-09-02 18:49:27,720][ceph_volume.process][INFO  ] stdout
> NAME="storage-foobar" KNAME="dm-1" MAJ:MIN="253:1" FSTYPE=""
> MOUNTPOINT="" LABEL="" UUID="" RO="0" RM="0" MODEL="" SIZE="100G"
> STATE="running" OWNER="root" GROUP="disk" MODE="brw-rw"
> ALIGNMENT="0" PHY-SEC="4096" LOG-SEC="512" ROTA="1" SCHED=""
> TYPE="lvm" DISC-ALN="0" DISC-GRAN="0B" DISC-MAX="0B" DISC-ZERO="0"
> PKNAME="" PARTLABEL=""
> [2018-09-02 18:49:27,720][ceph_volume.devices.lvm.prepare][ERROR ] lvm
> prepare was unable to complete
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py",
> line 216, in safe_prepare
> self.prepare(args)
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/decorators.py",
> line 16, in is_root
> return func(*a, **kw)
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py",
> line 283, in prepare
> block_lv = self.prepare_device(args.data, 'block', cluster_fsid, osd_fsid)
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py",
> line 206, in prepare_device
> raise RuntimeError(' '.join(error))
> RuntimeError: Cannot use device (/dev/storage/foobar). A vg/lv path or
> an existing device is needed
> [2018-09-02 18:49:27,722][ceph_volume.devices.lvm.prepare][INFO  ]
> will rollback OSD ID creation
> [2018-09-02 18:49:27,723][ceph_volume.process][INFO  ] Running
> command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd
> --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.1
> --yes-i-really-mean-it
> [2018-09-02 18:49:28,425][ceph_volume.process][INFO  ] stderr no valid
> command found; 10 closest matches:
> [...etc...]

This is a bug. Thanks for di