Hi,
    I ran into some more problems while installing Ceph.
#1. I ran the command "ceph-deploy new ceph-0" and got the ceph.conf file shown below. However, it does not contain any entry for osd pool default size or public network (my guess at what I should add by hand is after the conf output).
[root@ceph-2 my-cluster]# more ceph.conf
[global]
auth_service_required = cephx
filestore_xattr_use_omap = true
auth_client_required = cephx
auth_cluster_required = cephx
mon_host = 192.168.72.33
mon_initial_members = ceph-0
fsid = 74d682b5-2bf2-464c-8462-740f96bcc525
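
From the quick start page, I understand these two settings are meant to be added to ceph.conf by hand after running "ceph-deploy new", so I am planning to put something like the following under [global]. The values are only my guesses (2 replicas because I only have a few OSD hosts, and 192.168.72.0/24 because my monitor is at 192.168.72.33); please correct me if they are wrong:

    # guessed replica count for newly created pools
    osd pool default size = 2
    # guessed subnet that the monitor address 192.168.72.33 belongs to
    public network = 192.168.72.0/24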

#2. I ignored problem #1 and went on setting up the Ceph Storage Cluster, following the manual at
http://ceph.com/docs/master/start/quick-ceph-deploy/
but I hit an error when running the command 'ceph-deploy osd activate ceph-2:/mnt/sda'.
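For reference, the sequence of commands on that page, as far as I understand it, is roughly the following (node names replaced with mine; please tell me if I skipped a required step such as "mon create-initial"):

    # create the initial ceph.conf and monitor keyring on the admin node
    ceph-deploy new ceph-0
    # install the ceph packages on all nodes
    ceph-deploy install ceph-0 ceph-2
    # create the initial monitor(s) and gather the bootstrap keys
    ceph-deploy mon create-initial
    # prepare and activate an OSD on ceph-2
    ceph-deploy osd prepare ceph-2:/mnt/sda
    ceph-deploy osd activate ceph-2:/mnt/sda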
Error message:
[root@ceph-0 my-cluster]#ceph-deploy osd prepare ceph-2:/mnt/sda
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.23): /usr/bin/ceph-deploy osd prepare ceph-2:/mnt/sda
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-2:/mnt/sda:
[ceph-2][DEBUG ] connected to host: ceph-2
[ceph-2][DEBUG ] detect platform information from remote host
[ceph-2][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-2
[ceph-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-2][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph-2 disk /mnt/sda journal None activate False
[ceph-2][INFO  ] Running command: ceph-disk -v prepare --fs-type xfs --cluster ceph -- /mnt/sda
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[ceph-2][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /mnt/sda
[ceph-2][INFO  ] checking OSD status...
[ceph-2][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-2 is now ready for osd use.
Error in sys.exitfunc:
[root@ceph-0 my-cluster]# ceph-deploy osd activate  ceph-2:/mnt/sda
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.23): /usr/bin/ceph-deploy osd activate ceph-2:/mnt/sda
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-2:/mnt/sda:
[ceph-2][DEBUG ] connected to host: ceph-2
[ceph-2][DEBUG ] detect platform information from remote host
[ceph-2][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] activating host ceph-2 disk /mnt/sda
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph-2][INFO  ] Running command: ceph-disk -v activate --mark-init sysvinit --mount /mnt/sda
[ceph-2][WARNIN] DEBUG:ceph-disk:Cluster uuid is af23707d-325f-4846-bba9-b88ec953be80
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph-2][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[ceph-2][WARNIN] DEBUG:ceph-disk:OSD uuid is ca9f6649-b4b8-46ce-a860-1d81eed4fd5e
[ceph-2][WARNIN] DEBUG:ceph-disk:Allocating OSD id...
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise ca9f6649-b4b8-46ce-a860-1d81eed4fd5e
[ceph-2][WARNIN] 2015-05-14 17:37:10.988914 7f373bd34700  0 librados: client.bootstrap-osd authentication error (1) Operation not permitted
[ceph-2][WARNIN] Error connecting to cluster: PermissionError
[ceph-2][WARNIN] ceph-disk: Error: ceph osd create failed: Command '/usr/bin/ceph' returned non-zero exit status 1:
[ceph-2][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init sysvinit --mount /mnt/sda

Error in sys.exitfunc:
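
The activate failure looks to me like the monitor is rejecting the client.bootstrap-osd key on ceph-2. Below is how I am planning to check this: comparing the key ceph-disk uses on the OSD host against the one registered on the monitor, and re-fetching the keys with ceph-deploy if they differ. This is only my guess at the right check (ceph-0 is my monitor/admin node, ceph-2 the OSD host), so please correct me:

    # on ceph-2: the key that ceph-disk passed to /usr/bin/ceph above
    cat /var/lib/ceph/bootstrap-osd/ceph.keyring
    # on ceph-0 (monitor): the key the cluster actually expects
    ceph auth get client.bootstrap-osd
    # on the admin node, if the keys differ: re-fetch the bootstrap keys from the monitor
    ceph-deploy gatherkeys ceph-0

Is this the right way to verify it, or is there something else I should look at?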

I look forward to hearing from you soon.
Best Regards!
zhongbo