[ceph-users] ceph-deploy osd activate ERROR

2015-05-14 Thread 张忠波
Hi,
I encountered some other problems when installing Ceph.
#1. When I run the command 'ceph-deploy new ceph-0', I get the ceph.conf
file below. However, it contains no settings for 'osd pool default size'
or 'public network'.
[root@ceph-2 my-cluster]# more ceph.conf
[global]
auth_service_required = cephx
filestore_xattr_use_omap = true
auth_client_required = cephx
auth_cluster_required = cephx
mon_host = 192.168.72.33
mon_initial_members = ceph-0
fsid = 74d682b5-2bf2-464c-8462-740f96bcc525

#2. I ignored problem #1 and continued setting up the Ceph Storage
Cluster, but encountered an error when running the command 'ceph-deploy
osd activate ceph-2:/mnt/sda'.
I am following the manual at
http://ceph.com/docs/master/start/quick-ceph-deploy/
Error message:
[root@ceph-0 my-cluster]#ceph-deploy osd prepare ceph-2:/mnt/sda
[ceph_deploy.conf][DEBUG ] found configuration file at:
/root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.23): /usr/bin/ceph-deploy osd
prepare ceph-2:/mnt/sda
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-2:/mnt/sda:
[ceph-2][DEBUG ] connected to host: ceph-2
[ceph-2][DEBUG ] detect platform information from remote host
[ceph-2][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-2
[ceph-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-2][INFO  ] Running command: udevadm trigger --subsystem-match=block
--action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph-2 disk /mnt/sda journal None
activate False
[ceph-2][INFO  ] Running command: ceph-disk -v prepare --fs-type xfs
--cluster ceph -- /mnt/sda
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd
--cluster=ceph --show-config-value=fsid
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd
--cluster=ceph --show-config-value=osd_journal_size
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
--cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[ceph-2][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /mnt/sda
[ceph-2][INFO  ] checking OSD status...
[ceph-2][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-2 is now ready for osd use.
Error in sys.exitfunc:
[root@ceph-0 my-cluster]# ceph-deploy osd activate  ceph-2:/mnt/sda
[ceph_deploy.conf][DEBUG ] found configuration file at:
/root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.23): /usr/bin/ceph-deploy osd
activate ceph-2:/mnt/sda
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-2:/mnt/sda:
[ceph-2][DEBUG ] connected to host: ceph-2
[ceph-2][DEBUG ] detect platform information from remote host
[ceph-2][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] activating host ceph-2 disk /mnt/sda
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph-2][INFO  ] Running command: ceph-disk -v activate --mark-init
sysvinit --mount /mnt/sda
[ceph-2][WARNIN] DEBUG:ceph-disk:Cluster uuid is
af23707d-325f-4846-bba9-b88ec953be80
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd
--cluster=ceph --show-config-value=fsid
[ceph-2][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[ceph-2][WARNIN] DEBUG:ceph-disk:OSD uuid is
ca9f6649-b4b8-46ce-a860-1d81eed4fd5e
[ceph-2][WARNIN] DEBUG:ceph-disk:Allocating OSD id...
[ceph-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster
ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/
ceph.keyring osd create --concise
ca9f6649-b4b8-46ce-a860-1d81eed4fd5e
[ceph-2][WARNIN] 2015-05-14 17:37:10.988914 7f373bd34700  0 librados:
client.bootstrap-osd authentication error (1) Operation not permitted
[ceph-2][WARNIN] Error connecting to cluster: PermissionError
[ceph-2][WARNIN] ceph-disk: Error: ceph osd create failed: Command
'/usr/bin/ceph' returned non-zero exit status 1:
[ceph-2][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v
activate --mark-init sysvinit --mount /mnt/sda

Error in sys.exitfunc:

I look forward to hearing from you soon.
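The key line above is the librados 'client.bootstrap-osd authentication
error (1) Operation not permitted': the bootstrap-osd keyring on ceph-2 is
missing or does not match what the monitor knows. A sketch of the usual
fix, run from the admin node's my-cluster directory (hostnames taken from
the output above; whether gatherkeys succeeds depends on the monitor having
formed a quorum):

```shell
# Re-gather keys from the monitor node; this writes
# ceph.bootstrap-osd.keyring into the current directory.
ceph-deploy gatherkeys ceph-0

# Push the bootstrap-osd key to the OSD host so ceph-disk can authenticate.
scp ceph.bootstrap-osd.keyring \
    ceph-2:/var/lib/ceph/bootstrap-osd/ceph.keyring

# Retry activation.
ceph-deploy osd activate ceph-2:/mnt/sda
```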

Re: [ceph-users] ceph-deploy osd activate error: AttributeError: 'module' object has no attribute 'logger' exception

2014-05-01 Thread Alfredo Deza
This is already marked as urgent and I am working on it. A point
release should be coming up as soon as possible.

I apologize for the bug.

The workaround would be to use 1.4.0 until the new version comes out.
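Assuming ceph-deploy was installed via pip (adjust for your package
manager if it came from distro packages), pinning the last known-good
release looks like:

```shell
# Replace the broken 1.5.0 with the previous release.
pip uninstall -y ceph-deploy
pip install ceph-deploy==1.4.0

# Confirm which version is now on the admin node.
ceph-deploy --version
```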

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



Re: [ceph-users] ceph-deploy osd activate error: AttributeError: 'module' object has no attribute 'logger' exception

2014-05-01 Thread Alfredo Deza
Victor, ceph-deploy v1.5.1 has been released with a fix that should
take care of your problem.
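Again assuming a pip-based install (use your distro's package manager if
you installed from the ceph.com repositories), the upgrade is:

```shell
pip install --upgrade ceph-deploy==1.5.1
ceph-deploy --version
```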



[ceph-users] ceph-deploy osd activate error: AttributeError: 'module' object has no attribute 'logger' exception

2014-04-30 Thread Victor Bayon
Hi all,
I am following the quick-ceph-deploy tutorial [1] and I am getting an
exception when running 'ceph-deploy osd activate'. See below [2].
I am following the quick tutorial step by step, except that
'ceph-deploy mon create-initial' does not seem to gather the keys and I
have to execute it manually with

ceph-deploy gatherkeys node01

I am using the same configuration as the tutorial, with:
- 1 admin node (myhost)
- 1 monitor node (node01)
- 2 OSD nodes (node02, node03)


I am on Ubuntu Server 12.04 LTS (precise) and using Ceph Emperor.


Any help greatly appreciated

Many thanks

Best regards

/V

[1] http://ceph.com/docs/master/start/quick-ceph-deploy/
[2] Error:
ceph@myhost:~/cluster$ ceph-deploy osd activate node02:/var/local/osd0
node03:/var/local/osd1
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.0): /usr/bin/ceph-deploy osd
activate node02:/var/local/osd0 node03:/var/local/osd1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks
node02:/var/local/osd0: node03:/var/local/osd1:
[node02][DEBUG ] connected to host: node02
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 12.04 precise
[ceph_deploy.osd][DEBUG ] activating host node02 disk /var/local/osd0
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[node02][INFO  ] Running command: sudo ceph-disk-activate --mark-init
upstart --mount /var/local/osd0
[node02][WARNIN] got latest monmap
[node02][WARNIN] 2014-04-30 19:36:30.268882 7f506fd07780 -1 journal
FileJournal::_open: disabling aio for non-block journal.  Use
journal_force_aio to force use of aio anyway
[node02][WARNIN] 2014-04-30 19:36:30.298239 7f506fd07780 -1 journal
FileJournal::_open: disabling aio for non-block journal.  Use
journal_force_aio to force use of aio anyway
[node02][WARNIN] 2014-04-30 19:36:30.301091 7f506fd07780 -1
filestore(/var/local/osd0) could not find 23c2fcde/osd_superblock/0//-1 in
index: (2) No such file or directory
[node02][WARNIN] 2014-04-30 19:36:30.307474 7f506fd07780 -1 created object
store /var/local/osd0 journal /var/local/osd0/journal for osd.0 fsid
76de3b72-44e3-47eb-8bd7-2b5b6e3666eb
[node02][WARNIN] 2014-04-30 19:36:30.307512 7f506fd07780 -1 auth: error
reading file: /var/local/osd0/keyring: can't open /var/local/osd0/keyring:
(2) No such file or directory
[node02][WARNIN] 2014-04-30 19:36:30.307547 7f506fd07780 -1 created new key
in keyring /var/local/osd0/keyring
[node02][WARNIN] added key for osd.0
Traceback (most recent call last):
  File /usr/bin/ceph-deploy, line 21, in module
sys.exit(main())
  File /usr/lib/python2.7/dist-packages/ceph_deploy/util/decorators.py,
line 62, in newfunc
return f(*a, **kw)
  File /usr/lib/python2.7/dist-packages/ceph_deploy/cli.py, line 147, in
main
return args.func(args)
  File /usr/lib/python2.7/dist-packages/ceph_deploy/osd.py, line 532, in
osd
activate(args, cfg)
  File /usr/lib/python2.7/dist-packages/ceph_deploy/osd.py, line 338, in
activate
catch_osd_errors(distro.conn, distro.logger, args)
AttributeError: 'module' object has no attribute 'logger'
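The last frame shows the failure mode: catch_osd_errors is handed
distro.logger, but the platform module in ceph-deploy 1.5.0 never assigns a
logger attribute. A minimal stand-in (not the real ceph_deploy code)
reproduces the same error class:

```python
import types

# Stand-in for a ceph_deploy platform module that, like the real ones in
# 1.5.0, defines `conn` but never assigns a `logger` attribute.
distro = types.ModuleType("distro")
distro.conn = object()

# The same attribute lookup that catch_osd_errors(distro.conn, distro.logger,
# args) performs; a missing module attribute raises AttributeError.
try:
    distro.logger
except AttributeError as exc:
    print(exc)  # module 'distro' has no attribute 'logger'
```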


Re: [ceph-users] ceph-deploy osd activate error: AttributeError: 'module' object has no attribute 'logger' exception

2014-04-30 Thread Mike Dawson

Victor,

This is a verified issue reported earlier today:

http://tracker.ceph.com/issues/8260

Cheers,
Mike

