Hi all,
I am following the "quick-ceph-deploy" tutorial [1] and I am getting an
exception when running "ceph-deploy osd activate". See the full output below [2].
I am following the quick tutorial step by step, except that
"ceph-deploy mon create-initial" does not seem to gather the keys, so I have
to run

ceph-deploy gatherkeys node01

manually.

I am using the same configuration as the tutorial:
- 1 admin node (myhost)
- 1 monitor node (node01)
- 2 OSD nodes (node02, node03)


I am on Ubuntu Server 12.04 LTS (precise) and using Ceph "emperor".
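
For reference, this is roughly the sequence of commands I ran, following the
quick-start with my hostnames and the directory-based OSD paths from the
tutorial (reconstructed from memory, so the exact install targets and order
may differ slightly):

ceph-deploy new node01
ceph-deploy install myhost node01 node02 node03
ceph-deploy mon create-initial
ceph-deploy gatherkeys node01      # had to run this manually, as noted above
ssh node02 sudo mkdir /var/local/osd0
ssh node03 sudo mkdir /var/local/osd1
ceph-deploy osd prepare node02:/var/local/osd0 node03:/var/local/osd1
ceph-deploy osd activate node02:/var/local/osd0 node03:/var/local/osd1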


Any help greatly appreciated

Many thanks

Best regards

/V

[1] http://ceph.com/docs/master/start/quick-ceph-deploy/
[2] Error:
ceph@myhost:~/cluster$ ceph-deploy osd activate node02:/var/local/osd0 node03:/var/local/osd1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.0): /usr/bin/ceph-deploy osd activate node02:/var/local/osd0 node03:/var/local/osd1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks node02:/var/local/osd0: node03:/var/local/osd1:
[node02][DEBUG ] connected to host: node02
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 12.04 precise
[ceph_deploy.osd][DEBUG ] activating host node02 disk /var/local/osd0
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[node02][INFO  ] Running command: sudo ceph-disk-activate --mark-init upstart --mount /var/local/osd0
[node02][WARNIN] got latest monmap
[node02][WARNIN] 2014-04-30 19:36:30.268882 7f506fd07780 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
[node02][WARNIN] 2014-04-30 19:36:30.298239 7f506fd07780 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
[node02][WARNIN] 2014-04-30 19:36:30.301091 7f506fd07780 -1 filestore(/var/local/osd0) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
[node02][WARNIN] 2014-04-30 19:36:30.307474 7f506fd07780 -1 created object store /var/local/osd0 journal /var/local/osd0/journal for osd.0 fsid 76de3b72-44e3-47eb-8bd7-2b5b6e3666eb
[node02][WARNIN] 2014-04-30 19:36:30.307512 7f506fd07780 -1 auth: error reading file: /var/local/osd0/keyring: can't open /var/local/osd0/keyring: (2) No such file or directory
[node02][WARNIN] 2014-04-30 19:36:30.307547 7f506fd07780 -1 created new key in keyring /var/local/osd0/keyring
[node02][WARNIN] added key for osd.0
Traceback (most recent call last):
  File "/usr/bin/ceph-deploy", line 21, in <module>
    sys.exit(main())
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/util/decorators.py", line 62, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/cli.py", line 147, in main
    return args.func(args)
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/osd.py", line 532, in osd
    activate(args, cfg)
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/osd.py", line 338, in activate
    catch_osd_errors(distro.conn, distro.logger, args)
AttributeError: 'module' object has no attribute 'logger'
