Hi All,

I have been rolling out an infernalis cluster, but I get stuck at the
ceph-disk prepare stage.

I am deploying ceph via ansible along with a whole load of other software.

Log output is at the end of the message, but the solution is to copy the
"/lib/systemd/system/ceph-osd@.service" file to
"/etc/systemd/system/ceph-osd@<osd_num>.service" in the ceph_disk.sh wrapper script.

Without doing this, I need to somehow grab the number of the OSD that is going
to be created and copy the file before ceph-disk runs '/usr/bin/systemctl',
'enable', 'ceph-osd@<osd_num>'.

Whilst this works OK if I am doing everything serially, the ideal is to be
able to spin up a cluster asynchronously.
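
To make that concrete, here is a rough sketch of what the wrapper would have to
do before calling ceph-disk (the next-id guess and the /dev/sde device are just
illustrative, and guessing max+1 is exactly what falls apart when several OSDs
are being prepared at once):

  # Guess the next OSD id as (current max id + 1) -- only safe when
  # preparing one OSD at a time across the whole cluster.
  NEXT_ID=$(( $(ceph osd ls | sort -n | tail -1) + 1 ))

  # Put a per-instance copy of the template unit where
  # 'systemctl enable ceph-osd@<osd_num>' expects to find it.
  cp /lib/systemd/system/ceph-osd@.service \
     "/etc/systemd/system/ceph-osd@${NEXT_ID}.service"

  # Then let ceph-disk carry on as normal.
  ceph-disk prepare /dev/sde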


Unless there’s something I’m missing?



Cheers,
Bryn

Log lines from ansible below.


failed: [tapir5.eng.velocix.com -> 127.0.0.1] => (item=tapir5.eng.velocix.com) 
=> {"changed": false, "cmd": ["ssh", "r...@tapir5.eng.velocix.com", 
"/tmp/ceph_disk.sh 5 ceph 749cee00-c818-4abc-90ee-7b1193c2a8b9"], "delta": 
"0:00:24.432348", "end": "2015-10-30 16:36:03.214603", "failed": true, 
"failed_when_result": true, "item": "tapir5.eng.velocix.com", "rc": 1, "start": 
"2015-10-30 16:35:38.782255", "stdout_lines": ["/dev/sdb1 on 
/var/lib/ceph/osd/ceph-0 type xfs 
(rw,noatime,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)", 
"/dev/sdc1 on /var/lib/ceph/osd/ceph-2 type xfs 
(rw,noatime,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)", 
"/dev/sdd1 on /var/lib/ceph/osd/ceph-4 type xfs 
(rw,noatime,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)", 
"Creating new GPT entries.", "GPT data structures destroyed! You may now 
partition the disk using fdisk or", "other utilities.", "Creating new GPT 
entries.", "The operation has completed successfully.", "The operation has 
completed successfully.", "The operation has completed successfully.", 
"meta-data=/dev/sde1              isize=2048   agcount=32, agsize=45744064 
blks", "         =                       sectsz=512   attr=2, projid32bit=1", " 
        =                       crc=0        finobt=0", "data     =             
          bsize=4096   blocks=1463810048, imaxpct=5", "         =               
        sunit=64     swidth=64 blks", "naming   =version 2              
bsize=4096   ascii-ci=0 ftype=0", "log      =internal log           bsize=4096  
 blocks=521728, version=2", "         =                       sectsz=512   
sunit=64 blks, lazy-count=1", "realtime =none                   extsz=4096   
blocks=0, rtextents=0", "The operation has completed successfully."], 
"warnings": []}
stderr: Failed to issue method call: No such file or directory
ceph-disk: Error: ceph osd start failed: Command '['/usr/bin/systemctl', 
'enable', 'ceph-osd@9']' returned non-zero exit status 1
stdout: /dev/sdb1 on /var/lib/ceph/osd/ceph-0 type xfs 
(rw,noatime,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
/dev/sdc1 on /var/lib/ceph/osd/ceph-2 type xfs 
(rw,noatime,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
/dev/sdd1 on /var/lib/ceph/osd/ceph-4 type xfs 
(rw,noatime,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
The operation has completed successfully.
The operation has completed successfully.
meta-data=/dev/sde1              isize=2048   agcount=32, agsize=45744064 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=1463810048, imaxpct=5
         =                       sunit=64     swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
The operation has completed successfully.
