Re: [ceph-users] Mounting with dmcrypt still fails

2014-03-23 Thread Michael Lukzak
Hi,

After looking at the code in ceph-disk I came to the same conclusion: the
problem is with the mapping.

Here is the relevant quote from ceph-disk:

def get_partition_dev(dev, pnum):
    """
    get the device name for a partition

    assume that partitions are named like the base dev, with a number, and
    optionally some intervening characters (like 'p').  e.g.,

       sda 1 -> sda1
       cciss/c0d1 1 -> cciss!c0d1p1
    """


The script looks for partitions named sdb[X] or ...p[X], where [X] is the
partition number (counted from 1). But dm-crypt creates new mappings in
/dev/mapper/, for example /dev/mapper/osd0 as the main block device, with
/dev/mapper/osd0p1 as the first partition and /dev/mapper/osd0p2 as the
second.

The real path of the osd0 device, however, is NOT /dev/mapper/osd0 but
/dev/dm-0 (sic!); likewise /dev/dm-1 is the first partition (osd0p1) and
/dev/dm-2 the second (osd0p2).

Conclusion: when dm-crypt is in use, the script in ceph-disk should not
look for partitions named like sda -> sda1 or osd0 -> osd0p1, but for the
real /dev/dm-X nodes (first partition dm-1, second dm-2, as below).

Device              Real path
/dev/mapper/osd0    /dev/dm-0    (block device)
/dev/mapper/osd0p1  /dev/dm-1    (first partition)
/dev/mapper/osd0p2  /dev/dm-2    (second partition)
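
To illustrate, a minimal sketch of how the lookup could special-case
device-mapper nodes (the helper name is hypothetical, this is not the
actual ceph-disk patch), assuming kpartx-style partition mappings that
show up as holders of the base dm device in sysfs:

import os

def get_dm_partition_dev(dev, pnum):
    """
    Hypothetical helper: resolve partition pnum of a /dev/mapper device
    to its real /dev/dm-X node by walking sysfs.
    """
    base = os.path.basename(os.path.realpath(dev))  # /dev/mapper/osd0 -> 'dm-0'
    name = os.path.basename(dev)                    # 'osd0'
    # partition mappings (osd0p1, ...) hold the base device, so they
    # appear in /sys/block/dm-0/holders as dm-1, dm-2, ...
    for holder in os.listdir('/sys/block/%s/holders' % base):
        with open('/sys/block/%s/dm/name' % holder) as f:
            holder_name = f.read().strip()          # e.g. 'osd0p1'
        if holder_name in ('%sp%d' % (name, pnum), '%s%d' % (name, pnum)):
            return '/dev/' + holder                 # real path, e.g. '/dev/dm-1'
    raise RuntimeError('partition %d for %s does not appear to exist'
                       % (pnum, dev))

The same sysfs walk yields exactly the real paths listed in the table
above.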

Continuing, 'ceph-disk activate' should mount dm-crypted partitions not
via /dev/disk/by-partuuid, but via /dev/disk/by-uuid.
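
A rough sketch of the difference (the device name and mount point are
illustrative, not taken from the logs in this thread): the filesystem
UUID lives on the decrypted mapper node, so by-uuid resolves to the dm
device, while by-partuuid points at the still-encrypted raw partition.

import subprocess

# a sketch -- '/dev/dm-1' and the mount point are illustrative
fsuid = subprocess.check_output(
    ['blkid', '-o', 'value', '-s', 'UUID', '/dev/dm-1']).strip().decode()
subprocess.check_call(
    ['mount', '/dev/disk/by-uuid/' + fsuid, '/var/lib/ceph/osd/ceph-0'])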

--
Best regards,
Michael Lukzak



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Mounting with dmcrypt still fails

2014-03-22 Thread Kyle Bader
> ceph-disk-prepare --fs-type xfs --dmcrypt --dmcrypt-key-dir
> /etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sdb
> ceph-disk: Error: Device /dev/sdb2 is in use by a device-mapper mapping
> (dm-crypt?): dm-0

It sounds like device-mapper still thinks it's using the volume;
you might be able to track it down with this:

for i in `ls -1 /sys/block/ | grep sd`; do echo $i: `ls /sys/block/$i/${i}1/holders/`; done

Then it's a matter of making sure there are no open file handles on
the encrypted volume and unmounting it. You will still need to
completely clear out the partition table on that disk, which can be
tricky with GPT because it's not as simple as dd'ing the start of the
volume (GPT keeps a backup copy at the end of the disk). This is what
the zapdisk parameter is for in ceph-disk-prepare; I don't know enough
about ceph-deploy to say whether you can pass it through.
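
For what it's worth, a sketch of that cleanup sequence (the mapping
name 'xxx' and /dev/sdb are placeholders for your setup):

import subprocess

subprocess.check_call(['umount', '/dev/mapper/xxx'])        # release the filesystem first
subprocess.check_call(['cryptsetup', 'remove', 'xxx'])      # tear down the plain dm-crypt mapping
subprocess.check_call(['sgdisk', '--zap-all', '/dev/sdb'])  # wipe primary and backup GPT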

After you know the device/dm mapping you can use udevadm to find out
where it should map to (uuids replaced with xxx's):

udevadm test /block/sdc/sdc1
snip
run: '/sbin/cryptsetup --key-file /etc/ceph/dmcrypt-keys/x
--key-size 256 create  /dev/sdc1'
run: '/bin/bash -c 'while [ ! -e /dev/mapper/x ];do sleep 1; done''
run: '/usr/sbin/ceph-disk-activate /dev/mapper/x'
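
Replayed by hand, those three udev-triggered steps amount to the
following sequence (a sketch; 'xxx' stands for the key/mapping name
elided in the output above):

import os
import subprocess
import time

uuid = 'xxx'  # the elided key file / mapping name
subprocess.check_call(
    ['/sbin/cryptsetup', '--key-file', '/etc/ceph/dmcrypt-keys/' + uuid,
     '--key-size', '256', 'create', uuid, '/dev/sdc1'])
while not os.path.exists('/dev/mapper/' + uuid):  # wait for the mapper node
    time.sleep(1)
subprocess.check_call(['/usr/sbin/ceph-disk-activate', '/dev/mapper/' + uuid])

Note that the wait loop is the same one udevd kills after a timeout in
the syslog excerpts further down the thread.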

-- 

Kyle
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Mounting with dmcrypt still fails

2014-03-17 Thread Michal Luczak
Hi,

I tried to use a whole new blank disk to create two separate partitions
(one for data and a second for the journal) with dmcrypt, but there is a
problem using it. It looks like there is a problem with mounting or
formatting the partitions.

OS is Ubuntu 13.04 with ceph v0.72 (emperor)

I used command:

ceph-deploy osd prepare ceph-node0:sdb --dmcrypt --dmcrypt-key-dir=/root 
--fs-type=xfs

[ceph-node0][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[ceph-node0][DEBUG ] Creating new GPT entries.
[ceph-node0][DEBUG ] Information: Moved requested sector from 34 to 2048 in
[ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph-node0][DEBUG ] Information: Moved requested sector from 10485761 to 
10487808 in
[ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph-node0][DEBUG ] meta-data=/dev/mapper/d92421e6-27c3-498a-9754-b5c10a281500 
isize=2048   agcount=4, agsize=720831 blks
[ceph-node0][DEBUG ]  =   sectsz=512   attr=2, 
projid32bit=0
[ceph-node0][DEBUG ] data =   bsize=4096   
blocks=2883323, imaxpct=25
[ceph-node0][DEBUG ]  =   sunit=0  swidth=0 blks
[ceph-node0][DEBUG ] naming   =version 2  bsize=4096   ascii-ci=0
[ceph-node0][DEBUG ] log  =internal log   bsize=4096   blocks=2560, 
version=2
[ceph-node0][DEBUG ]  =   sectsz=512   sunit=0 
blks, lazy-count=1
[ceph-node0][DEBUG ] realtime =none   extsz=4096   blocks=0, 
rtextents=0
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][DEBUG ] Host ceph-node0 is now ready for osd use.

Here it looks like all went well, but on the host ceph-node0 (where
disk sdb is) there is a problem. Here is a dump from syslog (on
ceph-node0):

Mar 17 14:03:02 ceph-node0 kernel: [   68.645938] sd 2:0:1:0: [sdb] Cache data 
unavailable
Mar 17 14:03:02 ceph-node0 kernel: [   68.645943] sd 2:0:1:0: [sdb] Assuming 
drive cache: write through
Mar 17 14:03:02 ceph-node0 kernel: [   68.708930]  sdb: sdb1 sdb2
Mar 17 14:03:02 ceph-node0 kernel: [   68.996013] bio: create slab bio-1 at 1
Mar 17 14:03:03 ceph-node0 kernel: [   69.613407] SGI XFS with ACLs, security 
attributes, realtime, large block/inode numbers, no debug enabled
Mar 17 14:03:03 ceph-node0 kernel: [   69.619904] XFS (dm-0): Mounting 
Filesystem
Mar 17 14:03:03 ceph-node0 kernel: [   69.658693] XFS (dm-0): Ending clean mount
Mar 17 14:03:04 ceph-node0 kernel: [   70.745337] sd 2:0:1:0: [sdb] Cache data 
unavailable
Mar 17 14:03:04 ceph-node0 kernel: [   70.745342] sd 2:0:1:0: [sdb] Assuming 
drive cache: write through
Mar 17 14:03:04 ceph-node0 kernel: [   70.750667]  sdb: sdb1 sdb2
Mar 17 14:04:05 ceph-node0 udevd[515]: timeout: killing '/bin/bash -c 'while [ 
! -e /dev/mapper/d92421e6-27c3-498a-9754-b5c10a281500 ];do sleep 1; done'' 
[1903]
Mar 17 14:04:05 ceph-node0 udevd[515]: '/bin/bash -c 'while [ ! -e 
/dev/mapper/d92421e6-27c3-498a-9754-b5c10a281500 ];do sleep 1; done'' [1903] 
terminated by signal 9 (Killed)
Mar 17 14:05:07 ceph-node0 udevd[515]: timeout: killing '/bin/bash -c 'while [ 
! -e /dev/mapper/d92421e6-27c3-498a-9754-b5c10a281500 ];do sleep 1; done'' 
[2215]
Mar 17 14:05:07 ceph-node0 udevd[515]: '/bin/bash -c 'while [ ! -e 
/dev/mapper/d92421e6-27c3-498a-9754-b5c10a281500 ];do sleep 1; done'' [2215] 
terminated by signal 9 (Killed)

Two partitions (sdb1 and sdb2) are created, but it looks like there is
a problem with mounting or formatting them; I can't figure it out.

parted shows that sdb1 and sdb2 exist, but the "File system" column is
empty:

Number  Start   End     Size    File system  Name
 2      1049kB  5369MB  5368MB               ceph journal
 1      5370MB  17,2GB  11,8GB               ceph data

Keys for dmcrypt are stored in /root

So let's try without the --dmcrypt switch:
ceph-deploy osd prepare ceph-node0:sdb --fs-type=xfs
[ceph_deploy.cli][INFO  ] Invoked (1.3.5): /usr/bin/ceph-deploy osd prepare 
ceph-node0:sdb --fs-type=xfs
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-node0:/dev/sdb:
[ceph-node0][DEBUG ] connected to host: ceph-node0
[ceph-node0][DEBUG ] detect platform information from remote host
[ceph-node0][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 13.04 raring
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node0
[ceph-node0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node0][INFO  ] Running command: udevadm trigger --subsystem-match=block 
--action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph-node0 disk /dev/sdb journal None 
activate False
[ceph-node0][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --cluster 
ceph -- /dev/sdb
[ceph-node0][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[ceph-node0][DEBUG ] Information: Moved requested sector from 34 to 2048 in
[ceph-node0][DEBUG 

Re: [ceph-users] Mounting with dmcrypt still fails

2014-03-17 Thread Michael Lukzak
Hi again,

I used another host for the osd (with the same name), but now with Debian 7.4.

ceph-deploy osd prepare ceph-node0:sdb --dmcrypt

[ceph_deploy.cli][INFO  ] Invoked (1.3.5): /usr/bin/ceph-deploy osd prepare 
ceph-node0:sdb --dmcrypt
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-node0:/dev/sdb:
[ceph-node0][DEBUG ] connected to host: ceph-node0
[ceph-node0][DEBUG ] detect platform information from remote host
[ceph-node0][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: debian 7.4 wheezy
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node0
[ceph-node0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node0][WARNIN] osd keyring does not exist yet, creating one
[ceph-node0][DEBUG ] create a keyring file
[ceph-node0][INFO  ] Running command: udevadm trigger --subsystem-match=block 
--action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph-node0 disk /dev/sdb journal None 
activate False
[ceph-node0][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --dmcrypt 
--dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sdb
[ceph-node0][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[ceph-node0][WARNIN] ceph-disk: Error: partition 1 for /dev/sdb does not appear 
to exist
[ceph-node0][DEBUG ] Information: Moved requested sector from 34 to 2048 in
[ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph-node0][DEBUG ] Information: Moved requested sector from 10485761 to 
10487808 in
[ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
[ceph-node0][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph-node0][DEBUG ] The new table will be used at the next reboot.
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph-node0][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare 
--fs-type xfs --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph 
-- /dev/sdb
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs


The subcommand that fails on the OSD host is ceph-disk-prepare.

I ran this command on ceph-node0, and...

ceph-disk-prepare --fs-type xfs --dmcrypt --dmcrypt-key-dir 
/etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sdb
ceph-disk: Error: Device /dev/sdb2 is in use by a device-mapper mapping 
(dm-crypt?): dm-0

I can reproduce this error with a 100% guarantee.
I used Debian 7.4 and Ubuntu 13.04 for the tests.

Best Regards,
Michael Lukzak

