Re: [ceph-users] Can't activate OSD with journal and data on the same disk

2013-11-08 Thread Gregory Farnum
I made a ticket for this: http://tracker.ceph.com/issues/6740
Thanks for the bug report!
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com



Re: [ceph-users] Can't activate OSD with journal and data on the same disk

2013-11-08 Thread Michael Lukzak
Hi,

Some news: I tried activating the disk without --dmcrypt and there is no problem. After
activation there are two partitions on sdb (sdb2 for the journal and sdb1 for the data).

In my opinion there is a bug in the --dmcrypt switch when the journal is colocated on the
disk (the partitions are created, but the mounting done by ceph-disk fails).

Here are the logs without --dmcrypt:

root@ceph-deploy:~/ceph# ceph-deploy osd prepare ceph-node0:/dev/sdb
[ceph_deploy.cli][INFO  ] Invoked (1.3.1): /usr/bin/ceph-deploy osd prepare 
ceph-node0:/dev/sdb
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-node0:/dev/sdb:
[ceph-node0][DEBUG ] connected to host: ceph-node0
[ceph-node0][DEBUG ] detect platform information from remote host
[ceph-node0][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 13.04 raring
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node0
[ceph-node0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node0][INFO  ] Running command: udevadm trigger --subsystem-match=block 
--action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph-node0 disk /dev/sdb journal None 
activate False
[ceph-node0][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --cluster 
ceph -- /dev/sdb
[ceph-node0][ERROR ] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[ceph-node0][DEBUG ] Information: Moved requested sector from 34 to 2048 in
[ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph-node0][DEBUG ] Information: Moved requested sector from 2097153 to 
2099200 in
[ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph-node0][DEBUG ] meta-data=/dev/sdb1  isize=2048   agcount=4, 
agsize=917439 blks
[ceph-node0][DEBUG ]  =   sectsz=512   attr=2, 
projid32bit=0
[ceph-node0][DEBUG ] data =   bsize=4096   
blocks=3669755, imaxpct=25
[ceph-node0][DEBUG ]  =   sunit=0  swidth=0 blks
[ceph-node0][DEBUG ] naming   =version 2  bsize=4096   ascii-ci=0
[ceph-node0][DEBUG ] log  =internal log   bsize=4096   blocks=2560, 
version=2
[ceph-node0][DEBUG ]  =   sectsz=512   sunit=0 
blks, lazy-count=1
[ceph-node0][DEBUG ] realtime =none   extsz=4096   blocks=0, 
rtextents=0
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][DEBUG ] Host ceph-node0 is now ready for osd use.

The disk is properly activated. With --dmcrypt the journal partition is not
properly mounted and ceph-disk cannot use it.
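
For what it's worth, a quick way to check whether ceph-disk actually set up the
dm-crypt mappings after a --dmcrypt prepare (the key directory is the one passed
to ceph-disk-prepare via --dmcrypt-key-dir; the mapping name is a placeholder, as
I am not sure of the exact naming ceph-disk uses):

root@ceph-node0:~# ls /etc/ceph/dmcrypt-keys      # key files written by ceph-disk-prepare
root@ceph-node0:~# dmsetup ls                     # list all device-mapper mappings
root@ceph-node0:~# ls /dev/mapper                 # mapped devices, if any were created
root@ceph-node0:~# cryptsetup status <mapping>    # details for a given mapping

If nothing shows up here for the journal partition, that would match the mounting
failure described above.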

Best Regards,
Michael



[ceph-users] Can't activate OSD with journal and data on the same disk

2013-11-07 Thread Michael Lukzak
Hi!

I have a question about activating an OSD on a whole disk. I can't get past this issue.
Configuration: 8 VMs - ceph-deploy, ceph-admin, ceph-mon0-2 and ceph-node0-2.

I started by creating the MONs - all good.
After that I wanted to prepare and activate 3 OSDs with dm-crypt.

So I put this in ceph.conf:

[osd.0]
host = ceph-node0
cluster addr = 10.0.0.75:6800
public addr = 10.0.0.75:6801
devs = /dev/sdb

Next I used ceph-deploy to prepare an OSD, and this is what it shows:

root@ceph-deploy:~/ceph# ceph-deploy osd prepare ceph-node0:/dev/sdb --dmcrypt
[ceph_deploy.cli][INFO  ] Invoked (1.3.1): /usr/bin/ceph-deploy osd prepare 
ceph-node0:/dev/sdb --dmcrypt
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-node0:/dev/sdb:
[ceph-node0][DEBUG ] connected to host: ceph-node0
[ceph-node0][DEBUG ] detect platform information from remote host
[ceph-node0][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 13.04 raring
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node0
[ceph-node0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node0][INFO  ] Running command: udevadm trigger --subsystem-match=block 
--action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph-node0 disk /dev/sdb journal None 
activate False
[ceph-node0][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --dmcrypt 
--dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sdb
[ceph-node0][ERROR ] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[ceph-node0][ERROR ] ceph-disk: Error: partition 1 for /dev/sdb does not appear 
to exist
[ceph-node0][DEBUG ] Information: Moved requested sector from 34 to 2048 in
[ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph-node0][DEBUG ] Information: Moved requested sector from 2097153 to 
2099200 in
[ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
[ceph-node0][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph-node0][DEBUG ] The new table will be used at the next reboot.
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph-node0][ERROR ] Traceback (most recent call last):
[ceph-node0][ERROR ]   File 
"/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/process.py", line 68, 
in run
[ceph-node0][ERROR ] reporting(conn, result, timeout)
[ceph-node0][ERROR ]   File 
"/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/log.py", line 13, in 
reporting
[ceph-node0][ERROR ] received = result.receive(timeout)
[ceph-node0][ERROR ]   File 
"/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py",
 line 455, in receive
[ceph-node0][ERROR ] raise self._getremoteerror() or EOFError()
[ceph-node0][ERROR ] RemoteError: Traceback (most recent call last):
[ceph-node0][ERROR ]   File "", line 806, in executetask
[ceph-node0][ERROR ]   File "", line 35, in _remote_run
[ceph-node0][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph-node0][ERROR ]
[ceph-node0][ERROR ]
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare 
--fs-type xfs --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph 
-- /dev/sdb
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs


It looks like ceph-disk-prepare can't mount (activate?) one of the partitions.
So I went to ceph-node0 and listed the disks, which shows:

root@ceph-node0:~# ls /dev/sd*
sda   sda1  sda2  sda5  sdb   sdb2

Oops - there is no sdb1.

So I printed all the partitions on /dev/sdb, and there are two:

Number  Beg     End     Size    Filesystem  Name          Flags
 2      1049kB  1074MB  1073MB              ceph journal
 1      1075MB  16,1GB  15,0GB              ceph data

So sdb1 should hold the data and sdb2 the journal.
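
For reference, the kernel can usually be told to re-read the new partition table
without a reboot; something like the following (standard parted/util-linux
commands, nothing ceph-specific) should make /dev/sdb1 appear:

root@ceph-node0:~# partprobe /dev/sdb   # ask the kernel to re-read the partition table
root@ceph-node0:~# partx -a /dev/sdb    # or: register the new partitions explicitly
root@ceph-node0:~# udevadm settle       # wait for udev to create the device nodes
root@ceph-node0:~# ls /dev/sdb*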

When I restart the VM, /dev/sdb1 shows up:
root@ceph-node0:~# ls /dev/sd*
sda   sda1  sda2  sda5  sdb   sdb1   sdb2
But I still can't mount it.
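
That is probably expected with --dmcrypt: sdb1 holds dm-crypt data, so the raw
partition cannot be mounted directly; the mapping has to be opened first (and in
this failed run the filesystem may never have been created at all). A rough sketch,
assuming ceph-disk used plain dm-crypt with a key file under /etc/ceph/dmcrypt-keys
(the key file and mapping names below are placeholders):

root@ceph-node0:~# cryptsetup --key-file /etc/ceph/dmcrypt-keys/<key-file> create osd-data-test /dev/sdb1
root@ceph-node0:~# mount /dev/mapper/osd-data-test /mnt/test   # mount the mapping, not /dev/sdb1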

When I put the journal on a separate file/disk, there is no problem with activating
(the journal is on a separate disk, and all the data is on sdb1).
Here is the log from this action (I put the journal in a file under /mnt/sdb2):

root@ceph-deploy:~/ceph# ceph-deploy osd prepare ceph-node0:/dev/sdb:/mnt/sdb2 
--dmcrypt
[ceph_deploy.cli][INFO  ] Invoked (1.3.1): /usr/bin/ceph-deploy osd prepare 
ceph-node0:/dev/sdb:/mnt/sdb2 --dmcrypt
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks 
ceph-node0:/dev/sdb:/mnt/sdb2
[ceph-node0][DEBUG ] connected to host: ceph-node0
[ceph-node0][DEBUG ] detect platform information from remote host
[ceph-node0][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 13.04 raring
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node0
[ceph-node0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node0][INFO  ] Running command: udevadm trigger --subsystem-match=block 
--action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph-node0 disk /dev/sdb journal 
/mnt/sdb2