Re: [ceph-users] NGINX and 100-Continue

2014-05-29 Thread Michael Lukzak
Hi,

Yes, I read that. I responded with a new question.
I tried tengine, but my problem was not solved.
I can't find a solution.

Don't you see this problem in your setup?
I'm using boto (an S3 client); when the file upload reaches 100%,
boto hangs after the PUT /[file] request, which is logged with status 100, for example:

IP_REMOVED - - [29/May/2014:11:27:53 +] PUT /foo.bar HTTP/1.1 100 0 - 
Boto/2.27.0 Python/2.7.6 Linux/3.13.0-24-generic

The transfer only ends when boto hits its timeout.
When I switch to Apache there is no problem with this.

Some sources on the Internet suggest this might be related to mod_fastcgi
(plain fcgi can't handle this; only mod_fastcgi can handle HTTP 100-Continue). I can't
figure out how this relates to the fastcgi module in nginx.
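For reference, this is roughly how the upload is driven from boto 2.x (a minimal
sketch; the endpoint, credentials and bucket name below are placeholders, not my
real values):

import boto
import boto.s3.connection

# Placeholder credentials and endpoint -- substitute your own radosgw values.
conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='cli.rados',                       # the vhost from the nginx config below
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.get_bucket('test-bucket')     # assumes the bucket already exists
key = bucket.new_key('foo.bar')
# boto uses "Expect: 100-continue" for this PUT; behind nginx it hangs here
# until the client-side timeout fires.
key.set_contents_from_filename('/tmp/foo.bar')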

Michael



Don't use nginx.  The current version buffers all the uploads to the local 
disk, which causes all sorts of problems with radosgw (timeouts, clock skew 
errors, etc).  Use tengine instead (or apache).  I sent the mailing list some 
info on tengine a couple weeks ago.

On 5/29/2014 6:11 AM, Michael Lukzak wrote:
Hi,

I have a question about nginx and 100-Continue.
If I use a client like boto or Cyberduck, everything works fine
until the upload reaches 100%; then the progress bar hangs and
after about 30 s Cyberduck reports that the HTTP 100-Continue
request timed out.

I use nginx v1.4.1.
Only when I use Apache 2 with fastcgi does everything work.

Does anyone have a solution?

My nginx config is below; it talks to the radosgw socket.

server {
    listen 80;
    server_name cli.rados *.cli.rados;
    server_tokens off;

    client_max_body_size 1g;

    location / {
        fastcgi_pass_header Authorization;
        fastcgi_pass_request_headers on;
        if ($request_method = PUT) {
            rewrite ^ /PUT$request_uri;
        }
        include fastcgi_params;
        client_max_body_size 0;
        fastcgi_busy_buffers_size 512k;
        fastcgi_buffer_size 512k;
        fastcgi_buffers 16 512k;
        fastcgi_read_timeout 2s;
        fastcgi_send_timeout 1s;
        fastcgi_connect_timeout 1s;
        fastcgi_next_upstream error timeout http_500 http_503;

        fastcgi_pass unix:/var/run/radosgw.sock;
    }

    location /PUT/ {
        internal;
        fastcgi_pass_header Authorization;
        fastcgi_pass_request_headers on;
        include fastcgi_params;
        client_max_body_size 0;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_busy_buffers_size 512k;
        fastcgi_buffer_size 512k;
        fastcgi_buffers 16 512k;

        fastcgi_pass unix:/var/run/radosgw.sock;
    }
    # end of server block
}

Best Regards,
Michael 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




Re: [ceph-users] nginx (tengine) and radosgw

2014-05-29 Thread Michael Lukzak
Hi,

Oops, so I didn't read the docs carefully...
I will try this solution.

Thanks!

Michael


From the docs, you need this setting in ceph.conf (if you're using 
nginx/tengine):

rgw print continue = false

This will fix the 100-continue issues.
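For reference, in ceph.conf that setting goes under your radosgw client section,
something like this (the section name below is just the common convention from the
docs and may differ in your setup):

[client.radosgw.gateway]
    rgw print continue = false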

On 5/29/2014 5:56 AM, Michael Lukzak wrote:
Hi,

I also use tengine; it works fine with SSL (I have a wildcard certificate).
But I have another issue, with HTTP 100-Continue.
Clients like boto or Cyberduck hang if they can't complete the HTTP 100-Continue exchange.

IP_REMOVED - - [29/May/2014:11:27:53 +] PUT 
/temp/1b6f6a11d7aa188f06f8255fdf0345b4 HTTP/1.1 100 0 - Boto/2.27.0 
Python/2.7.6 Linux/3.13.0-24-generic

Do you also have this problem?
I tested the original nginx as well, and it has the same 100-Continue problem.
Only Apache 2.x works fine.

BR,
Michael



I haven't tried SSL yet.  We currently don't have a wildcard certificate for 
this, so it hasn't been a concern (and in our current use case, all the files are 
public anyway).

On 5/20/2014 4:26 PM, Andrei Mikhailovsky wrote:


That looks very interesting indeed. I've tried to use nginx, but from what I 
recall it had some SSL-related issues. Have you tried to make SSL work so 
that nginx acts as an SSL proxy in front of radosgw?

Cheers

Andrei


From: Brian Rak b...@gameservers.com
To: ceph-users@lists.ceph.com
Sent: Tuesday, 20 May, 2014 9:11:58 PM
Subject: [ceph-users] nginx (tengine) and radosgw

I've just finished converting from nginx/radosgw to tengine/radosgw, and 
it's fixed all the weird issues I was seeing (uploads failing, random 
clock skew errors, timeouts).

The problem with nginx and radosgw is that nginx insists on buffering 
all the uploads to disk.  This causes a significant performance hit, and 
prevents larger uploads from working. Supposedly, there is going to be 
an option in nginx to disable this, but it hasn't been released yet (nor 
do I see anything on the nginx devel list about it).

tengine ( http://tengine.taobao.org/ ) is an nginx fork that implements 
unbuffered uploads to fastcgi.  It's basically a drop-in replacement for 
nginx.

My configuration looks like this:

server {
listen 80;

server_name *.rados.test rados.test;

client_max_body_size 10g;
# This is the important option that tengine has, but nginx does not
fastcgi_request_buffering off;

location / {
fastcgi_pass_header Authorization;
fastcgi_pass_request_headers on;

if ($request_method  = PUT ) {
  rewrite ^ /PUT$request_uri;
}
include fastcgi_params;

fastcgi_pass unix:/path/to/ceph.radosgw.fastcgi.sock;
}

location /PUT/ {
internal;
fastcgi_pass_header Authorization;
fastcgi_pass_request_headers on;

include fastcgi_params;
fastcgi_param  CONTENT_LENGTH   $content_length;

fastcgi_pass unix:/path/to/ceph.radosgw.fastcgi.sock;
}
}


If anyone else is looking to run radosgw without having to run Apache, I 
would recommend looking into tengine :)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Mounting with dmcrypt still fails

2014-03-23 Thread Michael Lukzak
Hi,

After looking at the code in ceph-disk I came to the same conclusion: the problem is 
with the mapping.

Here is a quote from ceph-disk:

def get_partition_dev(dev, pnum):
    """
    get the device name for a partition

    assume that partitions are named like the base dev, with a number, and
    optionally some intervening characters (like 'p').  e.g.,

       sda 1 -> sda1
       cciss/c0d1 1 -> cciss!c0d1p1
    """


The script looks for partitions named like sdb[X] or p[X], where [X]
is the partition number (counting from 1).
dm-crypt, however, creates new mappings in /dev/mapper/, for example
/dev/mapper/osd0 as the main block device, /dev/mapper/osd0p1 as the first partition
and /dev/mapper/osd0p2 as the second partition.

But the real path of the osd0 device is NOT /dev/mapper/osd0 but /dev/dm-0 (sic!), and
/dev/dm-1 is the first partition (osd0p1), /dev/dm-2 the second partition
(osd0p2).

Conclusion: when dm-crypt is used, the ceph-disk script should not look for
partitions named like sda partition 1 -> sda1 or osd0 partition 1 -> osd0p1, but for
partitions named /dev/dm-X (counted from 1).

Block device          Real path
/dev/mapper/osd0   -> /dev/dm-0

First partition       Real path
/dev/mapper/osd0p1 -> /dev/dm-1

Second partition      Real path
/dev/mapper/osd0p2 -> /dev/dm-2

Following on from that, 'ceph-disk activate' should mount dm-crypted partitions not 
via /dev/disk/by-partuuid but via /dev/disk/by-uuid.
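
To make the mapping concrete, a quick sketch (the mapper names are just the
examples from above, not anything ceph-disk prints itself): the /dev/mapper
entries are symlinks, so resolving them shows the /dev/dm-X nodes that
ceph-disk would actually have to handle.

import os

for name in ('osd0', 'osd0p1', 'osd0p2'):
    mapper_path = os.path.join('/dev/mapper', name)
    # /dev/mapper/* entries are symlinks to the real nodes, e.g. /dev/dm-0, /dev/dm-1, ...
    print('%-20s -> %s' % (mapper_path, os.path.realpath(mapper_path)))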

--
Best regards,
Michael Lukzak



 ceph-disk-prepare --fs-type xfs --dmcrypt --dmcrypt-key-dir 
 /etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sdb
 ceph-disk: Error: Device /dev/sdb2 is in use by a device-mapper mapping 
 (dm-crypt?): dm-0

 It sounds like device-mapper still thinks it's using the volume,
 you might be able to track it down with this:

 for i in `ls -1 /sys/block/ | grep sd`; do echo $i: `ls
 /sys/block/$i/${i}1/holders/`; done

 Then it's a matter of making sure there are no open file handles on
 the encrypted volume and unmounting it. You will still need to
 completely clear out the partition table on that disk, which can be
 tricky with GPT because it's not as simple as dd'ing the start of the
 volume. This is what the zapdisk parameter is for in
 ceph-disk-prepare; I don't know enough about ceph-deploy to know
 whether you can pass it through somehow.

 After you know the device/dm mapping you can use udevadm to find out
 where it should map to (uuids replaced with xxx's):

 udevadm test /block/sdc/sdc1
 snip
 run: '/sbin/cryptsetup --key-file /etc/ceph/dmcrypt-keys/x
 --key-size 256 create  /dev/sdc1'
 run: '/bin/bash -c 'while [ ! -e /dev/mapper/x ];do sleep 1; done''
 run: '/usr/sbin/ceph-disk-activate /dev/mapper/x'


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Mounting with dmcrypt still fails

2014-03-17 Thread Michael Lukzak
Hi again,

I used another host for the OSD (with the same name), but this time with Debian 7.4:

ceph-deploy osd prepare ceph-node0:sdb --dmcrypt

[ceph_deploy.cli][INFO  ] Invoked (1.3.5): /usr/bin/ceph-deploy osd prepare 
ceph-node0:sdb --dmcrypt
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-node0:/dev/sdb:
[ceph-node0][DEBUG ] connected to host: ceph-node0
[ceph-node0][DEBUG ] detect platform information from remote host
[ceph-node0][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: debian 7.4 wheezy
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node0
[ceph-node0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node0][WARNIN] osd keyring does not exist yet, creating one
[ceph-node0][DEBUG ] create a keyring file
[ceph-node0][INFO  ] Running command: udevadm trigger --subsystem-match=block 
--action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph-node0 disk /dev/sdb journal None 
activate False
[ceph-node0][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --dmcrypt 
--dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sdb
[ceph-node0][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[ceph-node0][WARNIN] ceph-disk: Error: partition 1 for /dev/sdb does not appear 
to exist
[ceph-node0][DEBUG ] Information: Moved requested sector from 34 to 2048 in
[ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph-node0][DEBUG ] Information: Moved requested sector from 10485761 to 
10487808 in
[ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
[ceph-node0][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph-node0][DEBUG ] The new table will be used at the next reboot.
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph-node0][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare 
--fs-type xfs --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph 
-- /dev/sdb
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs


The subcommand that fails on the OSD host is ceph-disk-prepare.

I ran the command manually on ceph-node0, and...

ceph-disk-prepare --fs-type xfs --dmcrypt --dmcrypt-key-dir 
/etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sdb
ceph-disk: Error: Device /dev/sdb2 is in use by a device-mapper mapping 
(dm-crypt?): dm-0

I can reproduce this error every single time.
I tested with Debian 7.4 and Ubuntu 13.04.

Best Regards,
Michael Lukzak


Hi,

I tried to use a whole new blank disk to create two separate partitions (one for 
data and one for the journal) with dmcrypt, but there is a problem with this. 
It looks like there is a problem with mounting or formatting the partitions.

OS is Ubuntu 13.04 with ceph v0.72 (emperor)

I used command:

ceph-deploy osd prepare ceph-node0:sdb --dmcrypt --dmcrypt-key-dir=/root 
--fs-type=xfs

[ceph-node0][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[ceph-node0][DEBUG ] Creating new GPT entries.
[ceph-node0][DEBUG ] Information: Moved requested sector from 34 to 2048 in
[ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph-node0][DEBUG ] Information: Moved requested sector from 10485761 to 
10487808 in
[ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph-node0][DEBUG ] meta-data=/dev/mapper/d92421e6-27c3-498a-9754-b5c10a281500 
isize=2048   agcount=4, agsize=720831 blks
[ceph-node0][DEBUG ]  =   sectsz=512   attr=2, 
projid32bit=0
[ceph-node0][DEBUG ] data =   bsize=4096   
blocks=2883323, imaxpct=25
[ceph-node0][DEBUG ]  =   sunit=0  swidth=0 blks
[ceph-node0][DEBUG ] naming   =version 2  bsize=4096   ascii-ci=0
[ceph-node0][DEBUG ] log  =internal log   bsize=4096   blocks=2560, 
version=2
[ceph-node0][DEBUG ]  =   sectsz=512   sunit=0 
blks, lazy-count=1
[ceph-node0][DEBUG ] realtime =none   extsz=4096   blocks=0, 
rtextents=0
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][DEBUG ] Host ceph-node0 is now ready for osd use.

Here it looks like everything went well, but on the host ceph-node0 (where disk 
sdb is) there is a problem.
Here is a dump from syslog (on ceph-node0):

Mar 17 14:03:02 ceph-node0 kernel: [   68.645938] sd 2:0:1:0: [sdb] Cache data 
unavailable
Mar 17 14:03:02 ceph-node0 kernel: [   68.645943] sd 2:0:1:0: [sdb] Assuming 
drive cache: write through
Mar 17 14:03:02 ceph-node0 kernel: [   68.708930]  sdb: sdb1 sdb2
Mar 17 14:03:02 ceph-node0 kernel: [   68.996013] bio: create slab bio-1 at 1
Mar 17 14:03:03 ceph-node0 kernel: [   69.613407] SGI XFS with ACLs, security 
attributes, realtime, large block/inode numbers, no debug

Re: [ceph-users] Can't activate OSD with journal and data on the same disk

2013-11-08 Thread Michael Lukzak
  [ceph-node0][ERROR ] Traceback (most recent call last):
  [ceph-node0][ERROR ]   File
 /usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/process.py, line 
 68, in run
  [ceph-node0][ERROR ] reporting(conn, result, timeout)
  [ceph-node0][ERROR ]   File
 /usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/log.py, line 13, in 
 reporting
  [ceph-node0][ERROR ] received = result.receive(timeout)
  [ceph-node0][ERROR ]   File
 /usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py,
  line 455, in receive
  [ceph-node0][ERROR ] raise self._getremoteerror() or EOFError()
  [ceph-node0][ERROR ] RemoteError: Traceback (most recent call last):
  [ceph-node0][ERROR ]   File string, line 806, in executetask
  [ceph-node0][ERROR ]   File , line 35, in _remote_run
  [ceph-node0][ERROR ] RuntimeError: command returned non-zero exit status: 1
  [ceph-node0][ERROR ]
  [ceph-node0][ERROR ]
  [ceph_deploy.osd][ERROR ] Failed to execute command:
 ceph-disk-prepare --fs-type xfs --dmcrypt --dmcrypt-key-dir
 /etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sdb
  [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
  
  
  It looks like ceph-disk-prepare can't mount (activate?) one of the partitions.
  So I went to ceph-node0 and listed the devices, which shows:
  
  root@ceph-node0:~# ls /dev/sd
  sda   sda1  sda2  sda5  sdb   sdb2
  
  Oops - there is no sdb1.
  
  So I printed all partitions on /dev/sdb and there are two:
  
  Number  Beg     End     Size    Filesystem  Name          Flags
   2      1049kB  1074MB  1073MB              ceph journal
   1      1075MB  16,1GB  15,0GB              ceph data
  
  Where sdb1 should be for data and sdb2 for journal. 
  
  When I restart the VM, /dev/sdb1 shows up:
  root@ceph-node0:~# ls /dev/sd
  sda   sda1  sda2  sda5  sdb   sdb1   sdb2 
  But I can't mount it.
  
  When I put the journal on a separate file/disk, there is no problem with activating
  (the journal is on a separate disk, and all the data is on sdb1).
  Here is the log from that action (I put the journal in a file in /mnt/sdb2):
  
  root@ceph-deploy:~/ceph# ceph-deploy osd prepare
 ceph-node0:/dev/sdb:/mnt/sdb2 --dmcrypt
  [ceph_deploy.cli][INFO  ] Invoked (1.3.1): /usr/bin/ceph-deploy
 osd prepare ceph-node0:/dev/sdb:/mnt/sdb2 --dmcrypt
  [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks 
 ceph-node0:/dev/sdb:/mnt/sdb2
  [ceph-node0][DEBUG ] connected to host: ceph-node0
  [ceph-node0][DEBUG ] detect platform information from remote host
  [ceph-node0][DEBUG ] detect machine type
  [ceph_deploy.osd][INFO  ] Distro info: Ubuntu 13.04 raring
  [ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node0
  [ceph-node0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
  [ceph-node0][INFO  ] Running command: udevadm trigger
 --subsystem-match=block --action=add
  [ceph_deploy.osd][DEBUG ] Preparing host ceph-node0 disk /dev/sdb journal 
 /mnt/sdb2 activate False
  [ceph-node0][INFO  ] Running command: ceph-disk-prepare --fs-type
 xfs --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph -- 
 /dev/sdb /mnt/sdb2
  [ceph-node0][ERROR ] WARNING:ceph-disk:OSD will not be
 hot-swappable if journal is not the same device as the osd data
  [ceph-node0][DEBUG ] Information: Moved requested sector from 34 to 2048 in
  [ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
  [ceph-node0][DEBUG ] The operation has completed successfully.
  [ceph-node0][DEBUG ]
 meta-data=/dev/mapper/299e07ff-31bf-49c4-a8de-62a8e4203c04
 isize=2048   agcount=4, agsize=982975 blks
  [ceph-node0][DEBUG ]  =   sectsz=512   attr=2, 
 projid32bit=0
  [ceph-node0][DEBUG ] data =   bsize=4096   
 blocks=3931899, imaxpct=25
  [ceph-node0][DEBUG ]  =   sunit=0  swidth=0 
 blks
  [ceph-node0][DEBUG ] naming   =version 2  bsize=4096   ascii-ci=0
  [ceph-node0][DEBUG ] log  =internal log   bsize=4096   
 blocks=2560, version=2
  [ceph-node0][DEBUG ]  =   sectsz=512   sunit=0 
 blks, lazy-count=1
  [ceph-node0][DEBUG ] realtime =none   extsz=4096   blocks=0, 
 rtextents=0
  [ceph-node0][DEBUG ] The operation has completed successfully.
  [ceph_deploy.osd][DEBUG ] Host ceph-node0 is now ready for osd use.
  
  Listed partitions (there is only one):
  Number  Beg     End     Size    Filesystem  Name       Flags
   1      1049kB  16,1GB  16,1GB              ceph data
  
  The keys are properly stored in /etc/ceph/dmcrypt-keys/ and sdb1 is
  mounted on /var/lib/ceph/osd/ceph-1.
  Ceph starts showing this OSD as part of the cluster. Yay. But this is no
  solution for me ;)
  
  So the final question is: where am I going wrong? Why can't I activate the journal
  on the same disk?
  
  -- 
  Best Regards,
   Michael Lukzak


-- 
Regards,
 Michael Lukzak

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi

[ceph-users] Can't activate OSD with journal and data on the same disk

2013-11-07 Thread Michael Lukzak
Hi!

I have a question about activating an OSD on a whole disk; I can't get past this issue.
Setup: 8 VMs - ceph-deploy, ceph-admin, ceph-mon0-2 and ceph-node0-2.

I started by creating the MONs - all good.
After that I wanted to prepare and activate 3 OSDs with dm-crypt.

So I put this in ceph.conf:

[osd.0]
host = ceph-node0
cluster addr = 10.0.0.75:6800
public addr = 10.0.0.75:6801
devs = /dev/sdb

Next I used ceph-deploy to prepare the OSD, and this is what it shows:

root@ceph-deploy:~/ceph# ceph-deploy osd prepare ceph-node0:/dev/sdb --dmcrypt
[ceph_deploy.cli][INFO  ] Invoked (1.3.1): /usr/bin/ceph-deploy osd prepare 
ceph-node0:/dev/sdb --dmcrypt
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-node0:/dev/sdb:
[ceph-node0][DEBUG ] connected to host: ceph-node0
[ceph-node0][DEBUG ] detect platform information from remote host
[ceph-node0][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 13.04 raring
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node0
[ceph-node0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node0][INFO  ] Running command: udevadm trigger --subsystem-match=block 
--action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph-node0 disk /dev/sdb journal None 
activate False
[ceph-node0][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --dmcrypt 
--dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sdb
[ceph-node0][ERROR ] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[ceph-node0][ERROR ] ceph-disk: Error: partition 1 for /dev/sdb does not appear 
to exist
[ceph-node0][DEBUG ] Information: Moved requested sector from 34 to 2048 in
[ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph-node0][DEBUG ] Information: Moved requested sector from 2097153 to 
2099200 in
[ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
[ceph-node0][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph-node0][DEBUG ] The new table will be used at the next reboot.
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph-node0][ERROR ] Traceback (most recent call last):
[ceph-node0][ERROR ]   File 
/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/process.py, line 68, 
in run
[ceph-node0][ERROR ] reporting(conn, result, timeout)
[ceph-node0][ERROR ]   File 
/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/log.py, line 13, in 
reporting
[ceph-node0][ERROR ] received = result.receive(timeout)
[ceph-node0][ERROR ]   File 
/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py,
 line 455, in receive
[ceph-node0][ERROR ] raise self._getremoteerror() or EOFError()
[ceph-node0][ERROR ] RemoteError: Traceback (most recent call last):
[ceph-node0][ERROR ]   File string, line 806, in executetask
[ceph-node0][ERROR ]   File , line 35, in _remote_run
[ceph-node0][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph-node0][ERROR ]
[ceph-node0][ERROR ]
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare 
--fs-type xfs --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph 
-- /dev/sdb
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs


It looks like ceph-disk-prepare can't mount (activate?) one of the partitions.
So I went to ceph-node0 and listed the devices, which shows:

root@ceph-node0:~# ls /dev/sd
sda   sda1  sda2  sda5  sdb   sdb2

Oops - there is no sdb1.

So I printed all partitions on /dev/sdb and there are two:

Number  Beg     End     Size    Filesystem  Name          Flags
 2      1049kB  1074MB  1073MB              ceph journal
 1      1075MB  16,1GB  15,0GB              ceph data

Where sdb1 should be for data and sdb2 for journal. 

When I restart the VM, /dev/sdb1 shows up:
root@ceph-node0:~# ls /dev/sd
sda   sda1  sda2  sda5  sdb   sdb1   sdb2 
But I can't mount it.

When I put the journal on a separate file/disk, there is no problem with activating
(the journal is on a separate disk, and all the data is on sdb1).
Here is the log from that action (I put the journal in a file in /mnt/sdb2):

root@ceph-deploy:~/ceph# ceph-deploy osd prepare ceph-node0:/dev/sdb:/mnt/sdb2 
--dmcrypt
[ceph_deploy.cli][INFO  ] Invoked (1.3.1): /usr/bin/ceph-deploy osd prepare 
ceph-node0:/dev/sdb:/mnt/sdb2 --dmcrypt
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks 
ceph-node0:/dev/sdb:/mnt/sdb2
[ceph-node0][DEBUG ] connected to host: ceph-node0
[ceph-node0][DEBUG ] detect platform information from remote host
[ceph-node0][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 13.04 raring
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node0
[ceph-node0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node0][INFO  ] Running command: udevadm trigger --subsystem-match=block 
--action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph-node0 disk /dev/sdb journal 
/mnt/sdb2