Re: [ceph-users] trouble with ceph-deploy

2013-09-10 Thread Pavel Timoschenkov
The OSD is created only if I use a single disk for data and journal.

Situation with separate disks:
1.
ceph-deploy disk zap ceph001:sdaa ceph001:sda1
[ceph_deploy.osd][DEBUG ] zapping /dev/sdaa on ceph001
[ceph_deploy.osd][DEBUG ] zapping /dev/sda1 on ceph001
2.
Wiped file system on ceph001
wipefs /dev/sdaa
wipefs: WARNING: /dev/sdaa: appears to contain 'gpt' partition table
wipefs /dev/sdaa1
wipefs: error: /dev/sdaa1: probing initialization failed
3. 
ceph-deploy osd create ceph001:sdaa:/dev/sda1
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph001:/dev/sdaa:/dev/sda1
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph001
[ceph_deploy.osd][DEBUG ] Host ceph001 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Preparing host ceph001 disk /dev/sdaa journal /dev/sda1 activate True
4.
ceph -k ceph.client.admin.keyring -s
  cluster d4d39e90-9610-41f3-be73-db361908b433
   health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
   monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 2, quorum 
0 ceph001
   osdmap e1: 0 osds: 0 up, 0 in
pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail
   mdsmap e1: 0/0/1 up
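
(A quick way to check on ceph001 whether that prepare left anything usable behind; a sketch only, and note that ceph-disk list itself fails elsewhere in this thread:)

blkid -p /dev/sdaa1      # signatures on the freshly created data partition
wipefs /dev/sdaa1        # list any leftover filesystem signatures
ceph-disk list           # what ceph-disk thinks each device is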

With single disk:
1.
ceph-deploy disk zap ceph001:sdaa
[ceph_deploy.osd][DEBUG ] zapping /dev/sdaa on ceph001
2.
ceph-deploy osd create ceph001:sdaa
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph001:/dev/sdaa:
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph001
[ceph_deploy.osd][DEBUG ] Host ceph001 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Preparing host ceph001 disk /dev/sdaa journal None activate True
3.
ceph@ceph-admin:~$ ceph -k ceph.client.admin.keyring -s
  cluster d4d39e90-9610-41f3-be73-db361908b433
   health HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean
   monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 2, quorum 
0 ceph001
   osdmap e2: 1 osds: 0 up, 0 in
pgmap v3: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail
   mdsmap e1: 0/0/1 up

-Original Message-
From: Sage Weil [mailto:s...@inktank.com] 
Sent: Monday, September 09, 2013 7:09 PM
To: Pavel Timoschenkov
Cc: Alfredo Deza; ceph-users@lists.ceph.com
Subject: RE: [ceph-users] trouble with ceph-deploy

If you manually use wipefs to clear out the fs signatures after you zap, does 
it work then?

I've opened http://tracker.ceph.com/issues/6258 as I think that is the answer 
here, but if you could confirm that wipefs does in fact solve the problem, that 
would be helpful!

Thanks-
sage
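
A minimal version of that test might look like this (a sketch; wipefs with no options only lists signatures, while wipefs -a erases everything it finds, including the GPT the zap just wrote):

ceph-deploy disk zap ceph001:sdaa ceph001:sda1
wipefs /dev/sdaa        # on ceph001: list leftover signatures
wipefs -a /dev/sdaa
wipefs -a /dev/sda1
ceph-deploy osd create ceph001:sdaa:/dev/sda1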


On Mon, 9 Sep 2013, Pavel Timoschenkov wrote:

 for the experiment:
 
 - blank disk sdae for data
 
 blkid -p /dev/sdaf
 /dev/sdaf: PTTYPE=gpt
 
 - and sda4 partition for journal
 
 blkid -p /dev/sda4
 /dev/sda4: PTTYPE=gpt PART_ENTRY_SCHEME=gpt PART_ENTRY_NAME=Linux 
 filesystem PART_ENTRY_UUID=cdc46436-b6ed-40bb-adb4-63cf1c41cbe3 
 PART_ENTRY_TYPE=0fc63daf-8483-4772-8e79-3d69d8477de4 PART_ENTRY_NUMBER=4 
 PART_ENTRY_OFFSET=62916608 PART_ENTRY_SIZE=20971520 PART_ENTRY_DISK=8:0
 
 - zapped disk
 
 ceph-deploy disk zap ceph001:sdaf ceph001:sda4
 [ceph_deploy.osd][DEBUG ] zapping /dev/sdaf on ceph001
 [ceph_deploy.osd][DEBUG ] zapping /dev/sda4 on ceph001
 
 - after this:
 
 ceph-deploy osd create ceph001:sdae:/dev/sda4
 [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph001:/dev/sdaf:/dev/sda4
 [ceph_deploy.osd][DEBUG ] Deploying osd to ceph001
 [ceph_deploy.osd][DEBUG ] Host ceph001 is now ready for osd use.
 [ceph_deploy.osd][DEBUG ] Preparing host ceph001 disk /dev/sdaf journal /dev/sda4 activate True
 
 
 - after this:
 
 blkid -p /dev/sdaf1
 /dev/sdaf1: ambivalent result (probably more filesystems on the 
 device, use wipefs(8) to see more details)
 
 wipefs /dev/sdaf1
 offset   type
 
 0x3  zfs_member   [raid]
 
 0x0  xfs   [filesystem]
  UUID:  aba50262-0427-4f8b-8eb9-513814af6b81
 
 - and OSD not created
 
 but if I'm using a single disk for data and journal:
 
 ceph-deploy disk zap ceph001:sdaf
 [ceph_deploy.osd][DEBUG ] zapping /dev/sdaf on ceph001
 
 ceph-deploy osd create ceph001:sdaf
 [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph001:/dev/sdaf:
 [ceph_deploy.osd][DEBUG ] Deploying osd to ceph001 
 [ceph_deploy.osd][DEBUG ] Host ceph001 is now ready for osd use.
 [ceph_deploy.osd][DEBUG ] Preparing host ceph001 disk /dev/sdaf 
 journal None activate True
 
 OSD created!
 
 -Original Message-
 From: Sage Weil [mailto:s...@inktank.com]
 Sent: Friday, September 06, 2013 6:41 PM
 To: Pavel Timoschenkov
 Cc: Alfredo Deza; ceph-users@lists.ceph.com
 Subject: RE: [ceph-users] trouble with ceph-deploy
 
 On Fri, 6 Sep 2013, Pavel Timoschenkov wrote:
  Try
  ceph-disk -v activate /dev/sdaa1
  
  ceph-disk -v activate /dev/sdaa1
  /dev/sdaa1: ambivalent result (probably more filesystems on the 
  device, use wipefs(8) to see more details

Re: [ceph-users] trouble with ceph-deploy

2013-09-09 Thread Pavel Timoschenkov
for the experiment:

- blank disk sdae for data

blkid -p /dev/sdaf
/dev/sdaf: PTTYPE=gpt

- and sda4 partition for journal

blkid -p /dev/sda4
/dev/sda4: PTTYPE=gpt PART_ENTRY_SCHEME=gpt PART_ENTRY_NAME=Linux 
filesystem PART_ENTRY_UUID=cdc46436-b6ed-40bb-adb4-63cf1c41cbe3 
PART_ENTRY_TYPE=0fc63daf-8483-4772-8e79-3d69d8477de4 PART_ENTRY_NUMBER=4 
PART_ENTRY_OFFSET=62916608 PART_ENTRY_SIZE=20971520 PART_ENTRY_DISK=8:0

- zapped disk 

ceph-deploy disk zap ceph001:sdaf ceph001:sda4
[ceph_deploy.osd][DEBUG ] zapping /dev/sdaf on ceph001
[ceph_deploy.osd][DEBUG ] zapping /dev/sda4 on ceph001

- after this:

ceph-deploy osd create ceph001:sdae:/dev/sda4
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks 
ceph001:/dev/sdaf:/dev/sda4
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph001
[ceph_deploy.osd][DEBUG ] Host ceph001 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Preparing host ceph001 disk /dev/sdaf journal 
/dev/sda4 activate True


- after this:

blkid -p /dev/sdaf1
/dev/sdaf1: ambivalent result (probably more filesystems on the device, use 
wipefs(8) to see more details)

wipefs /dev/sdaf1
offset   type

0x3  zfs_member   [raid]

0x0  xfs   [filesystem]
 UUID:  aba50262-0427-4f8b-8eb9-513814af6b81

- and OSD not created

but if I'm using a single disk for data and journal:

ceph-deploy disk zap ceph001:sdaf
[ceph_deploy.osd][DEBUG ] zapping /dev/sdaf on ceph001

ceph-deploy osd create ceph001:sdaf
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph001:/dev/sdaf:
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph001
[ceph_deploy.osd][DEBUG ] Host ceph001 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Preparing host ceph001 disk /dev/sdaf journal None 
activate True

OSD created!

-Original Message-
From: Sage Weil [mailto:s...@inktank.com] 
Sent: Friday, September 06, 2013 6:41 PM
To: Pavel Timoschenkov
Cc: Alfredo Deza; ceph-users@lists.ceph.com
Subject: RE: [ceph-users] trouble with ceph-deploy

On Fri, 6 Sep 2013, Pavel Timoschenkov wrote:
 Try
 ceph-disk -v activate /dev/sdaa1
 
 ceph-disk -v activate /dev/sdaa1
 /dev/sdaa1: ambivalent result (probably more filesystems on the 
 device, use wipefs(8) to see more details)

Looks like there are multiple fs signatures on that partition.  See

http://ozancaglayan.com/2013/01/29/multiple-filesystem-signatures-on-a-partition/

for how to clean that up.  And please share the wipefs output that you see; it 
may be that we need to make the --zap-disk behavior also explicitly clear any 
signatures on the device.

Thanks!
sage
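
Using the offsets Pavel reports above for /dev/sdaf1, the manual cleanup could look roughly like this (a sketch: erase only the stale zfs_member signature at 0x3, keep the xfs that prepare just created, then retry activation):

wipefs /dev/sdaf1                  # shows 0x3 zfs_member and 0x0 xfs
wipefs -o 0x3 /dev/sdaf1           # erase just the signature at offset 0x3
ceph-disk -v activate /dev/sdaf1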


 as there is probably a partition there.  And/or tell us what 
 /proc/partitions contains,
 
 cat /proc/partitions
 major minor  #blocks  name
 
 65  160 2930266584 sdaa
   65  161 2930265543 sdaa1
 
 and/or what you get from
 ceph-disk list
 
 ceph-disk list
 Traceback (most recent call last):
   File /usr/sbin/ceph-disk, line 2328, in module
 main()
   File /usr/sbin/ceph-disk, line 2317, in main
 args.func(args)
   File /usr/sbin/ceph-disk, line 2001, in main_list
 tpath = mount(dev=dev, fstype=fs_type, options='')
   File /usr/sbin/ceph-disk, line 678, in mount
 path,
   File /usr/lib/python2.7/subprocess.py, line 506, in check_call
 retcode = call(*popenargs, **kwargs)
   File /usr/lib/python2.7/subprocess.py, line 493, in call
 return Popen(*popenargs, **kwargs).wait()
   File /usr/lib/python2.7/subprocess.py, line 679, in __init__
 errread, errwrite)
   File /usr/lib/python2.7/subprocess.py, line 1249, in _execute_child
 raise child_exception
 TypeError: execv() arg 2 must contain only strings
 
 ==
 -Original Message-
 From: Sage Weil [mailto:s...@inktank.com]
 Sent: Thursday, September 05, 2013 6:37 PM
 To: Pavel Timoschenkov
 Cc: Alfredo Deza; ceph-users@lists.ceph.com
 Subject: RE: [ceph-users] trouble with ceph-deploy
 
 On Thu, 5 Sep 2013, Pavel Timoschenkov wrote:
  What happens if you do
  ceph-disk -v activate /dev/sdaa1
  on ceph001?
  
  Hi. My issue has not been solved. When i execute ceph-disk -v activate 
  /dev/sdaa - all is ok:
  ceph-disk -v activate /dev/sdaa
 
 Try
 
  ceph-disk -v activate /dev/sdaa1
 
 as there is probably a partition there.  And/or tell us what 
 /proc/partitions contains, and/or what you get from
 
  ceph-disk list
 
 Thanks!
 sage
 
 
  DEBUG:ceph-disk:Mounting /dev/sdaa on /var/lib/ceph/tmp/mnt.yQuXIa 
  with options noatime
  mount: Structure needs cleaning
  but OSD not created all the same:
  ceph -k ceph.client.admin.keyring -s
cluster 0a2e18d2-fd53-4f01-b63a-84851576c076
 health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
 monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 2, 
  quorum 0 ceph001
 osdmap e1: 0 osds: 0 up, 0 in
  pgmap v2: 192 pgs: 192 creating

Re: [ceph-users] trouble with ceph-deploy

2013-09-06 Thread Pavel Timoschenkov
Try
ceph-disk -v activate /dev/sdaa1

ceph-disk -v activate /dev/sdaa1
/dev/sdaa1: ambivalent result (probably more filesystems on the device, use 
wipefs(8) to see more details)

as there is probably a partition there.  And/or tell us what 
/proc/partitions contains, 

cat /proc/partitions
major minor  #blocks  name

65  160 2930266584 sdaa
  65  161 2930265543 sdaa1

and/or what you get from
ceph-disk list

ceph-disk list
Traceback (most recent call last):
  File /usr/sbin/ceph-disk, line 2328, in module
main()
  File /usr/sbin/ceph-disk, line 2317, in main
args.func(args)
  File /usr/sbin/ceph-disk, line 2001, in main_list
tpath = mount(dev=dev, fstype=fs_type, options='')
  File /usr/sbin/ceph-disk, line 678, in mount
path,
  File /usr/lib/python2.7/subprocess.py, line 506, in check_call
retcode = call(*popenargs, **kwargs)
  File /usr/lib/python2.7/subprocess.py, line 493, in call
return Popen(*popenargs, **kwargs).wait()
  File /usr/lib/python2.7/subprocess.py, line 679, in __init__
errread, errwrite)
  File /usr/lib/python2.7/subprocess.py, line 1249, in _execute_child
raise child_exception
TypeError: execv() arg 2 must contain only strings
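
That TypeError is the generic python2.7 subprocess failure when a non-string (here presumably a None fs_type for the ambivalent partition) lands in the argument list; a standalone reproduction, not ceph-disk's actual code path:

python -c "import subprocess; subprocess.check_call(['mount', '-t', None, '/dev/null', '/mnt'])"
# -> TypeError: execv() arg 2 must contain only strings (raised before mount is ever executed)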

==
-Original Message-
From: Sage Weil [mailto:s...@inktank.com] 
Sent: Thursday, September 05, 2013 6:37 PM
To: Pavel Timoschenkov
Cc: Alfredo Deza; ceph-users@lists.ceph.com
Subject: RE: [ceph-users] trouble with ceph-deploy

On Thu, 5 Sep 2013, Pavel Timoschenkov wrote:
 What happens if you do
 ceph-disk -v activate /dev/sdaa1
 on ceph001?
 
 Hi. My issue has not been solved. When i execute ceph-disk -v activate 
 /dev/sdaa - all is ok:
 ceph-disk -v activate /dev/sdaa

Try

 ceph-disk -v activate /dev/sdaa1

as there is probably a partition there.  And/or tell us what /proc/partitions 
contains, and/or what you get from

 ceph-disk list

Thanks!
sage


 DEBUG:ceph-disk:Mounting /dev/sdaa on /var/lib/ceph/tmp/mnt.yQuXIa 
 with options noatime
 mount: Structure needs cleaning
 but OSD not created all the same:
 ceph -k ceph.client.admin.keyring -s
   cluster 0a2e18d2-fd53-4f01-b63a-84851576c076
health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 2, 
 quorum 0 ceph001
osdmap e1: 0 osds: 0 up, 0 in
 pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB 
 avail
mdsmap e1: 0/0/1 up
 
 -Original Message-
 From: Sage Weil [mailto:s...@inktank.com]
 Sent: Friday, August 30, 2013 6:14 PM
 To: Pavel Timoschenkov
 Cc: Alfredo Deza; ceph-users@lists.ceph.com
 Subject: Re: [ceph-users] trouble with ceph-deploy
 
 On Fri, 30 Aug 2013, Pavel Timoschenkov wrote:
 
  
  Can you share the output of the commands that do not work for you? 
  How did `create` not work ? what did you see in the logs?
  
   
  
  In logs everything looks good. After
  
  ceph-deploy disk zap ceph001:sdaa ceph001:sda1
  
  and
  
  ceph-deploy osd create ceph001:sdaa:/dev/sda1
  
  where:
  
  HOST: ceph001
  
  DISK: sdaa
  
  JOURNAL: /dev/sda1
  
  in log:
  
  ==
  
  cat ceph.log
  
  2013-08-30 13:06:42,030 [ceph_deploy.osd][DEBUG ] Preparing cluster 
  ceph disks ceph001:/dev/sdaa:/dev/sda1
  
  2013-08-30 13:06:42,590 [ceph_deploy.osd][DEBUG ] Deploying osd to
  ceph001
  
  2013-08-30 13:06:42,627 [ceph_deploy.osd][DEBUG ] Host ceph001 is 
  now ready for osd use.
  
  2013-08-30 13:06:42,627 [ceph_deploy.osd][DEBUG ] Preparing host
  ceph001 disk /dev/sdaa journal /dev/sda1 activate True
  
  +++
  
  But:
  
  +++
  
  ceph -k ceph.client.admin.keyring -s
  
    cluster 0a2e18d2-fd53-4f01-b63a-84851576c076
  
     health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; 
  no osds
  
     monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 
  2, quorum 0 ceph001
  
     osdmap e1: 0 osds: 0 up, 0 in
  
      pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 
  0 KB avail
  
     mdsmap e1: 0/0/1 up
  
  +++
  
  And
  
  +++
  
  ceph -k ceph.client.admin.keyring osd tree
  
  # id    weight  type name   up/down reweight
  
  -1  0   root default
  
  +++
  
  OSD not created (
 
 What happens if you do
 
  ceph-disk -v activate /dev/sdaa1
 
 on ceph001?
 
 sage
 
 
  
   
  
  From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
  Sent: Thursday, August 29, 2013 5:41 PM
  To: Pavel Timoschenkov
  Cc: ceph-users@lists.ceph.com
  Subject: Re: [ceph-users] trouble with ceph-deploy
  
   
  
   
  
   
  
  On Thu, Aug 29, 2013 at 10:23 AM, Pavel Timoschenkov 
  pa...@bayonetteas.onmicrosoft.com wrote:
  
Hi.
  
If I use

Re: [ceph-users] trouble with ceph-deploy

2013-09-05 Thread Pavel Timoschenkov
What happens if you do
ceph-disk -v activate /dev/sdaa1
on ceph001?

Hi. My issue has not been solved. When i execute ceph-disk -v activate 
/dev/sdaa - all is ok:
ceph-disk -v activate /dev/sdaa
DEBUG:ceph-disk:Mounting /dev/sdaa on /var/lib/ceph/tmp/mnt.yQuXIa with options 
noatime
mount: Structure needs cleaning
but OSD not created all the same:
ceph -k ceph.client.admin.keyring -s
  cluster 0a2e18d2-fd53-4f01-b63a-84851576c076
   health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
   monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 2, quorum 
0 ceph001
   osdmap e1: 0 osds: 0 up, 0 in
pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail
   mdsmap e1: 0/0/1 up 

-Original Message-
From: Sage Weil [mailto:s...@inktank.com] 
Sent: Friday, August 30, 2013 6:14 PM
To: Pavel Timoschenkov
Cc: Alfredo Deza; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] trouble with ceph-deploy

On Fri, 30 Aug 2013, Pavel Timoschenkov wrote:

 
 Can you share the output of the commands that do not work for you? 
 How did `create` not work ? what did you see in the logs?
 
  
 
 In logs everything looks good. After
 
 ceph-deploy disk zap ceph001:sdaa ceph001:sda1
 
 and
 
 ceph-deploy osd create ceph001:sdaa:/dev/sda1
 
 where:
 
 HOST: ceph001
 
 DISK: sdaa
 
 JOURNAL: /dev/sda1
 
 in log:
 
 ==
 
 cat ceph.log
 
 2013-08-30 13:06:42,030 [ceph_deploy.osd][DEBUG ] Preparing cluster 
 ceph disks ceph001:/dev/sdaa:/dev/sda1
 
 2013-08-30 13:06:42,590 [ceph_deploy.osd][DEBUG ] Deploying osd to 
 ceph001
 
 2013-08-30 13:06:42,627 [ceph_deploy.osd][DEBUG ] Host ceph001 is now 
 ready for osd use.
 
 2013-08-30 13:06:42,627 [ceph_deploy.osd][DEBUG ] Preparing host 
 ceph001 disk /dev/sdaa journal /dev/sda1 activate True
 
 +++
 
 But:
 
 +++
 
 ceph -k ceph.client.admin.keyring -s
 
   cluster 0a2e18d2-fd53-4f01-b63a-84851576c076
 
    health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no 
 osds
 
    monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 
 2, quorum 0 ceph001
 
    osdmap e1: 0 osds: 0 up, 0 in
 
     pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 
 KB avail
 
    mdsmap e1: 0/0/1 up
 
 +++
 
 And
 
 +++
 
 ceph -k ceph.client.admin.keyring osd tree
 
 # id    weight  type name   up/down reweight
 
 -1  0   root default
 
 +++
 
 OSD not created (

What happens if you do

 ceph-disk -v activate /dev/sdaa1

on ceph001?

sage


 
  
 
 From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
 Sent: Thursday, August 29, 2013 5:41 PM
 To: Pavel Timoschenkov
 Cc: ceph-users@lists.ceph.com
 Subject: Re: [ceph-users] trouble with ceph-deploy
 
  
 
  
 
  
 
 On Thu, Aug 29, 2013 at 10:23 AM, Pavel Timoschenkov 
 pa...@bayonetteas.onmicrosoft.com wrote:
 
   Hi.
 
   If I use the example of the doc:
   
 http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/#create-o
 sds
 
   ceph-deploy osd prepare ceph001:sdaa:/dev/sda1
   ceph-deploy osd activate ceph001:sdaa:/dev/sda1
   or
   ceph-deploy osd prepare ceph001:/dev/sdaa1:/dev/sda1
   ceph-deploy osd activate ceph001:/dev/sdaa:/dev/sda1
 
 or
 
 ceph-deploy osd create ceph001:sdaa:/dev/sda1
 
 OSD is not created. No errors, but when I execute
 
 ceph -k ceph.client.admin.keyring -s
 
 I see the following:
 
 cluster 4b91a9e9-0e6c-4570-98c6-1398c6900a9e
    health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no 
 osds
    monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 
 2, quorum 0 ceph001
    osdmap e1: 0 osds: 0 up, 0 in
     pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 
 KB avail
    mdsmap e1: 0/0/1 up
 
  
 
 0 OSD.
 
  
 
 But if I use a local folder (/var/lib/ceph/osd/osd001) as the DISK argument,
 it works, but only with the prepare + activate construction:
 
 ceph-deploy osd prepare ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
 ceph-deploy osd activate ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
 
 If I use CREATE, the OSD is not created either.
 
  
 
  
 
 From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
 Sent: Thursday, August 29, 2013 4:36 PM
 To: Pavel Timoschenkov
 Cc: ceph-users@lists.ceph.com
 Subject: Re: [ceph-users] trouble with ceph-deploy
 
  
 
  
 
  
 
 On Thu, Aug 29, 2013 at 8:00 AM, Pavel Timoschenkov 
 pa...@bayonetteas.onmicrosoft.com wrote:
 
   Hi.
   New trouble with ceph-deploy. When i'm executing:
 
   ceph-deploy osd prepare ceph001:sdaa:/dev/sda1
   ceph-deploy osd activate ceph001:sdaa:/dev/sda1
   or
   ceph-deploy osd prepare ceph001:/dev/sdaa1:/dev/sda1
   ceph-deploy osd activate ceph001:/dev/sdaa:/dev/sda1
 
  
 
 Have

Re: [ceph-users] trouble with ceph-deploy

2013-08-30 Thread Pavel Timoschenkov
Can you share the output of the commands that do not work for you? How 
did `create` not work ? what did you see in the logs?

In logs everything looks good. After
ceph-deploy disk zap ceph001:sdaa ceph001:sda1
and
ceph-deploy osd create ceph001:sdaa:/dev/sda1
where:
HOST: ceph001
DISK: sdaa
JOURNAL: /dev/sda1
in log:
==
cat ceph.log
2013-08-30 13:06:42,030 [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks 
ceph001:/dev/sdaa:/dev/sda1
2013-08-30 13:06:42,590 [ceph_deploy.osd][DEBUG ] Deploying osd to ceph001
2013-08-30 13:06:42,627 [ceph_deploy.osd][DEBUG ] Host ceph001 is now ready for 
osd use.
2013-08-30 13:06:42,627 [ceph_deploy.osd][DEBUG ] Preparing host ceph001 disk 
/dev/sdaa journal /dev/sda1 activate True
+++
But:
+++
ceph -k ceph.client.admin.keyring -s
  cluster 0a2e18d2-fd53-4f01-b63a-84851576c076
   health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
   monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 2, quorum 
0 ceph001
   osdmap e1: 0 osds: 0 up, 0 in
pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail
   mdsmap e1: 0/0/1 up
+++
And
+++
ceph -k ceph.client.admin.keyring osd tree
# id    weight  type name   up/down reweight
-1  0   root default
+++
OSD not created (

From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Thursday, August 29, 2013 5:41 PM
To: Pavel Timoschenkov
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] trouble with ceph-deploy



On Thu, Aug 29, 2013 at 10:23 AM, Pavel Timoschenkov 
pa...@bayonetteas.onmicrosoft.com 
wrote:
Hi.
If I use the example of the doc: 
http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/#create-osds
ceph-deploy osd prepare ceph001:sdaa:/dev/sda1
ceph-deploy osd activate ceph001:sdaa:/dev/sda1
or
ceph-deploy osd prepare ceph001:/dev/sdaa1:/dev/sda1
ceph-deploy osd activate ceph001:/dev/sdaa:/dev/sda1
or
ceph-deploy osd create ceph001:sdaa:/dev/sda1
OSD is not created. No errors, but when I execute
ceph -k ceph.client.admin.keyring -s
I see the following:
cluster 4b91a9e9-0e6c-4570-98c6-1398c6900a9e
   health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
   monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 2, quorum 0 ceph001
   osdmap e1: 0 osds: 0 up, 0 in
pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail
   mdsmap e1: 0/0/1 up

0 OSD.

But if I use a local folder (/var/lib/ceph/osd/osd001) as the DISK argument, it 
works, but only with the prepare + activate construction:
ceph-deploy osd prepare ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
ceph-deploy osd activate ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
If I use CREATE, the OSD is not created either.


From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Thursday, August 29, 2013 4:36 PM
To: Pavel Timoschenkov
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] trouble with ceph-deploy



On Thu, Aug 29, 2013 at 8:00 AM, Pavel Timoschenkov 
pa...@bayonetteas.onmicrosoft.com 
wrote:
Hi.
New trouble with ceph-deploy. When i'm executing:

ceph-deploy osd prepare ceph001:sdaa:/dev/sda1
ceph-deploy osd activate ceph001:sdaa:/dev/sda1
or
ceph-deploy osd prepare ceph001:/dev/sdaa1:/dev/sda1
ceph-deploy osd activate ceph001:/dev/sdaa:/dev/sda1

Have you tried with
ceph-deploy osd create ceph001:sdaa:/dev/sda1

?
`create` should do `prepare` and `activate` for you. Also be mindful that the 
requirements for the arguments
are that you need to pass something like:

HOST:DISK[:JOURNAL]
Where JOURNAL is completely optional, this is also detailed here: 
http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/#create-osds
Have you followed those instructions to deploy your OSDs ?
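
For reference, the two accepted shapes of that argument, using the host and devices from this thread (a sketch):

ceph-deploy osd create ceph001:sdaa               # HOST:DISK, journal colocated on the data disk
ceph-deploy osd create ceph001:sdaa:/dev/sda1     # HOST:DISK:JOURNAL, journal on a separate partition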


OSD not created:

ceph -k ceph.client.admin.keyring -s
  cluster 4b91a9e9-0e6c-4570-98c6-1398c6900a9e
   health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
   monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 2, quorum 0 ceph001
   osdmap e1: 0 osds: 0 up, 0 in
pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail
   mdsmap e1: 0/0/1 up

ceph -k ceph.client.admin.keyring osd tree
# id    weight  type name   up/down reweight
-1  0   root default

but if I'm creating a folder for ceph data and executing:

ceph-deploy osd prepare ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
ceph-deploy osd activate ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
Those do not look right to me.

OSD created:

ceph -k ceph.client.admin.keyring -s

Re: [ceph-users] trouble with ceph-deploy

2013-08-30 Thread Pavel Timoschenkov
What happens if you do
ceph-disk -v activate /dev/sdaa1
on ceph001?

ceph-disk -v activate /dev/sdaa1
/dev/sdaa1: ambivalent result (probably more filesystems on the device, use 
wipefs(8) to see more details)

-Original Message-
From: Sage Weil [mailto:s...@inktank.com] 
Sent: Friday, August 30, 2013 6:14 PM
To: Pavel Timoschenkov
Cc: Alfredo Deza; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] trouble with ceph-deploy

On Fri, 30 Aug 2013, Pavel Timoschenkov wrote:

 
 Can you share the output of the commands that do not work for you? 
 How did `create` not work ? what did you see in the logs?
 
  
 
 In logs everything looks good. After
 
 ceph-deploy disk zap ceph001:sdaa ceph001:sda1
 
 and
 
 ceph-deploy osd create ceph001:sdaa:/dev/sda1
 
 where:
 
 HOST: ceph001
 
 DISK: sdaa
 
 JOURNAL: /dev/sda1
 
 in log:
 
 ==
 
 cat ceph.log
 
 2013-08-30 13:06:42,030 [ceph_deploy.osd][DEBUG ] Preparing cluster 
 ceph disks ceph001:/dev/sdaa:/dev/sda1
 
 2013-08-30 13:06:42,590 [ceph_deploy.osd][DEBUG ] Deploying osd to 
 ceph001
 
 2013-08-30 13:06:42,627 [ceph_deploy.osd][DEBUG ] Host ceph001 is now 
 ready for osd use.
 
 2013-08-30 13:06:42,627 [ceph_deploy.osd][DEBUG ] Preparing host 
 ceph001 disk /dev/sdaa journal /dev/sda1 activate True
 
 +++
 
 But:
 
 +++
 
 ceph -k ceph.client.admin.keyring -s
 
   cluster 0a2e18d2-fd53-4f01-b63a-84851576c076
 
    health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no 
 osds
 
    monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 
 2, quorum 0 ceph001
 
    osdmap e1: 0 osds: 0 up, 0 in
 
     pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 
 KB avail
 
    mdsmap e1: 0/0/1 up
 
 +++
 
 And
 
 +++
 
 ceph -k ceph.client.admin.keyring osd tree
 
 # id    weight  type name   up/down reweight
 
 -1  0   root default
 
 +++
 
 OSD not created (

What happens if you do

 ceph-disk -v activate /dev/sdaa1

on ceph001?

sage


 
  
 
 From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
 Sent: Thursday, August 29, 2013 5:41 PM
 To: Pavel Timoschenkov
 Cc: ceph-users@lists.ceph.com
 Subject: Re: [ceph-users] trouble with ceph-deploy
 
  
 
  
 
  
 
 On Thu, Aug 29, 2013 at 10:23 AM, Pavel Timoschenkov 
 pa...@bayonetteas.onmicrosoft.com wrote:
 
   Hi.
 
   If I use the example of the doc:
   
 http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/#create-o
 sds
 
   ceph-deploy osd prepare ceph001:sdaa:/dev/sda1
   ceph-deploy osd activate ceph001:sdaa:/dev/sda1
   or
   ceph-deploy osd prepare ceph001:/dev/sdaa1:/dev/sda1
   ceph-deploy osd activate ceph001:/dev/sdaa:/dev/sda1
 
 or
 
 ceph-deploy osd create ceph001:sdaa:/dev/sda1
 
 OSD is not created. No errors, but when I execute
 
 ceph -k ceph.client.admin.keyring -s
 
 I see the following:
 
 cluster 4b91a9e9-0e6c-4570-98c6-1398c6900a9e
    health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no 
 osds
    monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 
 2, quorum 0 ceph001
    osdmap e1: 0 osds: 0 up, 0 in
     pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 
 KB avail
    mdsmap e1: 0/0/1 up
 
  
 
 0 OSD.
 
  
 
 But if I use a local folder (/var/lib/ceph/osd/osd001) as the DISK argument,
 it works, but only with the prepare + activate construction:
 
 ceph-deploy osd prepare ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
 ceph-deploy osd activate ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
 
 If I use CREATE, the OSD is not created either.
 
  
 
  
 
 From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
 Sent: Thursday, August 29, 2013 4:36 PM
 To: Pavel Timoschenkov
 Cc: ceph-users@lists.ceph.com
 Subject: Re: [ceph-users] trouble with ceph-deploy
 
  
 
  
 
  
 
 On Thu, Aug 29, 2013 at 8:00 AM, Pavel Timoschenkov 
 pa...@bayonetteas.onmicrosoft.com wrote:
 
   Hi.
   New trouble with ceph-deploy. When i'm executing:
 
   ceph-deploy osd prepare ceph001:sdaa:/dev/sda1
   ceph-deploy osd activate ceph001:sdaa:/dev/sda1
   or
   ceph-deploy osd prepare ceph001:/dev/sdaa1:/dev/sda1
   ceph-deploy osd activate ceph001:/dev/sdaa:/dev/sda1
 
  
 
 Have you tried with
 
     ceph-deploy osd create ceph001:sdaa:/dev/sda1
 
 ?
 
 `create` should do `prepare` and `activate` for you. Also be mindful 
 that the requirements for the arguments are that you need to pass 
 something like:
 
     HOST:DISK[:JOURNAL]
 
 Where JOURNAL is completely optional, this is also detailed here:
 http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/#create-o
 sds
 
 Have you followed those instructions to deploy your OSDs ?
 
  
 
 
   OSD not created:
 
   ceph -k

Re: [ceph-users] trouble with ceph-deploy

2013-08-29 Thread Pavel Timoschenkov
Hi.
If I use the example of the doc: 
http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/#create-osds
ceph-deploy osd prepare ceph001:sdaa:/dev/sda1
ceph-deploy osd activate ceph001:sdaa:/dev/sda1
or
ceph-deploy osd prepare ceph001:/dev/sdaa1:/dev/sda1
ceph-deploy osd activate ceph001:/dev/sdaa:/dev/sda1
or
ceph-deploy osd create ceph001:sdaa:/dev/sda1
OSD is not created. No errors, but when I execute
ceph -k ceph.client.admin.keyring -s
I see the following:
cluster 4b91a9e9-0e6c-4570-98c6-1398c6900a9e
   health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
   monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 2, quorum 0 ceph001
   osdmap e1: 0 osds: 0 up, 0 in
pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail
   mdsmap e1: 0/0/1 up

0 OSD.

But if I use a local folder (/var/lib/ceph/osd/osd001) as the DISK argument, it 
works, but only with the prepare + activate construction:
ceph-deploy osd prepare ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
ceph-deploy osd activate ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
If I use CREATE, the OSD is not created either.


From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Thursday, August 29, 2013 4:36 PM
To: Pavel Timoschenkov
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] trouble with ceph-deploy



On Thu, Aug 29, 2013 at 8:00 AM, Pavel Timoschenkov 
pa...@bayonetteas.onmicrosoft.com 
wrote:
Hi.
New trouble with ceph-deploy. When i'm executing:

ceph-deploy osd prepare ceph001:sdaa:/dev/sda1
ceph-deploy osd activate ceph001:sdaa:/dev/sda1
or
ceph-deploy osd prepare ceph001:/dev/sdaa1:/dev/sda1
ceph-deploy osd activate ceph001:/dev/sdaa:/dev/sda1

Have you tried with
ceph-deploy osd create ceph001:sdaa:/dev/sda1

?
`create` should do `prepare` and `activate` for you. Also be mindful that the 
requirements for the arguments
are that you need to pass something like:

HOST:DISK[:JOURNAL]
Where JOURNAL is completely optional, this is also detailed here: 
http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/#create-osds
Have you followed those instructions to deploy your OSDs ?


OSD not created:

ceph -k ceph.client.admin.keyring -s
  cluster 4b91a9e9-0e6c-4570-98c6-1398c6900a9e
   health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
   monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 2, quorum 0 ceph001
   osdmap e1: 0 osds: 0 up, 0 in
pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail
   mdsmap e1: 0/0/1 up

ceph -k ceph.client.admin.keyring osd tree
# id    weight  type name   up/down reweight
-1  0   root default

but if I'm creating a folder for ceph data and executing:

ceph-deploy osd prepare ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
ceph-deploy osd activate ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
Those do not look right to me.

OSD created:

ceph -k ceph.client.admin.keyring -s
  cluster 4b91a9e9-0e6c-4570-98c6-1398c6900a9e
   health HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean
   monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 2, quorum 0 ceph001
   osdmap e5: 1 osds: 1 up, 1 in
pgmap v6: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail
   mdsmap e1: 0/0/1 up

ceph -k ceph.client.admin.keyring osd tree
# id    weight  type name   up/down reweight
-1  0.03999 root default
-2  0.03999 host ceph001
0   0.03999 osd.0   up  1

Is this a bug, or should I mount the data disks to some directory?


and more:
The 'ceph-deploy osd create' construction doesn't work for me. Only 
'prepare' + 'activate'.

When you say `create` didn't work for you, how so? What output did you see? Can 
you share some logs/output?

dpkg -s ceph-deploy
Version: 1.2.1-1precise



Re: [ceph-users] ceph-deploy and journal on separate disk

2013-08-22 Thread Pavel Timoschenkov
Hi.
With this patch, all is OK.
Thanks for help!

-Original Message-
From: Alfredo Deza [mailto:alfredo.d...@inktank.com] 
Sent: Wednesday, August 21, 2013 7:16 PM
To: Pavel Timoschenkov
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-deploy and journal on separate disk

On Wed, Aug 21, 2013 at 9:33 AM, Pavel Timoschenkov 
pa...@bayonetteas.onmicrosoft.com wrote:
 Hi. Thanks for the patch. But after patching the ceph src and installing it, I don't 
 have the ceph-disk or ceph-deploy command.
 I did the following steps:
 git clone --recursive https://github.com/ceph/ceph.git
 patch -p0 < patch name
 ./autogen.sh
 ./configure
 make
 make install
 What am I doing wrong?

Oh I meant to patch it directly, there was no need to rebuild/make/install 
again because the file is a plain Python file (no compilation needed).

Can you try that instead?
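
Applied straight to the installed script, that would be roughly (a sketch; the patch file name is a placeholder, assuming ceph-disk is installed as /usr/sbin/ceph-disk):

cp /usr/sbin/ceph-disk /usr/sbin/ceph-disk.orig
patch /usr/sbin/ceph-disk < ceph-disk-mount.patch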

 -Original Message-
 From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
 Sent: Monday, August 19, 2013 3:38 PM
 To: Pavel Timoschenkov
 Cc: ceph-us...@ceph.com
 Subject: Re: [ceph-users] ceph-deploy and journal on separate disk

 On Fri, Aug 16, 2013 at 8:32 AM, Pavel Timoschenkov 
 pa...@bayonetteas.onmicrosoft.com wrote:
 I suspect that there are left over partitions in /dev/sdaa that 
 are causing this to fail, I *think* that we could pass the `-t` 
 flag with the filesystem and prevent this.

 Hi. Any changes (

 Can you create a build that passes the -t flag with mount?


 I tried going through these steps again and could not get any other ideas 
 except to pass in that flag for mounting. Would you be willing to try a patch?
 (http://fpaste.org/33099/37691580/)

 You would need to apply it to the `ceph-disk` executable.








 From: Pavel Timoschenkov
 Sent: Thursday, August 15, 2013 3:43 PM
 To: 'Alfredo Deza'
 Cc: Samuel Just; ceph-us...@ceph.com
 Subject: RE: [ceph-users] ceph-deploy and journal on separate disk



 The separate commands (e.g. `ceph-disk -v prepare /dev/sda1`) works 
 because then the journal is on the same device as the OSD data, so 
 the execution is different to get them to a working state.

 I suspect that there are left over partitions in /dev/sdaa that are 
 causing this to fail, I *think* that we could pass the `-t` flag with 
 the filesystem and prevent this.

 Just to be sure, could you list all the partitions on /dev/sdaa (if 
 /dev/sdaa is the whole device)?

 Something like:

 sudo parted /dev/sdaa print

 Or, if you prefer, any other way that could tell us what all the 
 partitions on that device are.





 After

 ceph-deploy disk zap ceph001:sdaa ceph001:sda1



 root@ceph001:~# parted /dev/sdaa print

 Model: ATA ST3000DM001-1CH1 (scsi)

 Disk /dev/sdaa: 3001GB

 Sector size (logical/physical): 512B/4096B

 Partition Table: gpt



 Number  Start  End  Size  File system  Name  Flags



 root@ceph001:~# parted /dev/sda1 print

 Model: Unknown (unknown)

 Disk /dev/sda1: 10.7GB

 Sector size (logical/physical): 512B/512B

 Partition Table: gpt

 So that is after running `disk zap`. What does it say after using 
 ceph-deploy and failing?



 Number  Start  End  Size  File system  Name  Flags



 After ceph-disk -v prepare /dev/sdaa /dev/sda1:



 root@ceph001:~# parted /dev/sdaa print

 Model: ATA ST3000DM001-1CH1 (scsi)

 Disk /dev/sdaa: 3001GB

 Sector size (logical/physical): 512B/4096B

 Partition Table: gpt



 Number  Start   End SizeFile system  Name   Flags

 1  1049kB  3001GB  3001GB  xfs  ceph data



 And



 root@ceph001:~# parted /dev/sda1 print

 Model: Unknown (unknown)

 Disk /dev/sda1: 10.7GB

 Sector size (logical/physical): 512B/512B

 Partition Table: gpt



 Number  Start  End  Size  File system  Name  Flags



 With the same errors:



 root@ceph001:~# ceph-disk -v prepare /dev/sdaa /dev/sda1

 DEBUG:ceph-disk:Journal /dev/sda1 is a partition

 WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the 
 same device as the osd data

 DEBUG:ceph-disk:Creating osd partition on /dev/sdaa

 Information: Moved requested sector from 34 to 2048 in

 order to align on 2048-sector boundaries.

 The operation has completed successfully.

 DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1

 meta-data=/dev/sdaa1 isize=2048   agcount=32, agsize=22892700
 blks

  =   sectsz=512   attr=2, projid32bit=0

 data =   bsize=4096   blocks=732566385, imaxpct=5

  =   sunit=0  swidth=0 blks

 naming   =version 2  bsize=4096   ascii-ci=0

 log  =internal log   bsize=4096   blocks=357698, version=2

  =   sectsz=512   sunit=0 blks, lazy-count=1

 realtime =none   extsz=4096   blocks=0, rtextents=0

 DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.UkJbwx 
 with options noatime

 mount: /dev/sdaa1: more filesystems detected. This should not happen,

    use -t <type> to explicitly specify the filesystem type

Re: [ceph-users] ceph-deploy and journal on separate disk

2013-08-21 Thread Pavel Timoschenkov
Hi. Thanks for the patch. But after patching the ceph src and installing it, I don't have the 
ceph-disk or ceph-deploy command.
I did the following steps:
git clone --recursive https://github.com/ceph/ceph.git
patch -p0 < patch name
./autogen.sh
./configure
make
make install
What am I doing wrong?

-Original Message-
From: Alfredo Deza [mailto:alfredo.d...@inktank.com] 
Sent: Monday, August 19, 2013 3:38 PM
To: Pavel Timoschenkov
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-deploy and journal on separate disk

On Fri, Aug 16, 2013 at 8:32 AM, Pavel Timoschenkov 
pa...@bayonetteas.onmicrosoft.com wrote:
 I suspect that there are left over partitions in /dev/sdaa that are 
 causing this to fail, I *think* that we could pass the `-t` flag 
 with the filesystem and prevent this.

 Hi. Any changes (

 Can you create a build that passes the -t flag with mount?


I tried going through these steps again and could not get any other ideas 
except to pass in that flag for mounting. Would you be willing to try a patch?
(http://fpaste.org/33099/37691580/)

You would need to apply it to the `ceph-disk` executable.








 From: Pavel Timoschenkov
 Sent: Thursday, August 15, 2013 3:43 PM
 To: 'Alfredo Deza'
 Cc: Samuel Just; ceph-us...@ceph.com
 Subject: RE: [ceph-users] ceph-deploy and journal on separate disk



 The separate commands (e.g. `ceph-disk -v prepare /dev/sda1`) works 
 because then the journal is on the same device as the OSD data, so the 
 execution is different to get them to a working state.

 I suspect that there are left over partitions in /dev/sdaa that are 
 causing this to fail, I *think* that we could pass the `-t` flag with 
 the filesystem and prevent this.

 Just to be sure, could you list all the partitions on /dev/sdaa (if 
 /dev/sdaa is the whole device)?

 Something like:

 sudo parted /dev/sdaa print

 Or, if you prefer, any other way that could tell us what all the 
 partitions on that device are.





 After

 ceph-deploy disk zap ceph001:sdaa ceph001:sda1



 root@ceph001:~# parted /dev/sdaa print

 Model: ATA ST3000DM001-1CH1 (scsi)

 Disk /dev/sdaa: 3001GB

 Sector size (logical/physical): 512B/4096B

 Partition Table: gpt



 Number  Start  End  Size  File system  Name  Flags



 root@ceph001:~# parted /dev/sda1 print

 Model: Unknown (unknown)

 Disk /dev/sda1: 10.7GB

 Sector size (logical/physical): 512B/512B

 Partition Table: gpt

 So that is after running `disk zap`. What does it say after using 
 ceph-deploy and failing?



 Number  Start  End  Size  File system  Name  Flags



 After ceph-disk -v prepare /dev/sdaa /dev/sda1:



 root@ceph001:~# parted /dev/sdaa print

 Model: ATA ST3000DM001-1CH1 (scsi)

 Disk /dev/sdaa: 3001GB

 Sector size (logical/physical): 512B/4096B

 Partition Table: gpt



 Number  Start   End SizeFile system  Name   Flags

 1  1049kB  3001GB  3001GB  xfs  ceph data



 And



 root@ceph001:~# parted /dev/sda1 print

 Model: Unknown (unknown)

 Disk /dev/sda1: 10.7GB

 Sector size (logical/physical): 512B/512B

 Partition Table: gpt



 Number  Start  End  Size  File system  Name  Flags



 With the same errors:



 root@ceph001:~# ceph-disk -v prepare /dev/sdaa /dev/sda1

 DEBUG:ceph-disk:Journal /dev/sda1 is a partition

 WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the 
 same device as the osd data

 DEBUG:ceph-disk:Creating osd partition on /dev/sdaa

 Information: Moved requested sector from 34 to 2048 in

 order to align on 2048-sector boundaries.

 The operation has completed successfully.

 DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1

 meta-data=/dev/sdaa1 isize=2048   agcount=32, agsize=22892700
 blks

  =   sectsz=512   attr=2, projid32bit=0

 data =   bsize=4096   blocks=732566385, imaxpct=5

  =   sunit=0  swidth=0 blks

 naming   =version 2  bsize=4096   ascii-ci=0

 log  =internal log   bsize=4096   blocks=357698, version=2

  =   sectsz=512   sunit=0 blks, lazy-count=1

 realtime =none   extsz=4096   blocks=0, rtextents=0

 DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.UkJbwx 
 with options noatime

 mount: /dev/sdaa1: more filesystems detected. This should not happen,

    use -t <type> to explicitly specify the filesystem type or

use wipefs(8) to clean up the device.



 mount: you must specify the filesystem type

 ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 
 'noatime', '--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.UkJbwx']' 
 returned non-zero exit status 32




Re: [ceph-users] ceph-deploy and journal on separate disk

2013-08-16 Thread Pavel Timoschenkov
So that is after running `disk zap`. What does it say after using 
ceph-deploy and failing?

After ceph-disk -v prepare /dev/sdaa /dev/sda1:

root@ceph001:~# parted /dev/sdaa print
Model: ATA ST3000DM001-1CH1 (scsi)
Disk /dev/sdaa: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End SizeFile system  Name   Flags
1  1049kB  3001GB  3001GB  xfs  ceph data

And

root@ceph001:~# parted /dev/sda1 print
Model: Unknown (unknown)
Disk /dev/sda1: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags

With the same errors:

root@ceph001:~# ceph-disk -v prepare /dev/sdaa /dev/sda1
DEBUG:ceph-disk:Journal /dev/sda1 is a partition
WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same 
device as the osd data
DEBUG:ceph-disk:Creating osd partition on /dev/sdaa
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1
meta-data=/dev/sdaa1 isize=2048   agcount=32, agsize=22892700 blks
 =   sectsz=512   attr=2, projid32bit=0
data =   bsize=4096   blocks=732566385, imaxpct=5
 =   sunit=0  swidth=0 blks
naming   =version 2  bsize=4096   ascii-ci=0
log  =internal log   bsize=4096   blocks=357698, version=2
 =   sectsz=512   sunit=0 blks, lazy-count=1
realtime =none   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.UkJbwx with 
options noatime
mount: /dev/sdaa1: more filesystems detected. This should not happen,
   use -t <type> to explicitly specify the filesystem type or
   use wipefs(8) to clean up the device.

mount: you must specify the filesystem type
ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 'noatime', 
'--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.UkJbwx']' returned non-zero exit 
status 32
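
The two workarounds that mount error itself suggests would look roughly like this (a sketch; the mount point and the offset are placeholders, to be taken from whatever wipefs reports):

wipefs /dev/sdaa1                               # list the competing signatures
wipefs -o <offset> /dev/sdaa1                   # erase the stale one at the reported offset
mount -t xfs -o noatime /dev/sdaa1 /mnt/test    # or name the filesystem explicitly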


From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Wednesday, August 14, 2013 7:44 PM
To: Pavel Timoschenkov
Cc: Samuel Just; ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-deploy and journal on separate disk



On Wed, Aug 14, 2013 at 10:47 AM, Pavel Timoschenkov 
pa...@bayonetteas.onmicrosoft.com 
wrote:


From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Wednesday, August 14, 2013 5:41 PM
To: Pavel Timoschenkov
Cc: Samuel Just; ceph-us...@ceph.com

Subject: Re: [ceph-users] ceph-deploy and journal on separate disk



On Wed, Aug 14, 2013 at 7:41 AM, Pavel Timoschenkov 
pa...@bayonetteas.onmicrosoft.com 
wrote:
It looks like at some point the filesystem is not passed to the options. 
Would you mind running the `ceph-disk-prepare` command again but with
the --verbose flag?
I think that from the output above (correct it if I am mistaken) that would 
be something like:

ceph-disk-prepare --verbose -- /dev/sdaa /dev/sda1

Hi.
If I'm running:
ceph-deploy disk zap ceph001:sdaa ceph001:sda1
and
ceph-disk -v prepare /dev/sdaa /dev/sda1, get the same errors:
==
root@ceph001:~# ceph-disk -v prepare /dev/sdaa /dev/sda1
DEBUG:ceph-disk:Journal /dev/sda1 is a partition
WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same 
device as the osd data
DEBUG:ceph-disk:Creating osd partition on /dev/sdaa
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1
meta-data=/dev/sdaa1 isize=2048   agcount=32, agsize=22892700 blks
 =   sectsz=512   attr=2, projid32bit=0
data =   bsize=4096   blocks=732566385, imaxpct=5
 =   sunit=0  swidth=0 blks
naming   =version 2  bsize=4096   ascii-ci=0
log  =internal log   bsize=4096   blocks=357698, version=2
 =   sectsz=512   sunit=0 blks, lazy-count=1
realtime =none   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.EGTIq2 with 
options noatime
mount: /dev/sdaa1: more filesystems detected. This should not happen,
   use -t <type> to explicitly specify the filesystem type or
   use wipefs(8) to clean up the device.

mount: you must specify the filesystem type
ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 'noatime', 
'--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.EGTIq2']' returned non-zero exit 
status 32

If executed this command separately for both disks - looks like ok:

For sdaa:

root@ceph001:~# ceph-disk -v prepare /dev

Re: [ceph-users] ceph-deploy and journal on separate disk

2013-08-14 Thread Pavel Timoschenkov
It looks like at some point the filesystem is not passed to the options. 
Would you mind running the `ceph-disk-prepare` command again but with
the --verbose flag?
I think that from the output above (correct it if I am mistaken) that would 
be something like:

ceph-disk-prepare --verbose -- /dev/sdaa /dev/sda1

Hi.
If I'm running:
ceph-deploy disk zap ceph001:sdaa ceph001:sda1
and
ceph-disk -v prepare /dev/sdaa /dev/sda1, get the same errors:
==
root@ceph001:~# ceph-disk -v prepare /dev/sdaa /dev/sda1
DEBUG:ceph-disk:Journal /dev/sda1 is a partition
WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same 
device as the osd data
DEBUG:ceph-disk:Creating osd partition on /dev/sdaa
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1
meta-data=/dev/sdaa1 isize=2048   agcount=32, agsize=22892700 blks
 =   sectsz=512   attr=2, projid32bit=0
data =   bsize=4096   blocks=732566385, imaxpct=5
 =   sunit=0  swidth=0 blks
naming   =version 2  bsize=4096   ascii-ci=0
log  =internal log   bsize=4096   blocks=357698, version=2
 =   sectsz=512   sunit=0 blks, lazy-count=1
realtime =none   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.EGTIq2 with 
options noatime
mount: /dev/sdaa1: more filesystems detected. This should not happen,
   use -t <type> to explicitly specify the filesystem type or
   use wipefs(8) to clean up the device.

mount: you must specify the filesystem type
ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 'noatime', 
'--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.EGTIq2']' returned non-zero exit 
status 32

If executed this command separately for both disks - looks like ok:

For sdaa:

root@ceph001:~# ceph-disk -v prepare /dev/sdaa
INFO:ceph-disk:Will colocate journal with data on /dev/sdaa
DEBUG:ceph-disk:Creating journal partition num 2 size 1024 on /dev/sdaa
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Journal is GPT partition 
/dev/disk/by-partuuid/d1389210-6e02-4460-9cb2-0e31e4b0924f
DEBUG:ceph-disk:Creating osd partition on /dev/sdaa
Information: Moved requested sector from 2097153 to 2099200 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1
meta-data=/dev/sdaa1 isize=2048   agcount=32, agsize=22884508 blks
 =   sectsz=512   attr=2, projid32bit=0
data =   bsize=4096   blocks=732304241, imaxpct=5
 =   sunit=0  swidth=0 blks
naming   =version 2  bsize=4096   ascii-ci=0
log  =internal log   bsize=4096   blocks=357570, version=2
 =   sectsz=512   sunit=0 blks, lazy-count=1
realtime =none   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.K3q9v5 with 
options noatime
DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.K3q9v5
DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.K3q9v5/journal -> /dev/disk/by-partuuid/d1389210-6e02-4460-9cb2-0e31e4b0924f
DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.K3q9v5
The operation has completed successfully.
DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sdaa

For sda1:

root@ceph001:~# ceph-disk -v prepare /dev/sda1
DEBUG:ceph-disk:OSD data device /dev/sda1 is a partition
DEBUG:ceph-disk:Creating xfs fs on /dev/sda1
meta-data=/dev/sda1  isize=2048   agcount=4, agsize=655360 blks
 =   sectsz=512   attr=2, projid32bit=0
data =   bsize=4096   blocks=2621440, imaxpct=25
 =   sunit=0  swidth=0 blks
naming   =version 2  bsize=4096   ascii-ci=0
log  =internal log   bsize=4096   blocks=2560, version=2
 =   sectsz=512   sunit=0 blks, lazy-count=1
realtime =none   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sda1 on /var/lib/ceph/tmp/mnt.G30zPD with options 
noatime
DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.G30zPD
DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.G30zPD
DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sda1


Re: [ceph-users] ceph-deploy and journal on separate disk

2013-08-14 Thread Pavel Timoschenkov


From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Wednesday, August 14, 2013 5:41 PM
To: Pavel Timoschenkov
Cc: Samuel Just; ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-deploy and journal on separate disk



On Wed, Aug 14, 2013 at 7:41 AM, Pavel Timoschenkov 
pa...@bayonetteas.onmicrosoft.com 
wrote:
It looks like at some point the filesystem is not passed to the options. 
Would you mind running the `ceph-disk-prepare` command again but with
the --verbose flag?
I think that from the output above (correct it if I am mistaken) that would 
be something like:

ceph-disk-prepare --verbose -- /dev/sdaa /dev/sda1

Hi.
If I'm running:
ceph-deploy disk zap ceph001:sdaa ceph001:sda1
and
ceph-disk -v prepare /dev/sdaa /dev/sda1, get the same errors:
==
root@ceph001:~# ceph-disk -v prepare /dev/sdaa /dev/sda1
DEBUG:ceph-disk:Journal /dev/sda1 is a partition
WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same 
device as the osd data
DEBUG:ceph-disk:Creating osd partition on /dev/sdaa
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1
meta-data=/dev/sdaa1 isize=2048   agcount=32, agsize=22892700 blks
 =   sectsz=512   attr=2, projid32bit=0
data =   bsize=4096   blocks=732566385, imaxpct=5
 =   sunit=0  swidth=0 blks
naming   =version 2  bsize=4096   ascii-ci=0
log  =internal log   bsize=4096   blocks=357698, version=2
 =   sectsz=512   sunit=0 blks, lazy-count=1
realtime =none   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.EGTIq2 with 
options noatime
mount: /dev/sdaa1: more filesystems detected. This should not happen,
   use -t <type> to explicitly specify the filesystem type or
   use wipefs(8) to clean up the device.

mount: you must specify the filesystem type
ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 'noatime', 
'--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.EGTIq2']' returned non-zero exit 
status 32

If executed this command separately for both disks - looks like ok:

For sdaa:

root@ceph001:~# ceph-disk -v prepare /dev/sdaa
INFO:ceph-disk:Will colocate journal with data on /dev/sdaa
DEBUG:ceph-disk:Creating journal partition num 2 size 1024 on /dev/sdaa
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Journal is GPT partition 
/dev/disk/by-partuuid/d1389210-6e02-4460-9cb2-0e31e4b0924f
DEBUG:ceph-disk:Creating osd partition on /dev/sdaa
Information: Moved requested sector from 2097153 to 2099200 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1
meta-data=/dev/sdaa1 isize=2048   agcount=32, agsize=22884508 blks
 =   sectsz=512   attr=2, projid32bit=0
data =   bsize=4096   blocks=732304241, imaxpct=5
 =   sunit=0  swidth=0 blks
naming   =version 2  bsize=4096   ascii-ci=0
log  =internal log   bsize=4096   blocks=357570, version=2
 =   sectsz=512   sunit=0 blks, lazy-count=1
realtime =none   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.K3q9v5 with 
options noatime
DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.K3q9v5
DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.K3q9v5/journal -> /dev/disk/by-partuuid/d1389210-6e02-4460-9cb2-0e31e4b0924f
DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.K3q9v5
The operation has completed successfully.
DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sdaa

For sda1:

root@ceph001:~# ceph-disk -v prepare /dev/sda1
DEBUG:ceph-disk:OSD data device /dev/sda1 is a partition
DEBUG:ceph-disk:Creating xfs fs on /dev/sda1
meta-data=/dev/sda1  isize=2048   agcount=4, agsize=655360 blks
 =   sectsz=512   attr=2, projid32bit=0
data =   bsize=4096   blocks=2621440, imaxpct=25
 =   sunit=0  swidth=0 blks
naming   =version 2  bsize=4096   ascii-ci=0
log  =internal log   bsize=4096   blocks=2560, version=2
 =   sectsz=512   sunit=0 blks, lazy-count=1
realtime =none   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sda1 on /var/lib/ceph/tmp/mnt.G30zPD with options 
noatime
DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.G30zPD
DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.G30zPD

Re: [ceph-users] ceph-deploy and journal on separate disk

2013-08-14 Thread Pavel Timoschenkov
It looks like at some point the filesystem is not passed to the options. 
Would you mind running the `ceph-disk-prepare` command again but with
the --verbose flag?
I think that from the output above (correct it if I am mistaken) that would 
be something like:

ceph-disk-prepare --verbose -- /dev/sdaa /dev/sda1

Hi.
If I'm running:
ceph-deploy disk zap ceph001:sdaa ceph001:sda1
and
ceph-disk -v prepare /dev/sdaa /dev/sda1, get the same errors:
==
root@ceph001:~# ceph-disk -v prepare /dev/sdaa /dev/sda1
DEBUG:ceph-disk:Journal /dev/sda1 is a partition
WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same 
device as the osd data
DEBUG:ceph-disk:Creating osd partition on /dev/sdaa
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1
meta-data=/dev/sdaa1 isize=2048   agcount=32, agsize=22892700 blks
 =   sectsz=512   attr=2, projid32bit=0
data =   bsize=4096   blocks=732566385, imaxpct=5
 =   sunit=0  swidth=0 blks
naming   =version 2  bsize=4096   ascii-ci=0
log  =internal log   bsize=4096   blocks=357698, version=2
 =   sectsz=512   sunit=0 blks, lazy-count=1
realtime =none   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.EGTIq2 with 
options noatime
mount: /dev/sdaa1: more filesystems detected. This should not happen,
   use -t type to explicitly specify the filesystem type or
   use wipefs(8) to clean up the device.

mount: you must specify the filesystem type
ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 'noatime', 
'--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.EGTIq2']' returned non-zero exit 
status 32
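
The mount failure itself points at the cause: mount (via libblkid) is detecting 
more than one filesystem signature inside the freshly created /dev/sdaa1, so it 
refuses to guess the type. A minimal diagnostic sketch, assuming the same device 
names as above, run on the OSD host while the partition from the failed run still 
exists and is unmounted:

# list every filesystem/RAID signature present on the new data partition
wipefs /dev/sdaa1
# second opinion from a low-level probe; with several signatures it cannot
# settle on a single filesystem type
blkid -p /dev/sdaa1
# erase all stale signatures so the next prepare starts from a clean partition
wipefs -a /dev/sdaa1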

If I run this command separately for each disk, it looks OK:

For sdaa:

root@ceph001:~# ceph-disk -v prepare /dev/sdaa
INFO:ceph-disk:Will colocate journal with data on /dev/sdaa
DEBUG:ceph-disk:Creating journal partition num 2 size 1024 on /dev/sdaa
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Journal is GPT partition 
/dev/disk/by-partuuid/d1389210-6e02-4460-9cb2-0e31e4b0924f
DEBUG:ceph-disk:Creating osd partition on /dev/sdaa
Information: Moved requested sector from 2097153 to 2099200 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1
meta-data=/dev/sdaa1 isize=2048   agcount=32, agsize=22884508 blks
 =   sectsz=512   attr=2, projid32bit=0
data =   bsize=4096   blocks=732304241, imaxpct=5
 =   sunit=0  swidth=0 blks
naming   =version 2  bsize=4096   ascii-ci=0
log  =internal log   bsize=4096   blocks=357570, version=2
 =   sectsz=512   sunit=0 blks, lazy-count=1
realtime =none   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.K3q9v5 with 
options noatime
DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.K3q9v5
DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.K3q9v5/journal - 
/dev/disk/by-partuuid/d1389210-6e02-4460-9cb2-0e31e4b0924f
DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.K3q9v5
The operation has completed successfully.
DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sdaa

For sda1:

root@ceph001:~# ceph-disk -v prepare /dev/sda1
DEBUG:ceph-disk:OSD data device /dev/sda1 is a partition
DEBUG:ceph-disk:Creating xfs fs on /dev/sda1
meta-data=/dev/sda1  isize=2048   agcount=4, agsize=655360 blks
 =   sectsz=512   attr=2, projid32bit=0
data =   bsize=4096   blocks=2621440, imaxpct=25
 =   sunit=0  swidth=0 blks
naming   =version 2  bsize=4096   ascii-ci=0
log  =internal log   bsize=4096   blocks=2560, version=2
 =   sectsz=512   sunit=0 blks, lazy-count=1
realtime =none   extsz=4096   blocks=0, rtextents=0
DEBUG:ceph-disk:Mounting /dev/sda1 on /var/lib/ceph/tmp/mnt.G30zPD with options 
noatime
DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.G30zPD
DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.G30zPD
DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sda1
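
Both single-device runs complete, so one way to compare their result with the 
failing combined run is to look at what each prepare left on disk. A sketch with 
the same devices (sgdisk and blkid assumed to be installed on the OSD host):

# GPT layout ceph-disk created on the data disk (journal + data partitions)
sgdisk --print /dev/sdaa
# filesystem probe of the two partitions that were just formatted
blkid -p /dev/sdaa1
blkid -p /dev/sda1

Note that preparing /dev/sda1 on its own treats it as an OSD data partition 
("OSD data device /dev/sda1 is a partition" above) and puts xfs on it, which is 
not what a journal partition needs; it only demonstrates that mounting that 
device works.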

From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Tuesday, August 13, 2013 11:14 PM
To: Pavel Timoschenkov
Cc: Samuel Just; ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-deploy and journal on separate disk



On Tue, Aug 13, 2013 at 3:21 AM, Pavel Timoschenkov 
pa

Re: [ceph-users] ceph-deploy and journal on separate disk

2013-08-13 Thread Pavel Timoschenkov
Hi.
Yes, I zapped all the disks beforehand.

More about my situation:
sdaa - one of the data disks: 3 TB with a GPT partition table.
sda - an SSD drive with manually created 10 GB partitions for the journals, using 
an MBR partition table.
===
fdisk -l /dev/sda

Disk /dev/sda: 480.1 GB, 480103981056 bytes
255 heads, 63 sectors/track, 58369 cylinders, total 937703088 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00033624

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1             2048    19531775     9764864   83  Linux
/dev/sda2         19531776    39061503     9764864   83  Linux
/dev/sda3         39061504    58593279     9765888   83  Linux
/dev/sda4         78125056    97656831     9765888   83  Linux

===
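
One note on this layout before the two test runs below: ceph-disk can carve out 
its own GPT journal partition when it is handed a whole device (as in the 
colocated run elsewhere in this thread), so an alternative that avoids the 
hand-made MBR journal partitions entirely - only a sketch, not something tried 
here, and it destroys the four existing partitions on the SSD - would be:

# WARNING: zapping sda wipes all existing journal partitions on the SSD
ceph-deploy disk zap ceph001:sdaa ceph001:sda
# the journal goes to a new partition that ceph-disk creates on /dev/sda
ceph-deploy osd prepare ceph001:sdaa:sda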

If I execute ceph-deploy osd prepare without the journal argument, it works:


ceph@ceph-admin:~$ ceph-deploy disk zap ceph001:sdaa ceph001:sda1
[ceph_deploy.osd][DEBUG ] zapping /dev/sdaa on ceph001
[ceph_deploy.osd][DEBUG ] zapping /dev/sda1 on ceph001

ceph@ceph-admin:~$ ceph-deploy osd prepare ceph001:sdaa
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph001:/dev/sdaa:
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph001
[ceph_deploy.osd][DEBUG ] Host ceph001 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Preparing host ceph001 disk /dev/sdaa journal None 
activate False

root@ceph001:~# gdisk -l /dev/sdaa
GPT fdisk (gdisk) version 0.8.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdaa: 5860533168 sectors, 2.7 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 575ACF17-756D-47EC-828B-2E0A0B8ED757
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 5860533134
Partitions will be aligned on 2048-sector boundaries
Total free space is 4061 sectors (2.0 MiB)

Number  Start (sector)    End (sector)   Size         Code  Name
   1           2099200      5860533134   2.7 TiB            ceph data
   2              2048         2097152   1023.0 MiB         ceph journal
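
As a side note, prepare alone leaves the OSD down (activate False in the output 
above), so bringing it into the cluster is a separate step. A sketch using the 
data partition ceph-disk just created:

# from the admin node
ceph-deploy osd activate ceph001:/dev/sdaa1
# or directly on the OSD host
ceph-disk -v activate /dev/sdaa1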

The problems start when I try to create the journal on a separate drive:

ceph@ceph-admin:~$ ceph-deploy disk zap ceph001:sdaa ceph001:sda1
[ceph_deploy.osd][DEBUG ] zapping /dev/sdaa on ceph001
[ceph_deploy.osd][DEBUG ] zapping /dev/sda1 on ceph001

ceph@ceph-admin:~$ ceph-deploy osd prepare ceph001:sdaa:sda1
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks 
ceph001:/dev/sdaa:/dev/sda1
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph001
[ceph_deploy.osd][DEBUG ] Host ceph001 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Preparing host ceph001 disk /dev/sdaa journal 
/dev/sda1 activate False
[ceph_deploy.osd][ERROR ] ceph-disk-prepare -- /dev/sdaa /dev/sda1 returned 1
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
meta-data=/dev/sdaa1 isize=2048   agcount=32, agsize=22892700 blks
 =   sectsz=512   attr=2, projid32bit=0
data =   bsize=4096   blocks=732566385, imaxpct=5
 =   sunit=0  swidth=0 blks
naming   =version 2  bsize=4096   ascii-ci=0
log  =internal log   bsize=4096   blocks=357698, version=2
 =   sectsz=512   sunit=0 blks, lazy-count=1
realtime =none   extsz=4096   blocks=0, rtextents=0

WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same 
device as the osd data
mount: /dev/sdaa1: more filesystems detected. This should not happen,
   use -t type to explicitly specify the filesystem type or
   use wipefs(8) to clean up the device.

mount: you must specify the filesystem type
ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 'noatime', 
'--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.fZQxiz']' returned non-zero exit 
status 32

ceph-deploy: Failed to create 1 OSDs
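
The failure is the same "more filesystems detected" mount error seen elsewhere 
in this thread, i.e. stale filesystem signatures inside the data partition that 
survive the zap. A possible retry sequence - just a sketch, assuming the partition 
created by the failed run is still present as /dev/sdaa1 and is not mounted:

# clear every leftover signature inside the data partition first
wipefs -a /dev/sdaa1
# then start over: zap both devices and prepare again with the separate journal
ceph-deploy disk zap ceph001:sdaa ceph001:sda1
ceph-deploy osd prepare ceph001:sdaa:sda1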

-Original Message-
From: Samuel Just [mailto:sam.j...@inktank.com] 
Sent: Monday, August 12, 2013 11:39 PM
To: Pavel Timoschenkov
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-deploy and journal on separate disk

Did you try using ceph-deploy disk zap ceph001:sdaa first?
-Sam

On Mon, Aug 12, 2013 at 6:21 AM, Pavel Timoschenkov 
pa...@bayonetteas.onmicrosoft.com wrote:
 Hi.

 I have some problems creating a journal on a separate disk, using the 
 ceph-deploy osd prepare command.

 When I try to execute the following command:

 ceph-deploy osd prepare ceph001:sdaa:sda1

 where:

 sdaa - the disk for ceph data

 sda1 - a partition on an SSD drive for the journal

 I get the following errors:

 ==
 ==

 ceph@ceph-admin:~$ ceph-deploy osd prepare ceph001:sdaa:sda1

 ceph-disk

[ceph-users] ceph-deploy and journal on separate disk

2013-08-12 Thread Pavel Timoschenkov
Hi.
I have some problems creating a journal on a separate disk, using the ceph-deploy 
osd prepare command.
When I try to execute the following command:
ceph-deploy osd prepare ceph001:sdaa:sda1
where:
sdaa - the disk for ceph data
sda1 - a partition on an SSD drive for the journal
I get the following errors:

ceph@ceph-admin:~$ ceph-deploy osd prepare ceph001:sdaa:sda1
ceph-disk-prepare -- /dev/sdaa /dev/sda1 returned 1
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
The operation has completed successfully.
meta-data=/dev/sdaa1 isize=2048   agcount=32, agsize=22892700 blks
 =   sectsz=512   attr=2, projid32bit=0
data =   bsize=4096   blocks=732566385, imaxpct=5
 =   sunit=0  swidth=0 blks
naming   =version 2  bsize=4096   ascii-ci=0
log  =internal log   bsize=4096   blocks=357698, version=2
 =   sectsz=512   sunit=0 blks, lazy-count=1
realtime =none   extsz=4096   blocks=0, rtextents=0

WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same 
device as the osd data
mount: /dev/sdaa1: more filesystems detected. This should not happen,
   use -t type to explicitly specify the filesystem type or
   use wipefs(8) to clean up the device.

mount: you must specify the filesystem type
ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 'noatime', 
'--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.ek6mog']' returned non-zero exit 
status 32

Has anyone had a similar problem?
Thanks for the help.