Re: [ceph-users] different omap format in one cluster (.sst + .ldb) - newly installed OSD node doesn't start any OSD

2015-07-23 Thread Udo Lembke
Hi,
I use ceph 0.94 from the wheezy repo (deb http://eu.ceph.com/debian-hammer wheezy main) inside jessie.
0.94.1 installs without trouble, but an upgrade to 0.94.2 doesn't complete cleanly:
dpkg -l | grep ceph
dpkg -l | grep ceph
ii  ceph           0.94.1-1~bpo70+1  amd64  distributed storage and file system
ii  ceph-common    0.94.2-1~bpo70+1  amd64  common utilities to mount and interact with a ceph storage cluster
ii  ceph-fs-common 0.94.2-1~bpo70+1  amd64  common utilities to mount and interact with a ceph file system
ii  ceph-fuse      0.94.2-1~bpo70+1  amd64  FUSE-based client for the Ceph distributed file system
ii  ceph-mds       0.94.2-1~bpo70+1  amd64  metadata server for the ceph distributed file system
ii  libcephfs1     0.94.2-1~bpo70+1  amd64  Ceph distributed file system client library
ii  python-cephfs  0.94.2-1~bpo70+1  amd64  Python libraries for the Ceph libcephfs library
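
(For completeness: only the ceph package itself was held back at 0.94.1. Explicitly requesting the backport version might finish the upgrade, though I haven't verified that; the version string is simply the one from the listing above:

apt-get install ceph=0.94.2-1~bpo70+1
)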

This is the reason why I switched back to wheezy (with a clean 0.94.2), but then all OSDs on that node failed to start.
Switching back to the jessie system disk doesn't solve the problem either, because only 3 OSDs started again...


My conclusion is: if one of my (partly broken) jessie OSD nodes dies now (e.g. a failed system SSD), I need less than an
hour for a new system (wheezy), around two hours to reinitialise all OSDs (format anew, install ceph), and around two days
to refill the whole node.

Udo

On 23.07.2015 13:21, Haomai Wang wrote:
 Did you use an upstream ceph version previously? Or did you shut down
 the running ceph-osd processes while upgrading the OSDs?
 
 How many OSDs hit this problem?
 
 This assert failure means that the OSD detects an upgraded pg meta object
 but fails to read the meta keys from that object (or one key is missing).
 
 On Thu, Jul 23, 2015 at 7:03 PM, Udo Lembke ulem...@polarzone.de wrote:
 On 21.07.2015 12:06, Udo Lembke wrote:
 Hi all,
 ...

 Normally I would say: if one OSD node dies, I simply reinstall the OS and ceph and I'm back again...
 but this looks bad for me.
 Unfortunately the system also doesn't start 9 of the OSDs after I switched back to the old system disk...
 (only three of the big OSDs are running well)

 What is the best solution for that? Empty one node (crush weight 0), freshly reinstall OS/ceph, and reinitialise all OSDs?
 This will take a long, long time, because we use 173 TB in this cluster...



 Hi,
 answering myself, in case anybody has similar issues and finds this posting.

 Emptying the whole node takes too long.
 I used the puppet wheezy system and had to recreate all OSDs (in this case I need to zero the first
 blocks of the journal before creating the OSD again).


 Udo



Re: [ceph-users] different omap format in one cluster (.sst + .ldb) - newly installed OSD node doesn't start any OSD

2015-07-23 Thread Haomai Wang
Did you use an upstream ceph version previously? Or did you shut down
the running ceph-osd processes while upgrading the OSDs?

How many OSDs hit this problem?

This assert failure means that the OSD detects an upgraded pg meta object
but fails to read the meta keys from that object (or one key is missing).
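
To see how many OSDs on a node are affected, grepping the OSD logs for the assert should be enough (assuming the default log location):

grep -l 'FAILED assert(values.size() == 2)' /var/log/ceph/ceph-osd.*.log | wc -l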

On Thu, Jul 23, 2015 at 7:03 PM, Udo Lembke ulem...@polarzone.de wrote:
 On 21.07.2015 12:06, Udo Lembke wrote:
 Hi all,
 ...

 Normally I would say: if one OSD node dies, I simply reinstall the OS and ceph and I'm back again...
 but this looks bad for me.
 Unfortunately the system also doesn't start 9 of the OSDs after I switched back to the old system disk...
 (only three of the big OSDs are running well)

 What is the best solution for that? Empty one node (crush weight 0), freshly reinstall OS/ceph, and reinitialise all OSDs?
 This will take a long, long time, because we use 173 TB in this cluster...



 Hi,
 answering myself, in case anybody has similar issues and finds this posting.

 Emptying the whole node takes too long.
 I used the puppet wheezy system and had to recreate all OSDs (in this case I need to zero the first
 blocks of the journal before creating the OSD again).


 Udo



-- 
Best Regards,

Wheat


Re: [ceph-users] different omap format in one cluster (.sst + .ldb) - newly installed OSD node doesn't start any OSD

2015-07-23 Thread Udo Lembke
On 21.07.2015 12:06, Udo Lembke wrote:
 Hi all,
 ...
 
 Normally I would say: if one OSD node dies, I simply reinstall the OS and ceph and I'm back again...
 but this looks bad for me.
 Unfortunately the system also doesn't start 9 of the OSDs after I switched back to the old system disk...
 (only three of the big OSDs are running well)
 
 What is the best solution for that? Empty one node (crush weight 0), freshly reinstall OS/ceph, and reinitialise all OSDs?
 This will take a long, long time, because we use 173 TB in this cluster...
 
 

Hi,
answering myself, in case anybody has similar issues and finds this posting.

Emptying the whole node takes too long.
I used the puppet wheezy system and had to recreate all OSDs (in this case I need to zero the first
blocks of the journal before creating the OSD again; see the sketch below).
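
For reference, a rough sketch of what I mean by zeroing the first blocks; /dev/sdX2 is only a placeholder for the journal partition, so double-check the device before running it:

dd if=/dev/zero of=/dev/sdX2 bs=1M count=100 oflag=direct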


Udo


[ceph-users] different omap format in one cluster (.sst + .ldb) - newly installed OSD node doesn't start any OSD

2015-07-21 Thread Udo Lembke
Hi all,
we had a ceph cluster with 7 OSD nodes (Debian Jessie, because of the patched tcmalloc, with ceph 0.94) which we expanded
with one further node.
For this node we use puppet with Debian 7.8, because ceph 0.94.2 doesn't install cleanly on Jessie (the upgrade to 0.94.1
works on the other nodes, but 0.94.2 does not look clean because the ceph package itself is still at 0.94.1).
The ceph.conf is the same cluster-wide, and the OSDs were initialised with ceph-deploy on all nodes (with only a few
exceptions).
All OSDs use ext4 (switched from xfs while the cluster ran ceph 0.80.7), and filestore xattr use omap = true is set
in ceph.conf.
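
For reference, the relevant ceph.conf entry looks roughly like this (a minimal sketch, shown here under [osd]):

[osd]
filestore xattr use omap = true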

I'm wondering why the omap format differs between the nodes.
The new wheezy node uses .sst files:
ls -lsa /var/lib/ceph/osd/ceph-92/current/omap/
...
2084 -rw-r--r--   1 root root 2131113 Jul 20 17:45 98.sst
2084 -rw-r--r--   1 root root 2131913 Jul 20 17:45 99.sst
2084 -rw-r--r--   1 root root 2130623 Jul 20 17:45 000111.sst
...

The jessie nodes, on the other hand, use .ldb files:
ls -lsa /var/lib/ceph/osd/ceph-1/current/omap/
...

2084 -rw-r--r--   1 root root 2130468 Jul 20 22:33 80.ldb
2084 -rw-r--r--   1 root root 2130827 Jul 20 22:33 81.ldb
2084 -rw-r--r--   1 root root 2130171 Jul 20 22:33 88.ldb
...

On some OSDs I found old .sst files which date back to the wheezy/ceph 0.87 days:
ls -lsa /var/lib/ceph/osd/ceph-23/current/omap/*.sst
2096 -rw-r--r-- 1 root root 2142558 Apr  3 15:59 /var/lib/ceph/osd/ceph-23/current/omap/016722.sst
2092 -rw-r--r-- 1 root root 2141968 Apr  3 15:59 /var/lib/ceph/osd/ceph-23/current/omap/016723.sst
2092 -rw-r--r-- 1 root root 2141679 Apr  3 15:59 /var/lib/ceph/osd/ceph-23/current/omap/016724.sst
2096 -rw-r--r-- 1 root root 2142376 Apr  3 15:59 /var/lib/ceph/osd/ceph-23/current/omap/016725.sst
2096 -rw-r--r-- 1 root root 2142227 Apr  3 15:59 /var/lib/ceph/osd/ceph-23/current/omap/016726.sst
2092 -rw-r--r-- 1 root root 2141369 Apr 20 21:23 /var/lib/ceph/osd/ceph-23/current/omap/019470.sst
But there are many more .ldb files:
ls -lsa /var/lib/ceph/osd/ceph-23/current/omap/*.ldb | wc -l
128
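
To compare all OSDs on the node at once, a small loop over the omap directories helps (plain shell, nothing ceph-specific):

for d in /var/lib/ceph/osd/ceph-*/current/omap; do
  echo "$d: $(ls $d/*.sst 2>/dev/null | wc -l) sst, $(ls $d/*.ldb 2>/dev/null | wc -l) ldb"
done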

The config shows leveldb as the omap backend for the OSDs on both kinds of nodes (the old ones and the new one with the
.sst files):
ceph --admin-daemon /var/run/ceph/ceph-osd.92.asok config show | grep -i omap
filestore_omap_backend: leveldb,
filestore_debug_omap_check: false,
filestore_omap_header_cache_size: 1024,
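
The same check can be run against every OSD on a node via its admin socket (assuming the default socket path):

for sock in /var/run/ceph/ceph-osd.*.asok; do
  echo $sock
  ceph --admin-daemon $sock config show | grep -i omap_backend
done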


Normally I would not care about that, but I tried to switch the first OSD node to a clean puppet install and saw that none
of the OSDs started. The error message looks a little like http://tracker.ceph.com/issues/11429, but that should not
happen, because the puppet install has ceph 0.94.2.

Error message during start:
cat ceph-osd.0.log
2015-07-20 16:51:29.435081 7fb47b126840  0 ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3), process ceph-osd, pid 9803
2015-07-20 16:51:29.457776 7fb47b126840  0 filestore(/var/lib/ceph/osd/ceph-0) backend generic (magic 0xef53)
2015-07-20 16:51:29.460470 7fb47b126840  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: FIEMAP ioctl is supported and appears to work
2015-07-20 16:51:29.460479 7fb47b126840  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
2015-07-20 16:51:29.485120 7fb47b126840  0 genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: syscall(SYS_syncfs, fd) fully supported
2015-07-20 16:51:29.572670 7fb47b126840  0 filestore(/var/lib/ceph/osd/ceph-0) limited size xattrs
2015-07-20 16:51:29.889599 7fb47b126840  0 filestore(/var/lib/ceph/osd/ceph-0) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
2015-07-20 16:51:31.517179 7fb47b126840  0 cls cls/hello/cls_hello.cc:271: loading cls_hello
2015-07-20 16:51:31.552366 7fb47b126840  0 osd.0 151644 crush map has features 2303210029056, adjusting msgr requires for clients
2015-07-20 16:51:31.552375 7fb47b126840  0 osd.0 151644 crush map has features 2578087936000 was 8705, adjusting msgr requires for mons
2015-07-20 16:51:31.552382 7fb47b126840  0 osd.0 151644 crush map has features 2578087936000, adjusting msgr requires for osds
2015-07-20 16:51:31.552394 7fb47b126840  0 osd.0 151644 load_pgs
2015-07-20 16:51:42.682678 7fb47b126840 -1 osd/PG.cc: In function 'static epoch_t PG::peek_map_epoch(ObjectStore*, spg_t, ceph::bufferlist*)' thread 7fb47b126840 time 2015-07-20 16:51:42.680036
osd/PG.cc: 2825: FAILED assert(values.size() == 2)

 ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x72) [0xcdb572]
 2: (PG::peek_map_epoch(ObjectStore*, spg_t, ceph::buffer::list*)+0x7b2) [0x908742]
 3: (OSD::load_pgs()+0x734) [0x7e9064]
 4: (OSD::init()+0xdac) [0x7ed8fc]
 5: (main()+0x253e) [0x79069e]
 6: (__libc_start_main()+0xfd) [0x7fb47898fead]
 7: /usr/bin/ceph-osd() [0x7966b9]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
...

Normally I would