Re: [ceph-users] Updating OSD Parameters

2015-07-28 Thread Noah Mehl
Wido,

That’s awesome, I will look at this right now.

Thanks!

~Noah

> On Jul 28, 2015, at 11:02 AM, Wido den Hollander  wrote:
> 
> 
> 
> On 28-07-15 16:53, Noah Mehl wrote:
>> When we update the following in ceph.conf:
>> 
>> [osd]
>>  osd_recovery_max_active = 1
>>  osd_max_backfills = 1
>> 
>> How do we make sure it takes effect?  Do we have to restart all of the
>> Ceph OSDs and MONs?
> 
> On a client with client.admin keyring you execute:
> 
> ceph tell osd.* injectargs '--osd_recovery_max_active=1'
> 
> It will take effect immediately. Keep in mind though that PGs which are
> currently recovering are not affected.
> 
> So if an OSD is currently doing 10 backfills, it will keep doing that. It
> however won't accept any new backfills. So it slowly goes down to 9, 8,
> 7, etc, until you see only 1 backfill active.
> 
> Same goes for recovery.
> 
> Wido
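
For completeness, both settings can be injected in one go and then verified against a running OSD's admin socket; a minimal sketch, assuming the default socket path and osd.0 as an example (adjust the id/path for your hosts):

ceph tell osd.* injectargs '--osd_recovery_max_active 1 --osd_max_backfills 1'

# confirm what a running OSD is actually using
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep -E 'osd_recovery_max_active|osd_max_backfills'

# watch recovery/backfill activity wind down
ceph -w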
> 
>> 
>> Thanks!
>> 
>> ~Noah
>> 
>> 
>> 


[ceph-users] Updating OSD Parameters

2015-07-28 Thread Noah Mehl
When we update the following in ceph.conf:

[osd]
  osd_recovery_max_active = 1
  osd_max_backfills = 1

How do we make sure it takes effect?  Do we have to restart all of the Ceph
OSDs and MONs?

Thanks!

~Noah
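
For the archives: these are OSD options, so the MONs don't need anything, and a running OSD only re-reads ceph.conf when it starts. A quick way to see what a running OSD is actually using, assuming the default admin socket path and osd.0 as an example:

ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_max_backfills
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_recovery_max_active

If those values are stale, either restart that OSD or inject the new values at runtime as described in the reply elsewhere in this thread.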



Re: [ceph-users] OSD + Flashcache + udev + Partition uuid

2015-03-23 Thread Noah Mehl
Ah, I see now.  Has anyone used CacheCade
(http://www.lsi.com/products/raid-controllers/pages/megaraid-cachecade-pro-software.aspx)
from LSI for both the read and write cache to SSD?  I don’t know if you can
attach a CacheCade device to a JBOD, but if you could it would probably perform
really well….

I submit this because I really haven’t seen an open-source read and write SSD
cache that performs as well as ZFS, for instance.  And for ZFS, I don’t know
whether you can add an SSD cache to a single drive.

Thanks!

~Noah

On Mar 23, 2015, at 5:43 PM, Nick Fisk <n...@fisk.me.uk> wrote:

Just to add, the main reason it seems to make a difference is the metadata
updates which lie on the actual OSD. When you are doing small block writes,
these metadata updates seem to take almost as long as the actual data, so
although the writes are getting coalesced, the actual performance isn't much
better.

I did a blktrace a week ago, writing 500MB in 64k blocks to an OSD. You
could see that the actual data was flushed to the OSD in a couple of
seconds, another 30 seconds was spent writing out metadata and doing
EXT4/XFS journal writes.

Normally I have found flashcache to perform really poorly as it does
everything in 4kb blocks, meaning that when you start throwing larger blocks
at it, it can actually slow things down. However for the purpose of OSDs
you can set the IO cutoff size limit to around 16-32kb and then it should
only cache the metadata updates.

I'm hoping to do some benchmarks before and after flashcache on a SSD
Journaled OSD this week, so will post results when I have them.
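
For anyone who wants to try the same setup, a rough sketch of what Nick describes; device names are placeholders, and I'm assuming the "IO cutoff" he mentions maps to flashcache's skip_seq_thresh_kb sysctl (the exact sysctl path depends on the flashcache version and how the cache device gets named, so check sysctl -a | grep flashcache on your build):

# writeback flashcache in front of the OSD data disk (/dev/sdc = SSD partition, /dev/sdb = OSD disk)
flashcache_create -p back osd0cache /dev/sdc /dev/sdb

# skip caching of sequential IO above ~32KB so mostly just the small metadata writes land on the SSD
sysctl -w dev.flashcache.sdb+sdc.skip_seq_thresh_kb=32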

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Brendan Moloney
Sent: 23 March 2015 21:02
To: Noah Mehl
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] OSD + Flashcache + udev + Partition uuid

This would be in addition to having the journal on SSD.  The journal doesn't
help at all with small random reads and has a fairly limited ability to
coalesce writes.

In my case, the SSDs we are using for journals should have plenty of
bandwidth/IOPs/space to spare, so I want to see if I can get a little more out
of them.

-Brendan

____
From: Noah Mehl [noahm...@combinedpublic.com]
Sent: Monday, March 23, 2015 1:45 PM
To: Brendan Moloney
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] OSD + Flashcache + udev + Partition uuid

We deployed by just putting the journal on an SSD directly; why would this
not work for you?  Just wondering really :)

Thanks!

~Noah

On Mar 23, 2015, at 4:36 PM, Brendan Moloney <molo...@ohsu.edu> wrote:

I have been looking at the options for SSD caching for a bit now. Here is my
take on the current options:

1) bcache - Seems to have lots of reliability issues mentioned on mailing
list with little sign of improvement.

2) flashcache - Seems to be no longer (or minimally?) developed/maintained,
instead folks are working on the fork enhanceio.

3) enhanceio - Fork of flashcache.  Dropped the ability to skip caching on
sequential writes, which many folks have claimed is important for Ceph OSD
caching performance. (see: https://github.com/stec-inc/EnhanceIO/issues/32)

4) LVM cache (dm-cache) - There is now a user friendly way to use dm-cache,
through LVM.  Allows sequential writes to be skipped. You need a pretty
recent kernel.

I am going to be trying out LVM cache on my own cluster in the next few
weeks.  I will share my results here on the mailing list.  If anyone else has
tried it out I would love to hear about it.

-Brendan

In long-term use I also had some issues with flashcache and enhanceio.
I've noticed frequent slow requests.

Andrei


Re: [ceph-users] OSD + Flashcache + udev + Partition uuid

2015-03-23 Thread Noah Mehl
We deployed by just putting the journal on an SSD directly; why would this
not work for you?  Just wondering really :)

Thanks!

~Noah

> On Mar 23, 2015, at 4:36 PM, Brendan Moloney  wrote:
> 
> I have been looking at the options for SSD caching for a bit now. Here is my 
> take on the current options:
> 
> 1) bcache - Seems to have lots of reliability issues mentioned on mailing 
> list with little sign of improvement.
> 
> 2) flashcache - Seems to be no longer (or minimally?) developed/maintained, 
> instead folks are working on the fork enhanceio.
> 
> 3) enhanceio - Fork of flashcache.  Dropped the ability to skip caching on 
> sequential writes, which many folks have claimed is important for Ceph OSD 
> caching performance. (see: https://github.com/stec-inc/EnhanceIO/issues/32)
> 
> 4) LVM cache (dm-cache) - There is now a user friendly way to use dm-cache, 
> through LVM.  Allows sequential writes to be skipped. You need a pretty 
> recent kernel.
> 
> I am going to be trying out LVM cache on my own cluster in the next few 
> weeks.  I will share my results here on the mailing list.  If anyone else has 
> tried it out I would love to hear about it.
> 
> -Brendan
> 
>> In long-term use I also had some issues with flashcache and enhanceio.
>> I've noticed frequent slow requests.
>> 
>> Andrei
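
For reference, a minimal sketch of the LVM cache (dm-cache) route from option 4 above; the VG/LV/device names are placeholders and it assumes an lvm2 build and kernel recent enough to have cache support:

# cache data and cache metadata LVs on the SSD PV (/dev/sdc is a placeholder)
lvcreate -L 100G -n osd0_cache cephvg /dev/sdc
lvcreate -L 1G -n osd0_cache_meta cephvg /dev/sdc

# combine them into a cache pool, then attach it to the existing OSD data LV
lvconvert --type cache-pool --poolmetadata cephvg/osd0_cache_meta cephvg/osd0_cache
lvconvert --type cache --cachepool cephvg/osd0_cache cephvg/osd0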


Re: [ceph-users] Can't Start OSD

2015-03-22 Thread Noah Mehl
Somnath,

You are correct, there are dmesg errors about the drive.  How can I replace the
drive?  Can I copy all of the readable contents from this drive to a new one?
I ask because I have the following output from “ceph health detail”:

HEALTH_WARN 43 pgs stale; 43 pgs stuck stale
pg 7.5b7 is stuck stale for 5954121.993990, current state stale+active+clean, 
last acting [1]
pg 7.42a is stuck stale for 5954121.993885, current state stale+active+clean, 
last acting [1]
pg 7.669 is stuck stale for 5954121.994072, current state stale+active+clean, 
last acting [1]
pg 7.121 is stuck stale for 5954121.993586, current state stale+active+clean, 
last acting [1]
pg 7.4ec is stuck stale for 5954121.993956, current state stale+active+clean, 
last acting [1]
pg 7.1e4 is stuck stale for 5954121.993670, current state stale+active+clean, 
last acting [1]
pg 7.41f is stuck stale for 5954121.993901, current state stale+active+clean, 
last acting [1]
pg 7.59f is stuck stale for 5954121.994024, current state stale+active+clean, 
last acting [1]
pg 7.39 is stuck stale for 5954121.993490, current state stale+active+clean, 
last acting [1]
pg 7.584 is stuck stale for 5954121.994026, current state stale+active+clean, 
last acting [1]
pg 7.fd is stuck stale for 5954121.993600, current state stale+active+clean, 
last acting [1]
pg 7.6fd is stuck stale for 5954121.994158, current state stale+active+clean, 
last acting [1]
pg 7.4b5 is stuck stale for 5954121.993975, current state stale+active+clean, 
last acting [1]
pg 7.328 is stuck stale for 5954121.993840, current state stale+active+clean, 
last acting [1]
pg 7.4a9 is stuck stale for 5954121.993981, current state stale+active+clean, 
last acting [1]
pg 7.569 is stuck stale for 5954121.994046, current state stale+active+clean, 
last acting [1]
pg 7.629 is stuck stale for 5954121.994119, current state stale+active+clean, 
last acting [1]
pg 7.623 is stuck stale for 5954121.994118, current state stale+active+clean, 
last acting [1]
pg 7.6dd is stuck stale for 5954121.994179, current state stale+active+clean, 
last acting [1]
pg 7.3d5 is stuck stale for 5954121.993935, current state stale+active+clean, 
last acting [1]
pg 7.54b is stuck stale for 5954121.994058, current state stale+active+clean, 
last acting [1]
pg 7.3cf is stuck stale for 5954121.993938, current state stale+active+clean, 
last acting [1]
pg 7.c4 is stuck stale for 5954121.993633, current state stale+active+clean, 
last acting [1]
pg 7.178 is stuck stale for 5954121.993719, current state stale+active+clean, 
last acting [1]
pg 7.3b8 is stuck stale for 5954121.993946, current state stale+active+clean, 
last acting [1]
pg 7.b1 is stuck stale for 5954121.993635, current state stale+active+clean, 
last acting [1]
pg 7.5fb is stuck stale for 5954121.994146, current state stale+active+clean, 
last acting [1]
pg 7.236 is stuck stale for 5954121.993801, current state stale+active+clean, 
last acting [1]
pg 7.2f5 is stuck stale for 5954121.993881, current state stale+active+clean, 
last acting [1]
pg 7.ac is stuck stale for 5954121.993643, current state stale+active+clean, 
last acting [1]
pg 7.16d is stuck stale for 5954121.993738, current state stale+active+clean, 
last acting [1]
pg 7.6b7 is stuck stale for 5954121.994223, current state stale+active+clean, 
last acting [1]
pg 7.5ea is stuck stale for 5954121.994166, current state stale+active+clean, 
last acting [1]
pg 7.a3 is stuck stale for 5954121.993654, current state stale+active+clean, 
last acting [1]
pg 7.52d is stuck stale for 5954121.994110, current state stale+active+clean, 
last acting [1]
pg 7.2d8 is stuck stale for 5954121.993904, current state stale+active+clean, 
last acting [1]
pg 7.2db is stuck stale for 5954121.993903, current state stale+active+clean, 
last acting [1]
pg 7.5d9 is stuck stale for 5954121.994181, current state stale+active+clean, 
last acting [1]
pg 7.395 is stuck stale for 5954121.993989, current state stale+active+clean, 
last acting [1]
pg 7.38e is stuck stale for 5954121.993988, current state stale+active+clean, 
last acting [1]
pg 7.13a is stuck stale for 5954121.993766, current state stale+active+clean, 
last acting [1]
pg 7.683 is stuck stale for 5954121.994255, current state stale+active+clean, 
last acting [1]
pg 7.439 is stuck stale for 5954121.994079, current state stale+active+clean, 
last acting [1]

It’s osd.1 that’s problematic, but shouldn’t I have a replica of the data
somewhere else?

Thanks!

~Noah
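
Not an answer to the drive replacement itself, but a quick way to check whether those PGs ever had a second copy (pool 7 is taken from the PG ids above, and 7.5b7 is just one of the stale PGs as an example):

# replica count per pool - 'size 1' (shown as 'rep size 1' on dumpling) would explain stale PGs with a single acting OSD
ceph osd dump | grep '^pool'

# where one of the stale PGs maps now (up and acting sets)
ceph pg map 7.5b7

If pool 7 really is size 1, those PGs only ever lived on osd.1 and the data would have to come off that disk somehow (e.g. ddrescue onto a new drive); with size 2 or more, another OSD should still hold a copy and the PGs should recover once osd.1 is marked out.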

> On Mar 22, 2015, at 2:04 PM, Somnath Roy  wrote:
> 
> Are you seeing any errors related to the disk (where the OSD is mounted) in dmesg?
> It could be a leveldb corruption or a Ceph bug.
> Unfortunately, there is not enough logging in that portion of the code base to reveal
> exactly why we are not getting the infos object from leveldb :-(
> 
> Thanks & Regards
> Somnath
> 
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Noah 
> Mehl
> Se

Re: [ceph-users] Can't Start OSD

2015-03-22 Thread Noah Mehl
In production for over a year, and no upgrades.

Thanks!

~Noah

> On Mar 22, 2015, at 1:01 PM, Somnath Roy  wrote:
> 
> Noah,
> Is this a fresh installation or after an upgrade?
> 
> It seems related to omap (leveldb) stuff.
> 
> Thanks & Regards
> Somnath
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Noah 
> Mehl
> Sent: Sunday, March 22, 2015 9:34 AM
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] Can't Start OSD
> 
> I have an OSD that’s failing to start.  I can’t make heads or tails of the 
> error (pasted below).
> 
> Thanks!
> 
> ~Noah
> 
> 2015-03-22 16:32:39.265116 7f4da7fa0780  0 ceph version 0.67.4 
> (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7), process ceph-osd, pid 13483
> 2015-03-22 16:32:39.269499 7f4da7fa0780  1 
> filestore(/var/lib/ceph/osd/ceph-1) mount detected xfs
> 2015-03-22 16:32:39.269509 7f4da7fa0780  1 
> filestore(/var/lib/ceph/osd/ceph-1)  disabling 'filestore replica fadvise' 
> due to known issues with fadvise(DONTNEED) on xfs
> 2015-03-22 16:32:39.450031 7f4da7fa0780  0 
> filestore(/var/lib/ceph/osd/ceph-1) mount FIEMAP ioctl is supported and 
> appears to work
> 2015-03-22 16:32:39.450069 7f4da7fa0780  0 
> filestore(/var/lib/ceph/osd/ceph-1) mount FIEMAP ioctl is disabled via 
> 'filestore fiemap' config option
> 2015-03-22 16:32:39.450743 7f4da7fa0780  0 
> filestore(/var/lib/ceph/osd/ceph-1) mount did NOT detect btrfs
> 2015-03-22 16:32:39.499753 7f4da7fa0780  0 
> filestore(/var/lib/ceph/osd/ceph-1) mount syncfs(2) syscall fully supported 
> (by glibc and kernel)
> 2015-03-22 16:32:39.500078 7f4da7fa0780  0 
> filestore(/var/lib/ceph/osd/ceph-1) mount found snaps <>
> 2015-03-22 16:32:40.765736 7f4da7fa0780  0 
> filestore(/var/lib/ceph/osd/ceph-1) mount: enabling WRITEAHEAD journal mode: 
> btrfs not detected
> 2015-03-22 16:32:40.777156 7f4da7fa0780  1 journal _open 
> /var/lib/ceph/osd/ceph-1/journal fd 2551: 5368709120 bytes, block size 4096 
> bytes, directio = 1, aio = 1
> 2015-03-22 16:32:40.777278 7f4da7fa0780  1 journal _open 
> /var/lib/ceph/osd/ceph-1/journal fd 2551: 5368709120 bytes, block size 4096 
> bytes, directio = 1, aio = 1
> 2015-03-22 16:32:40.778223 7f4da7fa0780  1 journal close 
> /var/lib/ceph/osd/ceph-1/journal
> 2015-03-22 16:32:41.066655 7f4da7fa0780  1 
> filestore(/var/lib/ceph/osd/ceph-1) mount detected xfs
> 2015-03-22 16:32:41.150578 7f4da7fa0780  0 
> filestore(/var/lib/ceph/osd/ceph-1) mount FIEMAP ioctl is supported and 
> appears to work
> 2015-03-22 16:32:41.150624 7f4da7fa0780  0 
> filestore(/var/lib/ceph/osd/ceph-1) mount FIEMAP ioctl is disabled via 
> 'filestore fiemap' config option
> 2015-03-22 16:32:41.151359 7f4da7fa0780  0 
> filestore(/var/lib/ceph/osd/ceph-1) mount did NOT detect btrfs
> 2015-03-22 16:32:41.225302 7f4da7fa0780  0 
> filestore(/var/lib/ceph/osd/ceph-1) mount syncfs(2) syscall fully supported 
> (by glibc and kernel)
> 2015-03-22 16:32:41.225498 7f4da7fa0780  0 
> filestore(/var/lib/ceph/osd/ceph-1) mount found snaps <>
> 2015-03-22 16:32:42.375558 7f4da7fa0780  0 
> filestore(/var/lib/ceph/osd/ceph-1) mount: enabling WRITEAHEAD journal mode: 
> btrfs not detected
> 2015-03-22 16:32:42.382958 7f4da7fa0780  1 journal _open 
> /var/lib/ceph/osd/ceph-1/journal fd 1429: 5368709120 bytes, block size 4096 
> bytes, directio = 1, aio = 1
> 2015-03-22 16:32:42.383187 7f4da7fa0780  1 journal _open 
> /var/lib/ceph/osd/ceph-1/journal fd 1481: 5368709120 bytes, block size 4096 
> bytes, directio = 1, aio = 1
> 2015-03-22 16:32:43.076434 7f4da7fa0780 -1 osd/PG.cc: In function 'static 
> epoch_t PG::peek_map_epoch(ObjectStore*, coll_t, hobject_t&, 
> ceph::bufferlist*)' thread 7f4da7fa0780 time 2015-03-22 16:32:43.075101
> osd/PG.cc: 2270: FAILED assert(values.size() == 1)
> 
> ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)
> 1: (PG::peek_map_epoch(ObjectStore*, coll_t, hobject_t&, 
> ceph::buffer::list*)+0x4d7) [0x70ebf7]
> 2: (OSD::load_pgs()+0x14ce) [0x694efe]
> 3: (OSD::init()+0x11be) [0x69cffe]
> 4: (main()+0x1d09) [0x5c3509]
> 5: (__libc_start_main()+0xed) [0x7f4da5bde76d]
> 6: /usr/bin/ceph-osd() [0x5c6e1d]
> NOTE: a copy of the executable, or `objdump -rdS ` is needed to 
> interpret this.
> 
> --- begin dump of recent events ---
>   -75> 2015-03-22 16:32:39.259280 7f4da7fa0780  5 asok(0x1aec1c0) 
> register_command perfcounters_dump hook 0x1ae4010
>   -74> 2015-03-22 16:32:39.259373 7f4da7fa0780  5 asok(0x1aec1c0) 
> register_command 1 hook 0x1ae4010
>   -73> 2015-03-22 16:32:39.259393 7f4da7fa0780  5 asok(0x1aec1c0) 
> register_command perf dump hook 0x1ae40

[ceph-users] Can't Start OSD

2015-03-22 Thread Noah Mehl
I have an OSD that’s failing to start.  I can’t make heads or tails of the 
error (pasted below).

Thanks!

~Noah
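
For anyone hitting the same assert later: it may help to turn logging way up for just this OSD before starting it again; a sketch, assuming the failing OSD is osd.1:

[osd.1]
  debug osd = 20
  debug filestore = 20
  debug journal = 10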

2015-03-22 16:32:39.265116 7f4da7fa0780  0 ceph version 0.67.4 
(ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7), process ceph-osd, pid 13483
2015-03-22 16:32:39.269499 7f4da7fa0780  1 filestore(/var/lib/ceph/osd/ceph-1) 
mount detected xfs
2015-03-22 16:32:39.269509 7f4da7fa0780  1 filestore(/var/lib/ceph/osd/ceph-1)  
disabling 'filestore replica fadvise' due to known issues with 
fadvise(DONTNEED) on xfs
2015-03-22 16:32:39.450031 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) 
mount FIEMAP ioctl is supported and appears to work
2015-03-22 16:32:39.450069 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) 
mount FIEMAP ioctl is disabled via 'filestore fiemap' config option
2015-03-22 16:32:39.450743 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) 
mount did NOT detect btrfs
2015-03-22 16:32:39.499753 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) 
mount syncfs(2) syscall fully supported (by glibc and kernel)
2015-03-22 16:32:39.500078 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) 
mount found snaps <>
2015-03-22 16:32:40.765736 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) 
mount: enabling WRITEAHEAD journal mode: btrfs not detected
2015-03-22 16:32:40.777156 7f4da7fa0780  1 journal _open 
/var/lib/ceph/osd/ceph-1/journal fd 2551: 5368709120 bytes, block size 4096 
bytes, directio = 1, aio = 1
2015-03-22 16:32:40.777278 7f4da7fa0780  1 journal _open 
/var/lib/ceph/osd/ceph-1/journal fd 2551: 5368709120 bytes, block size 4096 
bytes, directio = 1, aio = 1
2015-03-22 16:32:40.778223 7f4da7fa0780  1 journal close 
/var/lib/ceph/osd/ceph-1/journal
2015-03-22 16:32:41.066655 7f4da7fa0780  1 filestore(/var/lib/ceph/osd/ceph-1) 
mount detected xfs
2015-03-22 16:32:41.150578 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) 
mount FIEMAP ioctl is supported and appears to work
2015-03-22 16:32:41.150624 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) 
mount FIEMAP ioctl is disabled via 'filestore fiemap' config option
2015-03-22 16:32:41.151359 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) 
mount did NOT detect btrfs
2015-03-22 16:32:41.225302 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) 
mount syncfs(2) syscall fully supported (by glibc and kernel)
2015-03-22 16:32:41.225498 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) 
mount found snaps <>
2015-03-22 16:32:42.375558 7f4da7fa0780  0 filestore(/var/lib/ceph/osd/ceph-1) 
mount: enabling WRITEAHEAD journal mode: btrfs not detected
2015-03-22 16:32:42.382958 7f4da7fa0780  1 journal _open 
/var/lib/ceph/osd/ceph-1/journal fd 1429: 5368709120 bytes, block size 4096 
bytes, directio = 1, aio = 1
2015-03-22 16:32:42.383187 7f4da7fa0780  1 journal _open 
/var/lib/ceph/osd/ceph-1/journal fd 1481: 5368709120 bytes, block size 4096 
bytes, directio = 1, aio = 1
2015-03-22 16:32:43.076434 7f4da7fa0780 -1 osd/PG.cc: In function 'static 
epoch_t PG::peek_map_epoch(ObjectStore*, coll_t, hobject_t&, 
ceph::bufferlist*)' thread 7f4da7fa0780 time 2015-03-22 16:32:43.075101
osd/PG.cc: 2270: FAILED assert(values.size() == 1)

 ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)
 1: (PG::peek_map_epoch(ObjectStore*, coll_t, hobject_t&, 
ceph::buffer::list*)+0x4d7) [0x70ebf7]
 2: (OSD::load_pgs()+0x14ce) [0x694efe]
 3: (OSD::init()+0x11be) [0x69cffe]
 4: (main()+0x1d09) [0x5c3509]
 5: (__libc_start_main()+0xed) [0x7f4da5bde76d]
 6: /usr/bin/ceph-osd() [0x5c6e1d]
 NOTE: a copy of the executable, or `objdump -rdS ` is needed to 
interpret this.

--- begin dump of recent events ---
   -75> 2015-03-22 16:32:39.259280 7f4da7fa0780  5 asok(0x1aec1c0) 
register_command perfcounters_dump hook 0x1ae4010
   -74> 2015-03-22 16:32:39.259373 7f4da7fa0780  5 asok(0x1aec1c0) 
register_command 1 hook 0x1ae4010
   -73> 2015-03-22 16:32:39.259393 7f4da7fa0780  5 asok(0x1aec1c0) 
register_command perf dump hook 0x1ae4010
   -72> 2015-03-22 16:32:39.259429 7f4da7fa0780  5 asok(0x1aec1c0) 
register_command perfcounters_schema hook 0x1ae4010
   -71> 2015-03-22 16:32:39.259445 7f4da7fa0780  5 asok(0x1aec1c0) 
register_command 2 hook 0x1ae4010
   -70> 2015-03-22 16:32:39.259453 7f4da7fa0780  5 asok(0x1aec1c0) 
register_command perf schema hook 0x1ae4010
   -69> 2015-03-22 16:32:39.259467 7f4da7fa0780  5 asok(0x1aec1c0) 
register_command config show hook 0x1ae4010
   -68> 2015-03-22 16:32:39.259481 7f4da7fa0780  5 asok(0x1aec1c0) 
register_command config set hook 0x1ae4010
   -67> 2015-03-22 16:32:39.259495 7f4da7fa0780  5 asok(0x1aec1c0) 
register_command config get hook 0x1ae4010
   -66> 2015-03-22 16:32:39.259505 7f4da7fa0780  5 asok(0x1aec1c0) 
register_command log flush hook 0x1ae4010
   -65> 2015-03-22 16:32:39.259519 7f4da7fa0780  5 asok(0x1aec1c0) 
register_command log dump hook 0x1ae4010
   -64> 2015-03-22 16:32:39.259536 7f4da7fa0780  5 asok(0x1aec1c0) 
register_command log reopen hook 0x1ae4010
   -63> 2015-03-22 16:32:39.26511