That was a poor example because that node was running an older version of Ceph and its clock was not set correctly.  But I don't think either of those is the cause, because I see the same crash on multiple nodes:

root@node8:/var/log/ceph# grep hit_set_trim ceph-osd.2.log | wc -l
2524
root@node8:/var/log/ceph# ceph --version
ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
root@node8:/var/log/ceph# date
Wed 20 Apr 23:58:59 PDT 2016

I saw this:  http://tracker.ceph.com/issues/9732  so I set the timezones on all the
nodes to match.  That didn't solve the problem.  I am building 0.94.6 anyway.
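For reference, "set all the timezones" means roughly the following on each node
(timedatectl assumes a systemd-based distro, and America/Los_Angeles is just my
local zone as an example; on older distros dpkg-reconfigure tzdata does the same
job):

root@node8:~# timedatectl set-timezone America/Los_Angeles
root@node8:~# timedatectl status    # confirm the zone took effect and NTP is enabled
root@node8:~# ntpq -p               # check the node is actually syncing to an NTP peer
root@node8:~# ceph status           # health should not report any clock skew warnings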

Thanks,
Blade.

On Wed, Apr 20, 2016 at 12:37 AM, Blade Doyle <blade.do...@gmail.com> wrote:

> I get a lot of osd crash with the following stack - suggestion please:
>
>      0> 1969-12-31 16:04:55.455688 83ccf410 -1 osd/ReplicatedPG.cc: In
> function 'void ReplicatedPG::hit_set_trim(ReplicatedPG::RepGather*,
> unsigned int)' thread 83ccf410 time 295.324905
> osd/ReplicatedPG.cc: 11011: FAILED assert(obc)
>
>  ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
>  1: (ReplicatedPG::hit_set_trim(ReplicatedPG::RepGather*, unsigned
> int)+0x3f9) [0xb6c625e6]
>  2: (ReplicatedPG::hit_set_persist()+0x8bf) [0xb6c62fb4]
>  3: (ReplicatedPG::do_op(std::tr1::shared_ptr<OpRequest>)+0xc97)
> [0xb6c6eb2c]
>  4: (ReplicatedPG::do_request(std::tr1::shared_ptr<OpRequest>,
> ThreadPool::TPHandle&)+0x439) [0xb6c2f01a]
>  5: (OSD::dequeue_op(boost::intrusive_ptr<PG>,
> std::tr1::shared_ptr<OpRequest>, ThreadPool::TPHandle&)+0x22b) [0xb6b0b984]
>  6: (OSD::OpWQ::_process(boost::intrusive_ptr<PG>,
> ThreadPool::TPHandle&)+0x13d) [0xb6b1ccf6]
>  7: (ThreadPool::WorkQueueVal<std::pair<boost::intrusive_ptr<PG>,
> std::tr1::shared_ptr<OpRequest> >, boost::intrusive_ptr<PG>
> >::_void_process(void*, ThreadPool::TPHandle&)+0x6b) [0xb6b4692c]
>  8: (ThreadPool::worker(ThreadPool::WorkThread*)+0xb93) [0xb6e152bc]
>  9: (ThreadPool::WorkThread::entry()+0x9) [0xb6e15aea]
>  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed
> to interpret this.
>
>
> Blade.
>
