Re: [ceph-users] Multiple OSD crashing a lot

2016-04-23 Thread Blade Doyle
I went ahead and removed the assert and made all later use of the obc variable conditional on it being non-null, then linked that into a custom ceph-osd binary for use on the most problematic node (8). That got the OSD up and running again! I took the opportunity to use the standard "remove an

[ceph-users] Using s3 (radosgw + ceph) like a cache

2016-04-23 Thread Dominik Mostowiec
Hi, I'm curious whether using s3 like a cache - frequent puts and deletes over the long term - may cause problems in radosgw or the OSDs (xfs)? - Regards Dominik

Re: [ceph-users] Multiple OSD crashing a lot

2016-04-23 Thread Blade Doyle
Sadly, the update to 0.94.6 did not solve the issue. I still can't get one of my OSDs to run at all. I have included the crash report below. It looks like the following assert fails: https://github.com/ceph/ceph/blob/v0.94.6/src/osd/ReplicatedPG.cc line 10495 ObjectContextRef obc =

Re: [ceph-users] Can Jewel read Hammer radosgw buckets?

2016-04-23 Thread Yehuda Sadeh-Weinraub
On Sat, Apr 23, 2016 at 6:22 AM, Richard Chan wrote: > Hi Cephers, > > I upgraded to Jewel and noted there is a massive radosgw multisite rework > in the release notes. > > Can Jewel radosgw be configured to present existing Hammer buckets? > On a test system, jewel

[ceph-users] Can Jewel read Hammer radosgw buckets?

2016-04-23 Thread Richard Chan
Hi Cephers, I upgraded to Jewel and noted there is a massive radosgw multisite rework in the release notes. Can Jewel radosgw be configured to present existing Hammer buckets? On a test system, Jewel didn't recognise my Hammer buckets; Hammer used pools .rgw.* Jewel created by default: .rgw.root

Re: [ceph-users] Slow read on RBD mount, Hammer 0.94.5

2016-04-23 Thread nick
I've just looked through GitHub for the Linux kernel and it looks like that read-ahead fix was introduced in 4.4, so I'm not sure if it's worth trying a slightly newer kernel? From: Mike Miller Sent: 21 Apr 2016 2:20 pm To: ceph-users@lists.ceph.com