I went ahead and removed the assert, conditionalized the future use of
the obc variable on its being non-null, and linked that into a custom
ceph-osd binary for use on the most problematic node (8). That got the OSD
up and running again! I took the opportunity to use the standard "remove an
Hi,
I'm curious whether using S3 like a cache (frequent put/delete over the
long term) may cause problems in radosgw or the OSDs (XFS)?
--
Regards
Dominik
Sadly, the update to 0.94.6 did not solve the issue. I still can't get one
of my OSDs to run at all. I have included the crash report below.
It looks like the following assert fails:
https://github.com/ceph/ceph/blob/v0.94.6/src/osd/ReplicatedPG.cc, line 10495
ObjectContextRef obc =
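Assuming the failing assert is the assert(obc) immediately after that line
in ReplicatedPG::hit_set_trim(), the workaround described at the top of
this digest (conditionalizing the later use of obc on its being non-null)
would look roughly like the sketch below. The stats bookkeeping is
paraphrased from the Hammer source, so treat the exact field names as
illustrative rather than as the literal patch:

    ObjectContextRef obc = get_object_context(oid, false);
    // was: assert(obc);
    if (obc) {
      // Only adjust the usage stats when an object context actually
      // exists, instead of aborting the whole OSD on a missing
      // hit-set archive object.
      --repop->ctx->delta_stats.num_objects;
      --repop->ctx->delta_stats.num_objects_hit_set_archive;
      repop->ctx->delta_stats.num_bytes -= obc->obs.oi.size;
      repop->ctx->delta_stats.num_bytes_hit_set_archive -= obc->obs.oi.size;
    }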
Hi Cephers,
I upgraded to Jewel and noted there is a massive radosgw multisite rework
in the release notes.
Can Jewel radosgw be configured to present existing Hammer buckets?
On a test system, Jewel didn't recognise my Hammer buckets:
Hammer used the .rgw.* pools; Jewel created .rgw.root by default.
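My guess is that the Jewel zone configuration has to be pointed back at
the legacy pools, something along the lines of the sketch below, but I
haven't verified this; the pool names are just the usual Hammer defaults:

    # Dump the pool mapping of the default zone as Jewel sees it
    radosgw-admin zone get --rgw-zone=default > zone.json
    # Edit zone.json so the pool entries reference the legacy Hammer
    # pools (.rgw, .rgw.buckets, .rgw.buckets.index, ...), then:
    radosgw-admin zone set --rgw-zone=default --infile zone.json
    # With the Jewel multisite rework, the change presumably also has
    # to be committed to the current period:
    radosgw-admin period update --commit

Is that the supported path, or is there a proper migration procedure?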
I've just looked through GitHub for the Linux kernel, and it looks like
that read-ahead fix was introduced in 4.4, so I'm not sure if it's worth
trying a slightly newer kernel?
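In case it helps, a quick way to check both things (rbd0 below is just a
placeholder device name, and this assumes a kernel-mapped RBD device):

    # Running kernel vs. the 4.4 threshold mentioned above
    uname -r
    # Effective read-ahead (in KB) for a mapped rbd device
    cat /sys/block/rbd0/queue/read_ahead_kb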
From: Mike Miller
Sent: 21 Apr 2016 2:20 pm
To: ceph-users@lists.ceph.com