Hi,
today I got another OSD crash ;-( Strangely, the OSD logs are all empty.
It seems logrotate hasn't reloaded the daemons, but I still have the
core dump file. What's next?
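A reasonable first step is to pull a backtrace out of the core with gdb (the binary and core paths below are assumptions; installing debug symbols, e.g. a ceph-dbg package, makes the trace far more readable):

$ gdb /usr/bin/ceph-osd /path/to/core
(gdb) thread apply all bt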
Stefan
Hi,
I tried updating one of our OSDs from stable 0.47-2 to the latest next
branch, and it started updating the filestore and failed.
After that, neither the next-branch osd nor the stable osd would start
with this filestore anymore.
Is there something wrong with the filestore update?
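One way to get more detail out of the conversion attempt would be to back up the osd data directory first and then re-run the daemon in the foreground with filestore logging turned up (the osd id and debug levels here are assumptions, not from the report):

$ ceph-osd -i 0 -d --debug_filestore 20 --debug_osd 20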
And another crash ;-(
0> 2012-06-16 15:31:32.524369 7fd8935c4700 -1 ./common/Mutex.h: In function 'void Mutex::Lock(bool)' thread 7fd8935c4700 time 2012-06-16 15:31:32.522446
./common/Mutex.h: 110: FAILED assert(r == 0)
ceph version (commit:)
1: /usr/bin/ceph-osd() [0x51a07d]
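Two notes that may help here. The assert in Mutex::Lock() fires when pthread_mutex_lock() returns nonzero, which on a non-recursive mutex usually means the same thread locked it twice or the mutex was already destroyed. And since the dump has at least one raw frame, the address can be resolved against the binary, assuming it isn't stripped (the binary path is taken from the frame itself):

$ addr2line -C -f -e /usr/bin/ceph-osd 0x51a07d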
On Fri, 15 Jun 2012, Yehuda Sadeh wrote:
On Fri, Jun 15, 2012 at 5:46 PM, Sage Weil s...@inktank.com wrote:
Looks good! Couple small things:
$ rbd unpreserve pool/image@snap
Are 'preserve' and 'unpreserve' the verbiage we want to use here? Not sure
I have a better suggestion, but...
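For context, the pair under discussion would presumably behave like this (a sketch of the proposed semantics as read from the thread, not a settled interface):

$ rbd preserve pool/image@snap      # mark the snapshot so it can't be removed
$ rbd unpreserve pool/image@snap    # allow removal again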
Hi,
On my dev cluster (10 nodes, 40 OSDs) I'm still trying to run Ceph on
btrfs, but over the last couple of months I've lost multiple OSDs due
to btrfs.
On my nodes I've set kernel.panic=60 so that whenever a kernel panic
occurs, I get the node back within two minutes.
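For reference, kernel.panic is the stock sysctl that reboots the box N seconds after a panic; setting and persisting it looks like this (pairing it with kernel.panic_on_oops=1 so oopses trigger the same reboot is an assumption about this setup):

$ sysctl -w kernel.panic=60
$ echo 'kernel.panic = 60' >> /etc/sysctl.conf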
Now, over the