Hello,
I was unable to get Ceph to run on CentOS 6.3 following the 5-minute quick start.
Same here... I was only able to build the ceph-fuse client.
Denis
Hello,
On 09/01/2013 00:36, Gregory Farnum wrote:
It looks like it's taking approximately forever for writes to complete
to disk; it's shutting down because threads are going off to write and
not coming back. If you set osd op thread timeout = 60 (or 120) it
might manage to churn through.
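For reference, a sketch of where that option would go in ceph.conf; the 120-second value is just the upper figure suggested above, and the comment is mine:

    [osd]
        ; give slow writes up to 120s before the op thread is declared hung
        osd op thread timeout = 120

Restart the affected ceph-osd daemons after changing it.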
Hello,
I tried upgrading to 0.56.1 this morning, hoping it would help with
recovery. No luck so far...
What's wrong with your primary OSD?
I don't know what's really wrong. The disk seems fine.
In general they shouldn't really be crashing that frequently, and if
you've got a new bug we'd like to hear about it.
Hello,
I'm wondering if I can grab every rb.0.8e10.3e2219d7.* object from the
OSD drive, cat them together, and get back a usable raw volume from
which I could recover my data?
Everything seems to be there, but I don't know the order of the RBD
objects. Are the last bytes of the file name the object's position in
the image?
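A rough Python sketch of the reassembly idea, assuming the hex suffix of
each object name is the object's index within the image, the image uses
the default 4 MB object size, and objects missing on disk were simply
never written (so they should be zero-filled); the directory path is
made up:

    #!/usr/bin/env python
    # Rough sketch: reassemble an RBD image from object files copied off
    # an OSD's filestore. Assumptions (not verified against this cluster):
    #   - object files sit in OBJ_DIR and their names contain
    #     "<prefix>.<hex index>" (filestore adds more suffixes after that)
    #   - the image uses the default 4 MB object size (rbd order 22)
    #   - objects absent on disk were never written, so they are zeros
    import glob
    import os
    import re

    OBJ_DIR = "/var/lib/ceph/recovered"      # hypothetical dump directory
    PREFIX = "rb.0.8e10.3e2219d7"
    OBJECT_SIZE = 4 * 1024 * 1024

    objects = {}
    for path in glob.glob(os.path.join(OBJ_DIR, PREFIX + ".*")):
        m = re.search(re.escape(PREFIX) + r"\.([0-9a-fA-F]+)",
                      os.path.basename(path))
        if m:
            objects[int(m.group(1), 16)] = path

    with open("image.raw", "wb") as out:
        for index in range(max(objects) + 1):
            path = objects.get(index)
            if path is None:
                out.write(b"\0" * OBJECT_SIZE)   # unwritten -> zeros
                continue
            with open(path, "rb") as obj:
                data = obj.read()
            # A short object is fine: its tail was never written.
            out.write(data + b"\0" * (OBJECT_SIZE - len(data)))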
Hello,
What error message do you get when you try and turn it on? If the
daemon is crashing, what is the backtrace?
The daemon is crashing. Here is the full log if you want to take a
look: http://vps.ledeuns.net/ceph-osd.0.log.gz
The RBD rebuild script helped to get the data back.
Hello all,
I'm using Ceph 0.55.1 on Debian Wheezy (1 mon, 1 mds and 3 osd over
btrfs) and every once in a while an OSD process crashes (it is almost
never the same osd).
This time I had 2 osd crash in a row, so I was down to one replica. I
could bring the 2 crashed osd back up and it started recovering.
Hello all,
I noticed that removing files on CephFS doesn't reclaim free space,
using ceph version 0.52 (commit:e48859474c4944d4ff201ddc9f5fd400e8898173).
I created two 500GB files on CephFS (mounted with ceph-fuse) and then
removed them. Now rados df -p data still shows 1GB of usage.
On 06/11/2012 20:24, Gregory Farnum wrote:
Deleting files on CephFS doesn't instantaneously remove the underlying
objects because they could still be in use by other clients, and
removal takes time proportional to the size of the file. Instead, the
MDS queues the file up to be removed in the background.
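One way to watch that background purge drain, reusing the rados
invocation from above (the 60-second interval is arbitrary):

    # object count and KB used should shrink as the MDS purges
    watch -n 60 rados df -p data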
Hello,
As far as I'm concerned, I think that 12 disks per server is way too
much.
From your experience, what is the correct number of OSDs per server?
Denis
Hello all,
Is it possible/recommended to mix versions? e.g. OSDs running 0.48 with
MONs running 0.52, or some OSDs on 0.48 and some on 0.52?
Denis
Hello Mark,
Not sure what version of glibc Wheezy has, but try to make sure you have
one that supports syncfs (you'll also need a semi-new kernel, 3.0+
should be fine).
Wheezy has a fairly recent kernel:
# uname -a
Linux ceph-osd-0 3.2.0-3-amd64 #1 SMP Mon Jul 23 02:45:17 UTC 2012
x86_64
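The kernel is new enough, but the glibc side is the part worth
double-checking on Wheezy, since the syncfs() wrapper only appeared in
glibc 2.14. A small, hypothetical probe using Python's ctypes (the libc
name and the check itself are assumptions about a typical Linux setup):

    #!/usr/bin/env python
    # Probe: does this glibc export a syncfs() wrapper, and does it work?
    # syncfs needs glibc >= 2.14 and kernel >= 2.6.39.
    import ctypes
    import os

    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    try:
        syncfs = libc.syncfs
    except AttributeError:
        print("no syncfs() wrapper in this glibc -- too old")
    else:
        fd = os.open("/", os.O_RDONLY)
        if syncfs(fd) == 0:
            print("syncfs() is available and working")
        else:
            print("syncfs() failed: " + os.strerror(ctypes.get_errno()))
        os.close(fd)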