I'm not sure the cluster will be able to recover to a normal state now.
Regards,
Vladimir.
On 5 April 2012 at 17:05, Wido den Hollander w...@widodh.nl wrote:
Hi,
On 04/04/2012 08:26 PM, Borodin Vladimir wrote:
Hello Wido,
Yesterday I built from git: $ ceph -v
ceph version 0.44.1-149-ge80126e
(commit:e80126ea689e9a972fbf09e8848fc4a2ade13c59)
The messages are a bit different, but the problem seems to be the same.
Below is a part of osd.48.log. Should I make the osd debug mode more
verbose or give any other information?
Yes, Stefan, you are right. I'm not sure about the D state, but the high
CPU usage is a fact.
I do want to try an OSD-per-disk configuration, but a bit later.
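For reference, an OSD-per-disk setup of the kind mentioned above is usually expressed in ceph.conf by giving each disk its own OSD daemon with its own data directory and journal. This is only a sketch; the OSD ids, hostnames, and paths below are hypothetical and not taken from the ceph.conf in this thread:

```ini
; Hypothetical OSD-per-disk layout: one OSD section per physical disk.
[osd.0]
    host = node1
    osd data = /var/lib/ceph/osd/ceph-0          ; e.g. mountpoint of the first data disk
    osd journal = /var/lib/ceph/osd/ceph-0/journal

[osd.1]
    host = node1
    osd data = /var/lib/ceph/osd/ceph-1          ; e.g. mountpoint of the second data disk
    osd journal = /var/lib/ceph/osd/ceph-1/journal
```

The trade-off is more daemons to manage per host in exchange for isolating disk failures to a single OSD instead of a whole RAID set.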
Thanks,
Vladimir.
2012/4/3 Stefan Kleijkers ste...@unilogicnetworks.net:
Hello,
A while back I had the same errors you are seeing. I had
Hi all.
I've read everything in ceph.newdream.net/docs and
ceph.newdream.net/wiki. I've also read some articles from
ceph.newdream.net/publications. But I haven't found answers on some
questions:
1. There is one active MDS and one in standby mode. The active MDS caches
all metadata in RAM. Does
wanted to
replicate between data centers. We don't currently support reading from
the closest replica.
On Wed, Mar 21, 2012 at 2:22 AM, Borodin Vladimir v.a.boro...@gmail.com
wrote:
Hi all.
I've read everything in ceph.newdream.net/docs and
ceph.newdream.net/wiki. I've also read some
Hi all.
One of my OSDs crashes when I try to start it. I've turned on debug
osd = 20 in ceph.conf for this node and put the log here:
http://simply.name/osd.47.log. The ceph.conf file is here:
http://simply.name/ceph.conf. Is there any other information I should
show?
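For anyone following along, the "debug osd = 20" setting mentioned above goes in the [osd] section of ceph.conf (or a per-daemon section to limit it to one OSD). A minimal sketch, assuming the common Ceph debug subsystems of that era; the choice of osd.47 here just mirrors the crashing OSD in this thread:

```ini
; Raise log verbosity for a single OSD daemon only.
[osd.47]
    debug osd = 20         ; OSD subsystem at maximum verbosity
    debug filestore = 20   ; often raised together with debug osd
    debug ms = 1           ; messenger (network) layer logging
```

After restarting the daemon, the extra detail appears in its log file (here, osd.47.log).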
I've updated to 0.43