On 04/04/2013 09:27 AM, Alexey Shvetsov wrote:
Hi all!
I'm getting an MDS crash on 0.60 after I upgraded to it from the 0.58 release.
The MDS log looks like http://bpaste.net/show/88863/
The MON log looks like http://bpaste.net/show/88864/
Interestingly, if we don't start the OSDs, the MDS does not crash.
PS: the configuration consists of 18
On Thursday, May 24, 2012 at 5:29 AM, Felix Feinhals wrote:
Hi,
I was using the Debian packages, but I have now tried building from source. I used the same version from Git (cb7f1c9c7520848b0899b26440ac34a8acea58d1) and compiled it: same crash report. Then I applied your patch, but again the same crash; I think the backtrace is also the same:
(gdb) thread 1
[Switching
Hey,
OK, I installed libc-dbg and ran your commands; now this comes up:
gdb /usr/bin/ceph-mds core
snip
GNU gdb (GDB) 7.0.1-debian
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software: you are free to
On Tuesday, May 22, 2012 at 3:12 AM, Felix Feinhals wrote:
I am not quite sure how to get you the coredump info. I installed all the ceph-dbg packages and executed:
gdb /usr/bin/ceph-mds core
snip
GNU gdb (GDB) 7.0.1-debian
Copyright (C) 2009 Free Software Foundation, Inc.
License
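For the record, a common way to pull a full backtrace out of a core file like this non-interactively is gdb's batch mode (a sketch; the binary path and core filename are taken from the messages above, and the debug-symbol packages must be installed for readable frames):

```shell
# Open the core against the matching binary and dump backtraces
# for the crashing thread and for all threads, then exit.
gdb /usr/bin/ceph-mds core \
    -batch \
    -ex 'thread 1' \
    -ex 'bt' \
    -ex 'thread apply all bt'
```

The `thread apply all bt` output is usually what upstream asks for, since the crashing thread alone may not show which lock or message triggered the assert.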
On Mon, May 21, 2012 at 5:38 AM, Felix Feinhals f...@turtle-entertainment.de wrote:
Hi Josh,
I quoted the trace and some other stats in my first email; maybe it got stuck in the spam filters. Well, next try:
snip
-3 2012-05-10 14:52:29.509940 7fb1c9351700 1 mds.0.40 handle_mds_map i am now mds.0.40
-2 2012-05-10 14:52:29.509956 7fb1c9351700 1 mds.0.40 handle_mds_map state
On 05/16/2012 01:11 AM, Felix Feinhals wrote:
Hi again,
anything on this problem? It seems that the only choice for me is to reinitialize the whole cephfs (mkcephfs...) :(
Hi Felix, it looks like your first mail never reached the list.
2012/5/10 Felix Feinhals f...@turtle-entertainment.de:
Hi List,
we installed a ceph cluster with ceph version 0.46: 3 OSDs, 3 MONs and 3 MDSs. After copying a bunch of
This is a trace of an MDS crash. I was running a simple setup (./vstart -d -n), and this is from out/mds.b. It is from the latest wip-getdir branch. I posted some context preceding the crash; I have the full trace if more context is helpful.
-Noah
2011-10-28
On Sun, 3 Jul 2011, Fyodor Ustinov wrote:
Hi!
mds - 0.30
I cannot reproduce it, sorry.
mds/Locker.cc: In function 'void Locker::file_excl(ScatterLock*, bool*)', in thread '0x7fefc6c68700'
mds/Locker.cc: 3982: FAILED assert(in->get_loner() >= 0 && in->mds_caps_wanted.empty())
ceph version 0.30
Which commit were you running?
sage
On 07/03/2011 01:03 AM, Sage Weil wrote:
Which commit were you running?
On the mds server, from here:
deb http://ceph.newdream.net/debian/ natty main
deb-src http://ceph.newdream.net/debian/ natty main
Hi.
2011-05-24 00:17:45.490684 7f45415e1740 ceph version 0.28.commit: 071881d7e5599571e46bda17094bb4b48691e89a. process: cmds. pid: 4424
2011-05-24 00:17:45.492293 7f453ef81700 mds-1.0 ms_handle_connect on 77.120.112.193:6789/0
2011-05-24 00:17:49.497862 7f453ef81700 mds-1.0 handle_mds_map
Hi Fyodor,
This looks like #1104. Will try to sort it out today, it should be a
simple one.
sage
On Tue, 24 May 2011, Fyodor Ustinov wrote:
On 05/24/2011 01:27 AM, Sage Weil wrote:
Hi Fyodor,
This looks like #1104. Will try to sort it out today; it should be a simple one.
sage
Maybe you need my 'debug mds = 20, debug ms = 1' log? It is zipped - 26M. Need it?
WBR,
Fyodor.
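For readers following along, the debug settings Fyodor refers to are ordinary ceph.conf options; a minimal sketch of how they would be set (section placement assumed, then restart the MDS daemon for them to take effect):

```
[mds]
    debug mds = 20
    debug ms = 1
```

Level 20 is the most verbose MDS logging, which is why the resulting log compresses to tens of megabytes even for a short run.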
On Tue, 24 May 2011, Fyodor Ustinov wrote:
If you can attach
Hi!
Sage, did you receive my email about the bt?
WBR,
Fyodor.
On Wed, Apr 20, 2011 at 09:00:34AM -0500, Mark Nigh wrote:
That seems to fix the problem. Both of my MDSs are up and active:
mds e1838: 2/2/2 up {0=up:active,1=up:active}
The only thing that doesn't seem right to me (I am not a developer, so my understanding of git is limited) is that my
Sent: Tuesday, April 19, 2011 11:18 AM
To: Mark Nigh
Cc: ceph-devel@vger.kernel.org
Subject: Re: mds crash
Hi Mark,
This should be fixed by d55399ffec224206ea324e83bb8ead1e9ca1eddc in the
'next' branch of ceph.git. Can you test it out and see if that allows
journal replay to complete
I recently have been working on exporting ceph via NFS. I have had stability problems with NFS (ceph works, but NFS crashes), and most recently my mds0 will not start after one of these instances with NFS.
My setup: 2 MDSs, 1 MON (located on mds0), 5 OSDs, all running Ubuntu 10.10.
Here
Hi Mark,
This should be fixed by d55399ffec224206ea324e83bb8ead1e9ca1eddc in the
'next' branch of ceph.git. Can you test it out and see if that allows
journal replay to complete?
Thanks!
sage
http://tracker.newdream.net/issues/1019
On Tue, 19 Apr 2011, Mark Nigh wrote:
Hi there. I tried building the latest in the testing branch (commit id e90a3b623), and things didn't go all that well. Using a 2.6.36 client with commit d91f2438 reverted and the 'ceph: fix small seq message skipping' commit added, I was able to mount the test file system, but when I tried to
On Tue, 9 Nov 2010, Theodore Ts'o wrote:
hi,
now on git/unstable - no more git/master.
Started with vstart.sh and:
export CEPH_NUM_MON=1
export CEPH_NUM_OSD=1
export CEPH_NUM_MDS=3
Right after the first file creation (mktemp /pathto/testspace/ceph_basiccheck_testspace.) the mds0 crashed, and the kclient now hangs.
do i need the
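For context, the dev-cluster startup described above would look roughly like this from the root of a built ceph source tree (a sketch; the -d/-n flags are taken from the vstart invocation mentioned elsewhere in this thread, and the log paths are assumptions based on it):

```shell
# One monitor, one OSD, three MDS daemons; -n creates a fresh cluster,
# -d enables debug output.
export CEPH_NUM_MON=1
export CEPH_NUM_OSD=1
export CEPH_NUM_MDS=3
./vstart.sh -d -n
# Per-daemon logs then land in out/, e.g. out/mds.a, out/mds.b, ...
```

With three MDS daemons and only one OSD, this setup exercises MDS clustering specifically, which is where the crash appears.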
hi,
updated to the new git/unstable; mds0 crashed on bonnie++.
rev: 0e67718a365b42969e785f544ea3b4258bb2407f
- Thomas
2010-09-30 20:50:37.349983 7f7ac0f83710 mds0.server dest