On Thursday 02 February 2012, Gregory Farnum wrote:
On Wed, Feb 1, 2012 at 9:02 AM, Amon Ott <a@m-privacy.de> wrote:
Ceph should have recovered here. It might also be caused by this setting
that I tried for a while; it is off now:
mds standby replay = true
With this setting, if the active
Hi Josh,
Thank you for your reply!
This might mean the rbd image list object can't be read for some
reason, or the rbd tool is doing something weird that the rados tool
isn't. Can you share the output of 'ceph -s' and
'rbd ls --log-to-stderr --debug-ms 1 --debug-objecter 20 --debug-monc 20
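(For reference: the same debug levels can be set programmatically before
connecting, via the librados C API. A minimal, untested sketch; the option
names are assumed to match the ceph.conf keys, and error handling is
trimmed. Link with -lrados.)

#include <rados/librados.h>

int main(void)
{
    rados_t cluster;

    rados_create(&cluster, NULL);
    rados_conf_read_file(cluster, NULL);  /* default ceph.conf locations */
    rados_conf_set(cluster, "log_to_stderr", "true");
    rados_conf_set(cluster, "debug_ms", "1");
    rados_conf_set(cluster, "debug_objecter", "20");
    rados_conf_set(cluster, "debug_monc", "20");
    if (rados_connect(cluster) < 0)
        return 1;
    /* ...run the failing operation here to capture the logs... */
    rados_shutdown(cluster);
    return 0;
}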
I messed up a crush map the other day, mixing components of different
types in a single rule. The crushmap compiler didn't complain, but mons
and osds would crash when applying those rules. I had to use this patch
to recover the cluster. Only the second hunk was relevant, but I
figured a BUG_ON
Return -EINVAL rather than panic if iinfo->symlink_len and
inode->i_size do not match.
Also use kstrndup rather than kmalloc/memcpy.
Signed-off-by: Xi Wang <xi.w...@gmail.com>
---
fs/ceph/inode.c | 11 ++-
1 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/fs/ceph/inode.c
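(The diff is truncated here, but the pattern the changelog describes can be
sketched as a userspace analogue; the helper name below is hypothetical,
not the actual fs/ceph/inode.c code. Reject the length mismatch with
-EINVAL instead of panicking, and use strndup(), the userspace counterpart
of kstrndup(), instead of a manual kmalloc()/memcpy().)

#define _POSIX_C_SOURCE 200809L
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical analogue of the fix: the kernel code compares
 * iinfo->symlink_len with inode->i_size and must not BUG() on
 * inconsistent metadata from the MDS. */
static int fill_symlink(const char *buf, size_t symlink_len,
                        size_t i_size, char **out)
{
    if (symlink_len != i_size)
        return -EINVAL;                  /* fail gracefully, do not panic */
    *out = strndup(buf, symlink_len);    /* kstrndup() in the kernel */
    if (!*out)
        return -ENOMEM;
    return 0;
}

int main(void)
{
    char *link = NULL;

    if (fill_symlink("target", 6, 7, &link) == -EINVAL)
        printf("length mismatch rejected with -EINVAL\n");
    if (fill_symlink("target", 6, 6, &link) == 0) {
        printf("symlink: %s\n", link);
        free(link);
    }
    return 0;
}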
On 02/02/2012 05:28 PM, Gregory Farnum wrote:
On Thu, Feb 2, 2012 at 12:22 PM, Jim Schutt <jasc...@sandia.gov> wrote:
I found 0 instances of "waiting for commit" in all my OSD logs for my last
run.
So I never waited on the journal?
Looks like it. Interesting.
So far I'm looking at two
On 02/03/2012 12:51 AM, Masuko Tomoya wrote:
Hi Josh,
Thank you for your reply!
This might mean the rbd image list object can't be read for some
reason, or the rbd tool is doing something weird that the rados tool
isn't. Can you share the output of 'ceph -s' and
'rbd ls --log-to-stderr --debug-ms
On Feb 3, 2012, at 8:18 AM, Jim Schutt <jasc...@sandia.gov> wrote:
On 02/02/2012 05:28 PM, Gregory Farnum wrote:
On Thu, Feb 2, 2012 at 12:22 PM, Jim Schutt <jasc...@sandia.gov> wrote:
I found 0 instances of "waiting for commit" in all my OSD logs for my last
run.
So I never waited on the
On Fri, 3 Feb 2012, Jim Schutt wrote:
On 02/02/2012 05:28 PM, Gregory Farnum wrote:
On Thu, Feb 2, 2012 at 12:22 PM, Jim Schutt <jasc...@sandia.gov> wrote:
I found 0 instances of "waiting for commit" in all my OSD logs for my last
run.
So I never waited on the journal?
Looks like
I have a Windows 7 guest running under kvm/libvirt with RBD as a
backend to a cluster of 3 OSDs. With this setup, I am seeing behavior
that looks suspiciously like disk corruption in the guest VM executing
some of our workloads.
For instance, in one occurrence, there is a python function that
Hi List,
one of my test mds servers died a few days ago (hardware crash); I will
not buy a new one.
Is there any way to remove this laggy mds?
2012-02-03 20:38:53.801623 mds e86436: 2/2/1 up
{0=0=up:resolve,1=0=up:resolve(laggy or crashed)}
2012-02-03 20:39:08.943880 mds e86437: 2/2/1
On 02/03/2012 10:19 AM, Josh Pieper wrote:
I have a Windows 7 guest running under kvm/libvirt with RBD as a
backend to a cluster of 3 OSDs. With this setup, I am seeing behavior
that looks suspiciously like disk corruption in the guest VM executing
some of our workloads.
For instance, in one
Josh Durgin wrote:
On 02/03/2012 10:19 AM, Josh Pieper wrote:
I have a Windows 7 guest running under kvm/libvirt with RBD as a
backend to a cluster of 3 OSDs. With this setup, I am seeing behavior
that looks suspiciously like disk corruption in the guest VM executing
some of our workloads.
On Fri, 2012-02-03 at 09:55 -0500, Xi Wang wrote:
Return -EINVAL rather than panic if iinfo->symlink_len and
inode->i_size do not match.
Also use kstrndup rather than kmalloc/memcpy.
Signed-off-by: Xi Wang <xi.w...@gmail.com>
Looks good, though it might be good to at least call
WARN_ON(). What
Return -EINVAL rather than panic if iinfo->symlink_len and inode->i_size
do not match.
Also use kstrndup rather than kmalloc/memcpy.
Signed-off-by: Xi Wang <xi.w...@gmail.com>
---
fs/ceph/inode.c | 11 ++-
1 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/fs/ceph/inode.c
On Fri, 2012-02-03 at 15:49 -0500, Xi Wang wrote:
On Feb 3, 2012, at 3:24 PM, Alex Elder wrote:
Looks good, though it might be good to at least call
WARN_ON(). What do you think?
Sounds good to me. I will send a v2. Thanks.
No need. I can do it for you. -Alex
- xi
On Fri, Feb 3, 2012 at 11:48, Jens Rehpöhler <j...@shadow.gt.owl.de> wrote:
one of my test mds servers died a few days ago (hardware crash); I will
not buy a new one.
Is there any way to remove this laggy mds?
Start the mds daemon somewhere. If you don't want to fix/replace the
hardware,
On Fri, Feb 3, 2012 at 1:19 PM, Tommi Virtanen
tommi.virta...@dreamhost.com wrote:
On Fri, Feb 3, 2012 at 11:48, Jens Rehpöhler <j...@shadow.gt.owl.de> wrote:
one of my test mds servers died a few days ago (hardware crash); I will
not buy a new one.
Is there any way to remove this laggy mds
Hi Josh,
Thank you for your comments.
debug osd = 20
debug ms = 1
debug filestore = 20
I added this to the osd section of ceph.conf and ran /etc/init.d/ceph
stop and then start.
The output of OSD.log when 'rbd list' was executed is below.
-
2012-02-04 04:29:22.457990 7fe0e08fb710 osd.0
On 02/03/2012 02:14 PM, Masuko Tomoya wrote:
Hi Josh,
Thank you for your comments.
debug osd = 20
debug ms = 1
debug filestore = 20
I added this to the osd section of ceph.conf and ran /etc/init.d/ceph
stop and then start.
The output of OSD.log when 'rbd list' was executed is below.
Hi,
The output of 'ceph pg dump' is below.
root@ceph01:~# ceph pg dump
2012-02-04 07:50:15.453151 mon <- [pg,dump]
2012-02-04 07:50:15.453734 mon.0 -> 'dumped all in format plain' (0)
version 63
last_osdmap_epoch 37
last_pg_scan 1
full_ratio 0.95
nearfull_ratio 0.85
pg_stat objects mip degr
On 02/03/2012 02:54 PM, Masuko Tomoya wrote:
Hi,
The output of 'ceph pg dump' is below.
root@ceph01:~# ceph pg dump
2012-02-04 07:50:15.453151 mon <- [pg,dump]
2012-02-04 07:50:15.453734 mon.0 -> 'dumped all in format plain' (0)
version 63
last_osdmap_epoch 37
last_pg_scan 1
full_ratio 0.95
Hi,
I upgraded ceph to 0.41 and re-ran mkcephfs.
I found that my issue is fixed.
-
root@ceph01:~# rbd list
pool rbd doesn't contain rbd images
root@ceph01:~# rbd create test --size 1024
root@ceph01:~# rbd list
test
-
Josh, thank you for your advice.
2012/2/3 Josh Durgin
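(For completeness, the same create-then-list round trip can be driven
through the librbd C API. A hedged sketch, with pool 'rbd' and image
'test' as in the transcript above and most error handling trimmed; link
with -lrados -lrbd.)

#include <stdio.h>
#include <string.h>
#include <rados/librados.h>
#include <rbd/librbd.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    int order = 0;                /* 0 selects the default object size */
    char names[1024];
    size_t names_len = sizeof(names);

    rados_create(&cluster, NULL);
    rados_conf_read_file(cluster, NULL);  /* default ceph.conf locations */
    if (rados_connect(cluster) < 0)
        return 1;
    rados_ioctx_create(cluster, "rbd", &io);

    /* 'rbd create test --size 1024' takes megabytes; librbd takes bytes. */
    rbd_create(io, "test", 1024ULL << 20, &order);

    /* rbd_list() fills the buffer with consecutive NUL-terminated names. */
    if (rbd_list(io, names, &names_len) >= 0) {
        char *p;
        for (p = names; p < names + names_len && *p; p += strlen(p) + 1)
            printf("%s\n", p);
    }

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}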
Hi, all.
I'm trying to attach an rbd volume to a KVM instance,
but I have a problem.
Could you help me?
---
I tried to attach an rbd volume on ceph01 to an instance on compute1 with
the virsh command.
root@compute1:~# virsh attach-device test-ub16 /root/testvolume.xml
error: Failed to attach device from
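(The equivalent of 'virsh attach-device test-ub16 /root/testvolume.xml'
through the libvirt C API is virDomainAttachDevice(). A hypothetical
sketch; the disk XML below is one plausible shape for an rbd network
disk, not the poster's actual testvolume.xml. Link with -lvirt.)

#include <stdio.h>
#include <libvirt/libvirt.h>

static const char *disk_xml =
    "<disk type='network' device='disk'>"
    "  <driver name='qemu' type='raw'/>"
    "  <source protocol='rbd' name='rbd/test'>"
    "    <host name='ceph01' port='6789'/>"
    "  </source>"
    "  <target dev='vdb' bus='virtio'/>"
    "</disk>";

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virDomainPtr dom;

    if (!conn)
        return 1;
    dom = virDomainLookupByName(conn, "test-ub16");
    if (dom && virDomainAttachDevice(dom, disk_xml) < 0)
        fprintf(stderr, "attach failed; is qemu built with rbd support?\n");
    if (dom)
        virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}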