Josh Pieper wrote:
Josh Durgin wrote:
On 02/03/2012 10:19 AM, Josh Pieper wrote:
I have a Windows 7 guest running under kvm/libvirt with RBD as a
backend to a cluster of 3 OSDs. With this setup, I am seeing behavior
that looks suspiciously like disk corruption in the guest VM executing
some of our workloads.
For instance, in one occurrence, there is a python function that
'ceph osd dump' you're in the
clear.
sage
http://tracker.newdream.net/issues/1942
On Sat, 14 Jan 2012, Josh Pieper wrote:
I just upgraded our test cluster to 0.40, and immediately after
starting up get asserts in all the OSDs. I've inlined a relevant
backtrace below; is there anything else that would be useful for
debugging?
Our test cluster is 3 ubuntu 11.10 amd64 machines, each with a mon and
osd.
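For context, a cluster of this shape would typically be described by a ceph.conf along these lines. This is a minimal sketch only; the hostnames, addresses, and daemon IDs are placeholders, not taken from the report:

```ini
[global]
    ; illustrative layout for 3 nodes, each running a mon and an osd
    auth supported = none

[mon.a]
    host = node1
    mon addr = 192.168.0.1:6789

[mon.b]
    host = node2
    mon addr = 192.168.0.2:6789

[mon.c]
    host = node3
    mon addr = 192.168.0.3:6789

[osd.0]
    host = node1

[osd.1]
    host = node2

[osd.2]
    host = node3
```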
Looking at an
Sage Weil wrote:
Hi Josh,
Are you
Sage,
Those two are attached.
-Josh
Sage Weil wrote:
Hi Josh,
Can you attach one of your OSDmaps with the poison entries? Between
ceph osd getmap 149 -o /tmp/149
ceph osd getmap 155 -o /tmp/155
I should see one of them.
Thanks!
sage
Attachment: osdmaps.tar.gz
, though my test
still needs to be better about the monitor workload it presents.
In any case, you should have better luck with the current master.
Thanks!
sage
On Sat, 19 Nov 2011, Josh Pieper wrote:
I observed the following two crashes using the same test setup I have
had for the previous reports. 3x ubuntu 11.04 amd64 nodes with an rbd
load, this time each running cc5b5e17.
Links to the full logs are below each stack trace snip.
mon/OSDMonitor.cc: In function 'MOSDMap*
Gregory Farnum wrote:
On Tue, Nov 15, 2011 at 3:55 AM, Christoph Hellwig h...@infradead.org wrote:
I hit the same when trying to bring up a test cluster on a single
physical machine. As soon as moved to vstart.sh I couldn't reproduce
it anymore.
Hmm, interesting that it doesn't happen
Hello,
I was trying to test http://tracker.newdream.net/issues/1708, but in
the process of attempting to do so, I keep getting asserts in one of
my monitors.
Pretty much all I have done is bring up a simple 3 node cluster, each
with a mon, osd, and mds. All are using amd64 ubuntu 11.10, with git
9a60445de09a7c9de096cecb1b63638be52438c2.
From: Josh Pieper j...@pobox.com
Date: Fri, 11 Nov 2011 08:19:02 -0500
Subject: [PATCH 1/2] Resolve gcc warnings.
These should have no functional changes:
* Check errors from functions that currently cannot return any
* Initialize
From e4b9ae75724022cd1a557fde967b130888a35870 Mon Sep 17 00:00:00 2001
From: Josh Pieper j...@pobox.com
Date: Fri, 11 Nov 2011 08:19:55 -0500
Subject: [PATCH 2/2] rgw: Fix some merge problems uncovered by gcc warnings:
* a refactor in e2100bce left the mod_ptr and unmod_ptr members set