Hi, Tommi:
This is what I see:
lrwxrwxrwx 1 root root 10 2011-12-07 16:48 foo1:0 -> ../../rbd0
lrwxrwxrwx 1 root root 10 2011-12-07 16:50 foo2:1 -> ../../rbd1
The extra numbers (:0 and :1) appended to the image names mean the
problem still exists.
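For reference, here is a minimal sketch of how stable pool/image names
can be derived from sysfs; it assumes the
/sys/bus/rbd/devices/<id>/{pool,name} attributes and is only an
illustration, not the shipped ceph-rbdnamer:

import sys

def rbd_name(devid):
    # the rbd kernel driver exposes per-device attributes in sysfs
    base = '/sys/bus/rbd/devices/%s' % devid
    pool = open(base + '/pool').read().strip()
    image = open(base + '/name').read().strip()
    return pool, image

if __name__ == '__main__':
    pool, image = rbd_name(sys.argv[1])
    # a udev rule could use this output to create stable
    # /dev/rbd/<pool>/<image> symlinks without the :N suffix
    print('%s %s' % (pool, image))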
-----Original Message-----
From: Tommi Virtanen
On Wed, 7 Dec 2011, Noah Watkins wrote:
> Stack trace from a simple Ceph client that does nothing more than open a file
> and call ceph_read(...) on it.
This just looks like a crash we've periodically been seeing in qa, but
haven't been able to reproduce with logging (or diagnose from the cores).
Stack trace from a simple Ceph client that does nothing more than open a
file and call ceph_read(...) on it.
- Noah
Hey,
I just wanted to note that I got this failure occasionally when I was
running ceph_read on issdm-29
@issdm-29:~$ time ./ceph_read /etc/ceph/ceph.conf /john.1gb.bin
client/
Noah:
Branch wip-messenger contains the patch you want (plus one or two
cleanups), or apply the below. Should handle it, although I'd like to
do a more thorough cleanup of this problem at a later time.
-Greg
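(The below is git format-patch output, so you can apply it directly
with git am.)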
From 8c4f4748e8b683f5b4ea939295793421c0ab7b61 Mon Sep 17 00:00:00 2001
From: Greg Farnum
2011/12/5 Sage Weil:
> Hi Christian,
>
> On Mon, 5 Dec 2011, Christian Brunner wrote:
>> I've just updated to 0.39. Everything seems to be fine, except one
>> minor thing I noticed:
>>
>> 'ceph -w' output stops after a few minutes. With "debug ms = 1" it
>> ends with these lines:
>>
>> 2011-12-05
On Wed, Dec 7, 2011 at 10:00, Tommi Virtanen wrote:
> def crush(pg):
>     all_osds = ['osd.0', 'osd.1', 'osd.2', ...]
>     result = []
>     # size is the number of copies; primary+replicas
>     while len(result) < size:
>         r = get_random_number()
Err I mean pseudorandom, based on pg. So bas
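Something runnable along those lines, with the choice keyed to the pg;
the md5-based hash is my own stand-in for illustration, not the real
CRUSH function:

import hashlib

all_osds = ['osd.0', 'osd.1', 'osd.2', 'osd.3', 'osd.4']
size = 3  # number of copies; primary+replicas

def crush(pg):
    result = []
    attempt = 0
    while len(result) < size:
        # pseudorandom, based on pg: the same pg always hashes to the
        # same sequence, so every client computes the same mapping
        key = ('%s/%d' % (pg, attempt)).encode()
        r = int(hashlib.md5(key).hexdigest(), 16)
        attempt += 1
        chosen = all_osds[r % len(all_osds)]
        if chosen in result:
            continue  # an OSD may hold at most one copy of a pg
        result.append(chosen)
    return result

print(crush('3.7f'))  # deterministic: identical on every host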
On Wed, Dec 7, 2011 at 06:38, Guido Winkelmann wrote:
> On Tuesday, 6 December 2011, at 11:51:45, you wrote:
>> PG = "placement group". When placing data in the cluster, objects are
>> mapped into PGs, and those PGs are mapped onto OSDs.
>
> What does the Object->PG mapping look like, do you map
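Many objects map onto each PG. A minimal sketch of that first step,
assuming a simple stable hash (the real code uses its own hash function
and a per-pool PG count):

import hashlib

pg_num = 128  # PGs in the pool

def object_to_pg(name):
    # the object name is hashed and folded into the fixed
    # number of placement groups, so the mapping is many-to-one
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return h % pg_num

for obj in ('john.1gb.bin', 'rb.0.0', 'another-object'):
    print('%s -> pg %d' % (obj, object_to_pg(obj)))

Placement is then decided per PG rather than per object, so rebalancing
moves whole PGs between OSDs.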
You will likely have to recompile your kernel. For LIO, recent
kernels should have the option available. For SCST, you'll need to
compile your kernel with some patches and likely some userspace
components. As for the SCST user guide docs, I've verified
that they are correct and work with rbd.
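(For mainline LIO the pieces you want are the target core and the iSCSI
fabric module; if I remember the config symbols right, that is
CONFIG_TARGET_CORE and CONFIG_ISCSI_TARGET.)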
On Tue, Dec 6, 2011 at 18:38, wrote:
> I have another little question. Could I map a specific image to a specific device?
>
> For example:
> Before re-boot
> id  pool  image  snap  device
> 0   rbd   foo1   -     /dev/rbd0
> 1   rbd   foo3   -     /dev/rbd2
>
> How could I
I pushed a patch to wip-d-lock that may fix this one, but unfortunately
don't have time to test this very carefully right now. Let us know if
that helps, or you can wait until next week.
The call path that was triggering both of these can be exercised by
restarting the ceph-mds daemon. Try ru
On Tuesday 06 December 2011, Amon Ott wrote:
> Merged in, and the bug seems to be fixed. No more deadlock warnings today.
Unfortunately, I got another deadlock message in the log today. Full log of
one boot time is attached.
Amon Ott
--
Dr. Amon Ott
m-privacy GmbH Tel: +49 30 24342334
Am Kö
Hi,
When building ceph 0.39 on Fedora 14, the build process fails with the
following messages:
CXXLD librados-config
/usr/bin/ld: ./.libs/libglobal.a(libcommon_la-HeartbeatMap.o): undefined
reference to symbol 'pthread_rwlock_wrlock@@GLIBC_2.2.5'
/usr/bin/ld: note: 'pthread_rwlock_wrlock@@GL
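(This looks like the Fedora 13+ DSO linking change, where ld stopped
resolving symbols through indirect library dependencies. If that is the
cause here, linking libpthread explicitly, e.g. ./configure
LDFLAGS="-lpthread" and then make, should get past it.)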
On Tuesday, 6 December 2011, at 11:51:45, you wrote:
> PG = "placement group". When placing data in the cluster, objects are
> mapped into PGs, and those PGs are mapped onto OSDs.
What does the Object->PG mapping look like: do you map more than one object onto
one PG, or do you sometimes map an ob
On 07.12.2011 02:05, Josh Durgin wrote:
> The caller expects psn_tab to be NULL when there are no snapshots or
> an error occurs. Leaving it unset in those cases results in calling
> g_free on an invalid address.
>
> Reported-by: Oliver Francke
> Signed-off-by: Josh Durgin
Thanks, applied to the block branch.
Kevin