Hi,
the ceph cluster has been running under heavy load for the last 13 hours
without a problem; dmesg is empty and the performance is good.
-martin
On 23.05.2012 21:12, Martin Mailand wrote:
this patch has been running for 3 hours without a bug and without the warning.
I will let it run overnight
Hi Josef,
this patch has been running for 3 hours without a bug and without the warning.
I will let it run overnight and report tomorrow.
It looks very good ;-)
-martin
On 23.05.2012 17:02, Josef Bacik wrote:
Ok give this a shot, it should do it. Thanks,
Hi Josef,
there was one line before the bug.
[ 995.725105] couldn't find orphan item for 524
On 18.05.2012 16:48, Josef Bacik wrote:
Ok hopefully this will print something out that makes sense. Thanks,
-martin
[ 241.754693] Btrfs loaded
[ 241.755148] device fsid
Hi Josef,
now I get
[ 2081.142669] couldn't find orphan item for 2039, nlink 1, root 269,
root being deleted no
-martin
On 18.05.2012 21:01, Josef Bacik wrote:
*sigh* ok try this, hopefully it will point me in the right direction. Thanks,
[ 126.389847] Btrfs loaded
[ 126.390284]
Hi,
I got the same warning but triggered it differently: I created a new
cephfs on top of btrfs via mkcephfs, and the command then hangs.
[ 100.643838] Btrfs loaded
[ 100.644313] device fsid 49b89a47-76a0-45cf-9e4a-a7e1f4c64bb8 devid 1
transid 4 /dev/sdc
[ 100.645523] btrfs: setting nodatacow
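The "setting nodatacow" line above comes from the mount options of the OSD's btrfs volume. A minimal sketch of such a mount, assuming /dev/sdc from the log above; the mountpoint is only an example, not taken from this report:

```shell
# Mount the OSD's btrfs volume with data COW disabled;
# the kernel then logs "btrfs: setting nodatacow" as seen above.
mount -t btrfs -o nodatacow /dev/sdc /mnt/osd0
```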
Hi Josef,
somehow I still get the kernel bug messages; I used your patch from the
16th against rc7.
-martin
On 16.05.2012 21:20, Josef Bacik wrote:
Hrm ok so I finally got some time to try and debug it and let the test run a
good long while (5 hours almost) and I couldn't hit either the
Hi Josef,
no, there was nothing above. Here is another dmesg output.
Was there anything above those messages? There should have been a WARN_ON() or
something. If not, that's fine; I just need to know one way or the other so I can
figure out what to do next. Thanks,
Josef
-martin
[
Hi Josef,
I hit exactly the same bug as Christian with your last patch.
-martin
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Hi Josef,
On 11.05.2012 21:16, Josef Bacik wrote:
Heh duh, sorry, try this one instead. Thanks,
With this patch I got this Bug:
[ 8233.828722] [ cut here ]
[ 8233.828737] kernel BUG at fs/btrfs/inode.c:2217!
[ 8233.828746] invalid opcode: [#1] SMP
[
Hi Josef,
On 11.05.2012 15:31, Josef Bacik wrote:
That previous patch was against btrfs-next, this patch is against 3.4-rc6 if you
are on mainline. Thanks,
I tried your patch against mainline, after a few minutes I hit this bug.
[ 1078.523655] [ cut here ]
[
Hi
I tried the branch on one of my ceph osd, and there is a big difference
in the performance.
The average request size stayed high, but after around an hour the kernel
crashed.
IOstat
http://pastebin.com/xjuriJ6J
Kernel trace
http://pastebin.com/SYE95GgH
-martin
On 23.01.2012 19:50,
Hi Chris,
great to hear that. Could you give me a ping once you have fixed it, then
I can retry it?
-martin
On 24.01.2012 20:40, Chris Mason wrote:
On Tue, Jan 24, 2012 at 08:15:58PM +0100, Martin Mailand wrote:
Hi
I tried the branch on one of my ceph osd, and there is a big
difference
From: Martin Mailand mar...@tuxadero.com
Reply-to: mar...@tuxadero.com
To: Sage Weil s...@newdream.net
CC: Christian Brunner c...@muc.de, ceph-de...@vger.kernel.org,
linux-btrfs@vger.kernel.org
Hi,
I have more or less the same setup as Christian and I suffer from the same
problems.
But as far as I can
Hi Stefan,
I think the machine has enough ram.
root@s-brick-003:~# free -m
             total       used       free     shared    buffers     cached
Mem:          3924       2401       1522          0         42       2115
-/+ buffers/cache:         243       3680
Swap:         1951          0
Hi Anand,
I changed the replication level of the rbd pool, from one to two.
ceph osd pool set rbd size 2
And then during the sync the bug happened, but today I could not
reproduce it.
So I do not have a testcase for you.
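The replication change described above can be applied and verified like this (a sketch; the pool name rbd and size 2 are from the report, the verification step is an addition):

```shell
# Raise the rbd pool's replication factor from one to two.
ceph osd pool set rbd size 2
# Read the setting back to confirm it took effect.
ceph osd pool get rbd size
# Watch cluster status while the extra replicas are synced.
ceph -w
```

The bug reportedly appeared during the sync that this change triggers, so watching ceph -w during recovery is the closest thing to a reproduction attempt.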
Best Regards,
martin
On 19.10.2011 17:02, Anand Jain wrote:
I
On 19.10.2011 11:49, David Sterba wrote:
On Tue, Oct 18, 2011 at 10:04:01PM +0200, Martin Mailand wrote:
[28997.273289] [ cut here ]
[28997.282916] kernel BUG at fs/btrfs/inode.c:1163!
1119     fi = btrfs_item_ptr(leaf, path->slots[0],
1120
Hi,
on one of my OSDs the ceph-osd task hung for more than 120 sec. The OSD
had almost no load, so it cannot be an overload problem. I think it is a
btrfs problem; could someone clarify?
This was in the dmesg.
[29280.890040] INFO: task btrfs-cleaner:1708 blocked for more than 120
Hi,
I have high IO-wait on the osds (ceph); the osds are running a v3.1-rc9
kernel.
I also experience high IO-rates, around 500IO/s reported via iostat.
Device:   rrqm/s  wrqm/s    r/s    w/s    rkB/s    wkB/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
sda
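The columns above are extended iostat output. A sketch of the invocation that produces them (interval and sample count are examples, not from the report):

```shell
# -x prints extended per-device statistics, including the
# avgrq-sz, avgqu-sz, await, svctm and %util columns quoted above;
# here: 1-second interval, 5 samples.
iostat -x 1 5
```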
Josef Bacik:
On Thu, Sep 15, 2011 at 11:44:09AM -0700, Sage Weil wrote:
On Tue, 13 Sep 2011, Liu Bo wrote:
On 09/11/2011 05:47 AM, Martin Mailand wrote:
Hi
I am hitting this warning reproducibly; the workload is a ceph osd,
the kernel is 3.1.0-rc5.
Have posted a patch for this:
http://marc.info
->delayed_ref_updates;
trans->delayed_ref_updates = 0;
But on the other hand I am quite new to git; how can I get your latest
commit?
Best Regards,
Martin
On 16.09.2011 16:37, Josef Bacik wrote:
On 09/16/2011 10:09 AM, Martin Mailand wrote:
Hi Josef,
after a quick test it seems that I