On Thu, Jan 18, 2007 at 07:22:17PM -0600, Daniel Korstad wrote:
Lines 579 thru 582
578 (sbv + 1024 + sizeof(struct bitmap_super_s));
579 printf("Size was %llu\n", __le64_to_cpu(sb->data_size));
580 sb->data_size = __cpu_to_le64(
581 misc->device_size - __le64_to_cpu(sb->data_offset));
582 print
On 1/18/07, Yuri Tikhonov <[EMAIL PROTECTED]> wrote:
Hello, Dan.
Hello.
It seems there is a bug in your 06.11.30 raid acceleration patch-set. I tried
to run the Linux s/w RAID-5 driver patched with your 06.11.30 patch-set and
found that it fails during write operations when the RAID-5 ar
On Wed, 17 Jan 2007, Sevrin Robstad wrote:
> I'm suffering from bad performance on my RAID5.
>
> a "echo check >/sys/block/md0/md/sync_action"
>
> gives a speed at only about 5000K/sec , and HIGH load average :
>
> # uptime
> 20:03:55 up 8 days, 19:55, 1 user, load average: 11.70, 4.04, 1.52
I have Fedora Core 4 running on a 64-bit (x86_64) system;
[EMAIL PROTECTED] ~]# uname -a
Linux gateway.korstad.net 2.6.17-1.2142_FC4 #1 Tue Jul 11 22:41:06 EDT 2006
x86_64 x86_64 x86_64 GNU/Linux
Running;
[EMAIL PROTECTED] ~]# mdadm --version
mdadm - v2.5.4 - 13 October 2006
I though
Steve Cousins wrote:
Sevrin Robstad wrote:
I'm suffering from bad performance on my RAID5.
a "echo check >/sys/block/md0/md/sync_action"
gives a speed at only about 5000K/sec , and HIGH load average :
What do you get when you try something like:
time dd if=/dev/zero of=/mount-point/test.dat bs=1024k count=1024
Sevrin Robstad wrote:
I've tried to increase the cache size - I can't measure any
difference.
It probably won't help small writes, but large writes will go faster
with a stripe cache of size num_disks*chunk_size*2 or larger.
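For example (md0 and the numbers below are assumptions, not taken from this
thread; each stripe cache entry holds one 4 KiB page per member disk, so the
memory cost is roughly stripe_cache_size * 4 KiB * num_disks):
  cat /sys/block/md0/md/stripe_cache_size       # kernel default is 256
  # e.g. 4096 entries on a 6-disk array uses about 4096 * 4 KiB * 6 = 96 MiB
  echo 4096 > /sys/block/md0/md/stripe_cache_size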
Raz Ben-Jehuda(caro) wrote:
did u increase the stripe cache size ?
Mark Hahn wrote:
Chunk Size : 256K
well, that's pretty big. it means 6*256K is necessary to do a
whole-stripe update; your stripe cache may be too small to be effective.
If they are on the PCI bus, that is about right; you could probably be getting
10-15MB/s at best, but what you are seeing is about right. I
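To put numbers on the stripe cache point above (assuming the default
stripe_cache_size of 256, and that each cache entry covers one 4 KiB page per
member disk): 256 entries span only 256 * 4 KiB = 1 MiB of stripe width per
disk, i.e. four 256 KiB chunks, so only about four whole-stripe updates can be
assembled in the cache at a time before entries have to be recycled.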
Steve Cousins wrote:
Sevrin Robstad wrote:
I'm suffering from bad performance on my RAID5.
a "echo check >/sys/block/md0/md/sync_action"
gives a speed at only about 5000K/sec , and HIGH load average :
What do you get when you try something like:
time dd if=/dev/zero of=/mount-point/test.dat
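One caveat with that kind of test (a general dd point, not from Steve's mail):
a 1 GiB write from /dev/zero can land largely in the page cache, so timing the
sync as well gives a number closer to what the array actually sustains, e.g.:
  time sh -c 'dd if=/dev/zero of=/mount-point/test.dat bs=1024k count=1024; sync'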
gives a speed at only about 5000K/sec , and HIGH load average :
# uptime
20:03:55 up 8 days, 19:55, 1 user, load average: 11.70, 4.04, 1.52
loadav is a bit misleading - it doesn't mean you had >11 runnable jobs.
you might just have more jobs waiting on IO, being starved by the
IO done by resync.
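If the check/resync IO is what is starving the foreground jobs, the md speed
limits can be inspected and adjusted (md0 and the value below are just
examples):
  cat /proc/mdstat                          # current check/resync progress and speed
  cat /sys/block/md0/md/sync_speed          # current rate in KB/sec
  # per-array cap (lower it to favour foreground IO, raise it to finish sooner)
  echo 5000 > /sys/block/md0/md/sync_speed_max
  # system-wide equivalents
  sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max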
In order to understand what is going on in your system you should:
1. determine the access pattern to the volume. meaning:
sequential ? random access ?
sync io ? async io ?
mostly read ? mostly write ?
Are you using small buffers ? big buffers ?
2. you should test the controller capabilities, for example as sketched below:
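A crude way to do step 2 (the member device names are assumptions; read the
raw disks, not the md device, so md itself stays out of the picture):
  # sequential read from each member in parallel, bypassing the page cache
  for d in sdb sdc sdd sde sdf sdg; do
      dd if=/dev/$d of=/dev/null bs=1M count=512 iflag=direct &
  done
  wait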
This email lists some known regressions in 2.6.20-rc5 compared to 2.6.19.
If you find your name in the Cc header, you are either submitter of one
of the bugs, maintainer of an affected subsystem or driver, a patch
of yours caused a breakage, or I'm considering you in any other way possibly
involved.
Sevrin Robstad wrote:
I'm suffering from bad performance on my RAID5.
a "echo check >/sys/block/md0/md/sync_action"
gives a speed at only about 5000K/sec , and HIGH load average :
What do you get when you try something like:
time dd if=/dev/zero of=/mount-point/test.dat bs=1024k count=1024
I've tried to increase the cache size - I can't measure any difference.
Raz Ben-Jehuda(caro) wrote:
did u increase the stripe cache size ?
On 1/18/07, Justin Piszcz <[EMAIL PROTECTED]> wrote:
Sevrin Robstad wrote:
> I'm suffering from bad performance on my RAID5.
>
> a "echo check >/sys/
Justin Piszcz wrote:
I'm suffering from bad performance on my RAID5.
a "echo check >/sys/block/md0/md/sync_action"
gives a speed at only about 5000K/sec , and HIGH load average :
# uptime
20:03:55 up 8 days, 19:55, 1 user, load average: 11.70, 4.04, 1.52
kernel is 2.6.18-1.2257.fc5
mdadm is
Hi all,
I've hit the following bug while unmounting an xfs partition:
----------- [cut here ] --------- [please bite here ] ---------
Kernel BUG at drivers/md/md.c:5035
invalid opcode: [1] SMP
CPU 0
Modules linked in: unionfs sbp2 ohci1394 ieee1394 raid456 xor
w83627ehf i2c_isa i2c_core
Pid:
did u increase the stripe cache size ?
On 1/18/07, Justin Piszcz <[EMAIL PROTECTED]> wrote:
Sevrin Robstad wrote:
> I'm suffering from bad performance on my RAID5.
>
> a "echo check >/sys/block/md0/md/sync_action"
>
> gives a speed at only about 5000K/sec , and HIGH load average :
>
> # uptime
Hello, Dan.
It seems there is a bug in your 06.11.30 raid acceleration patch-set. I tried
to run the Linux s/w RAID-5 driver patched with your 06.11.30 patch-set and
found that it fails during write operations when the RAID-5 array consists of 6
or more drives (I tested up to 8 drives).
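For reference, an array of the size Yuri describes could be set up along these
lines (device names, filesystem and the write workload are assumptions, not
taken from his report):
  # assumed member partitions; adjust to the actual setup
  mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[b-g]1
  mkfs.ext3 /dev/md0
  mount /dev/md0 /mnt/test
  # a streaming write exercises the RAID-5 write path
  dd if=/dev/zero of=/mnt/test/bigfile bs=1024k count=1024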