A few simple questions about the 2.6.16+ kernel and software RAID.
Does software RAID in the 2.6.16 kernel take advantage of SMP?
Does software RAID take advantage of 64-bit CPU(s)?
If there are any good web sites that cover this information, a link
would be GREAT!
-Adam Talbot
On Monday May 22, [EMAIL PROTECTED] wrote:
A few simple questions about the 2.6.16+ kernel and software RAID.
Does software RAID in the 2.6.16 kernel take advantage of SMP?
Not exactly. RAID5/6 tends to use just one CPU for parity
calculations, but that frees up other CPUs for doing other work.
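If you want to see that behaviour for yourself, here is a rough sketch
(thread names like md0_raid5 depend on your array names and kernel):

# List the arrays and their personalities; each raid5/raid6 array gets
# one kernel thread that does the parity work.
cat /proc/mdstat

# The per-array thread shows up as e.g. md0_raid5; 'psr' is the CPU it
# last ran on and 'pcpu' roughly how busy it is during a resync or
# heavy writes.
ps -eo pid,psr,pcpu,comm | grep raid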
Neil, hello.
1.
I have applied the common path according to
http://www.spinics.net/lists/raid/msg11838.html as much as I can.
It looks OK in terms of throughput.
Before I continue to a non-common path (step 3), I do not understand
raid0_mergeable_bvec entirely.
As I understand it, the code checks
On Tuesday May 23, [EMAIL PROTECTED] wrote:
Neil, hello.
1.
I have applied the common path according to
http://www.spinics.net/lists/raid/msg11838.html as much as I can.
Great. I look forward to seeing the results.
It looks OK in terms of throughput.
Before I continue to a non-common
-Neil
I was not looking for any direct advantage. It is more a money vs.
performance thing. I have an old dual-proc Opteron motherboard. I am
going with 64-bit, but it is much cheaper if I just buy a nice
single-proc board instead of buying two Opterons for my dual-proc
board. If I could get
Bruno Seoane wrote:
mdadm -C -l5 -n5
-c=128 /dev/md0 /dev/sdb1 /dev/sdd1 /dev/sde1 /dev/sdc1 /dev/sda1
I took the device order from the mdadm output on a working device. Is this
the way the command is supposed to be assembled?
Is there anything else I should consider, or any other
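For what it's worth, a rough sketch of how I would cross-check the
geometry against the working array before re-creating it (long-option
form; the device list is just the one from your command):

# Report level, chunk size and the slot order of each member of the
# running array.
mdadm --detail /dev/md0

# The same information read back from a member's on-disk superblock.
mdadm --examine /dev/sdb1

# Create with the long options; --chunk is given in kilobytes.
# Re-creating over an existing array can destroy data if the geometry
# does not match, so compare against --detail/--examine output first.
mdadm --create /dev/md0 --level=5 --raid-devices=5 --chunk=128 \
    /dev/sdb1 /dev/sdd1 /dev/sde1 /dev/sdc1 /dev/sda1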
The following is a revision of the patch with the suggested changes.
-Eliminate the wait_for_block_ops queue
-Simplify the code by tracking the operations at the stripe level, not
the block level
-Integrate the work struct into stripe_head (remove the need for memory
allocation)
-Make the work
Hi,
I upgraded my kernel from 2.6.15.6 to 2.6.16.16 and now 'iostat -x
1' permanently shows 100% utilisation on each disk that is a member of an md
array. I asked my friend who is using 3 boxes with 2.6.16.2, 2.6.16.9 and
2.6.16.11 and raid1, and he reported the same. Does it work correctly
for anyone? I don't