Justin hello,
I have tested 32-to-64-bit porting of Linux raid5, XFS and LVM;
it worked, though I cannot say I have tested it thoroughly. It was a POC.
On 4/28/07, Justin Piszcz [EMAIL PROTECTED] wrote:
With correct CC'd address.
On Sat, 28 Apr 2007, Justin Piszcz wrote:
Hello--
Had a quick
On 4/16/07, Raz Ben-Jehuda(caro) [EMAIL PROTECTED] wrote:
On 4/13/07, Neil Brown [EMAIL PROTECTED] wrote:
On Saturday March 31, [EMAIL PROTECTED] wrote:
4.
I am going to work on this with other configurations, such as raid5 arrays
with more disks and raid50. I will be happy to hear your
On 4/2/07, Dan Williams [EMAIL PROTECTED] wrote:
On 3/30/07, Raz Ben-Jehuda(caro) [EMAIL PROTECTED] wrote:
Please see below.
On 8/28/06, Neil Brown [EMAIL PROTECTED] wrote:
On Sunday August 13, [EMAIL PROTECTED] wrote:
well ... me again
Following your advice
I added
On 3/31/07, Bill Davidsen [EMAIL PROTECTED] wrote:
Raz Ben-Jehuda(caro) wrote:
Please see below.
On 8/28/06, Neil Brown [EMAIL PROTECTED] wrote:
On Sunday August 13, [EMAIL PROTECTED] wrote:
well ... me again
Following your advice
I added a deadline for every WRITE stripe head
Please see below.
On 8/28/06, Neil Brown [EMAIL PROTECTED] wrote:
On Sunday August 13, [EMAIL PROTECTED] wrote:
well ... me again
Following your advice
I added a deadline for every WRITE stripe head when it is created.
In raid5_activate_delayed I checked if the deadline has expired and if
I suggest you test all drives concurrently with dd.
Load dd on sda, then sdb, slowly one after the other, and
see whether the throughput degrades. Use iostat.
Furthermore, dd is not the right measure for random access.
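Something along these lines would do it (a rough sketch -- sda/sdb/sdc/sdd are
placeholders for whatever disks actually make up the array):

  # in one terminal: watch per-disk throughput
  iostat -x 2

  # in another terminal: add one sequential reader at a time
  for d in sda sdb sdc sdd; do
      dd if=/dev/$d of=/dev/null bs=1M count=4096 &   # one sequential reader per disk
      sleep 30                                        # let it settle before adding the next
  done
  wait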
On 2/10/07, Bill Davidsen [EMAIL PROTECTED] wrote:
Justin Piszcz wrote:
On Sat,
On 2/10/07, Eyal Lebedinsky [EMAIL PROTECTED] wrote:
I have a six-disk RAID5 over sata. First two disks are on the mobo and last four
are on a Promise SATA-II-150-TX4. The sixth disk was added recently and I
decided
to run a 'check' periodically, and started one manually to see how long it
capability.
Meaning:
see if dd'ing each disk in the system separately reduces the total
throughput.
On 1/18/07, Sevrin Robstad [EMAIL PROTECTED] wrote:
I've tried to increase the cache size - I can't measure any difference.
Raz Ben-Jehuda(caro) wrote:
Did you increase the stripe cache?
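I mean the raid5 stripe cache; on reasonably recent kernels it is exposed per
array through sysfs (md0 here is only a placeholder for your array):

  cat /sys/block/md0/md/stripe_cache_size          # default is 256
  echo 4096 > /sys/block/md0/md/stripe_cache_size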
On 12/12/06, Bill Davidsen [EMAIL PROTECTED] wrote:
Neil Brown wrote:
On Friday December 8, [EMAIL PROTECTED] wrote:
I have measured very slow write throughput for raid5 as well, though
2.6.18 does seem to have the same problem. I'll double check and do a
git bisect and see what I can come
Furthermore, hw controllers are much less feature-rich than sw raid:
many different stripe sizes, stripe cache tuning, and so on.
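For example, the chunk size is chosen at array creation time and the readahead
can be changed at any time (a sketch; device names and values are only
illustrative):

  mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=1024 /dev/sd[bcde]
  blockdev --setra 8192 /dev/md0      # readahead, in 512-byte sectors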
On 25 Aug 2006 23:50:34 -0400, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Hardware RAID can be (!= is) more tolerant of serious drive failures
where a single drive locks
Well ... me again.
Following your advice,
I added a deadline for every WRITE stripe head when it is created.
In raid5_activate_delayed I checked if the deadline has expired, and if not I am
setting the sh to PREREAD_ACTIVE mode.
This small fix (and in a few other places in the code) reduced the
Neil hello.
You say in raid5.h:
...
 * Whenever the delayed queue is empty and the device is not plugged, we
 * move any strips from delayed to handle and clear the DELAYED flag
 * and set PREREAD_ACTIVE.
...
I do not understand how one can move from delayed if delayed is empty.
thank you
--
Raz
Neil hello.
I have been looking at the raid5 code trying to understand why write
performance is so poor.
If I am not mistaken, it seems that you issue a write of the size of
one page and no more, no matter what buffer size I am using.
1. Is this page directed only to the parity disk?
2. How
Neil hello,
if I am not mistaken here:
in the first instance of "if (bi)" ...
...
you return without setting it to NULL:
+static struct bio *remove_bio_from_retry(raid5_conf_t *conf)
+{
+	struct bio *bi;
+
+	bi = conf->retry_read_aligned;
+	if (bi) {
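+		/* bi is returned below without conf->retry_read_aligned being cleared -- the issue raised above */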
--
Neil hello
Sorry for the delay; too many things to do.
I have implemented all that was said in:
http://www.spinics.net/lists/raid/msg11838.html
As always I have some questions:
1. mergeable_bvec
I did not understand it at first, I must admit. Now I do not see how it
differs from
the one of raid0.
Neil hello.
1.
I have applied the common path according to
http://www.spinics.net/lists/raid/msg11838.html as much as I can.
It looks OK in terms of throughput.
Before I continue to the non-common path (step 3), I do not understand
raid0_mergeable_bvec entirely.
As I understand it, the code checks
Neil hello
I am measuring read performance of two raid5 arrays with 7 SATA disks, chunk size 1MB.
When I set the stripe_cache_size to 4096 I get 240 MB/s. IO'ing from
the two raids ended with 270 MB/s.
I have added code in make_request which bypasses the raid5 logic in
the case of a read.
It looks like
Maybe Lustre?
On 4/13/06, Erik Mouw [EMAIL PROTECTED] wrote:
On Mon, Apr 10, 2006 at 05:24:34PM -0400, Jon Miller wrote:
I have two machines which have redundant paths to the same shared scsi
disk. I've had no problem creating the multipath'ed device md0 to
handle my redundant pathing. But
is the specific slab that is used for the caching?
Thanks,
Jon
On 4/13/06, Raz Ben-Jehuda(caro) [EMAIL PROTECTED] wrote:
Maybe Lustre?
On 4/13/06, Erik Mouw [EMAIL PROTECTED] wrote:
On Mon, Apr 10, 2006 at 05:24:34PM -0400, Jon Miller wrote:
I have two machines which have redundant paths
Neil/Jens Hello.
Hope this is not too much bother for you.
Question: how does the pseudo device (/dev/md) change the
IO sizes going down to the disks?
Explanation:
I am using software raid5, chunk size 1024K, 4 disks.
I have made a hook in make_request in order to bypass
the raid5 IO
I was referring to bios reaching make_request in raid5.c.
Let me be more precise.
I am dd'ing: dd if=/dev/md1 of=/dev/zero bs=1M count=1 skip=10
I have added the following printk in make_request: printk("%d:", bio->bi_size);
I am getting sector-sized bios: 512:512:512:512:512
I suppose they are gathered in
man .. very very good.
blockdev --getsz says 512.
On 3/29/06, Neil Brown [EMAIL PROTECTED] wrote:
On Wednesday March 29, [EMAIL PROTECTED] wrote:
I was referring to bios reaching make_request in raid5.c.
Let me be more precise.
I am dd'ing dd if=/dev/md1 of=/dev/zero bs=1M count=1
I have been playing with raid5 and I noticed that the arriving bio sizes
are 1 sector.
Why is that, and where is it set?
thank you
--
Raz
Neil.
What is the stripe_cache exactly?
First, here are some numbers.
Setting it to 1024 gives me 85 MB/s.
Setting it to 4096 gives me 105 MB/s.
Setting it to 8192 gives me 115 MB/s.
md.txt does not say much about it, just that it is the number of
entries.
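(If I read the code right, each entry caches one page per member disk, so the
memory used is roughly stripe_cache_size x 4 KB x number of disks; e.g.
8192 x 4 KB x 7 disks is about 224 MB.)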
Here are some tests I have made:
Neil hello.
I have a performance question.
I am using raid5 with a stripe size of 1024K over 4 disks.
I am benchmarking it with an asynchronous tester.
This tester submits 100 IOs of size 1024K -- the same as the stripe size.
It reads raw IO from the device; no file system is involved.
I am making the
It reads raw; no filesystem whatsoever.
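Roughly the same access pattern can be reproduced with dd alone, assuming a
coreutils new enough to support iflag=direct (md0 is a placeholder for the
array device):

  dd if=/dev/md0 of=/dev/null bs=1M count=100 iflag=direct   # 100 raw 1MB reads, no page cache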
On 3/6/06, Gordon Henderson [EMAIL PROTECTED] wrote:
On Mon, 6 Mar 2006, Raz Ben-Jehuda(caro) wrote:
Neil Hello .
I have a performance question.
I am using raid5 stripe size 1024K over 4 disks.
I am benchmarking it with an asynchronous tester
Is NCQ supported when setting the controller to JBOD instead of using HW raid?
On 3/5/06, Eric D. Mudama [EMAIL PROTECTED] wrote:
On 3/4/06, Steve Byan [EMAIL PROTECTED] wrote:
On Mar 4, 2006, at 2:10 PM, Jeff Garzik wrote:
Measurements on NCQ in the field show a distinct performance
Thank you, Mr Garzik.
Is there a list of all drivers and the features they provide?
Raz.
On 3/2/06, Jeff Garzik [EMAIL PROTECTED] wrote:
Jens Axboe wrote:
(don't top post)
On Thu, Mar 02 2006, Raz Ben-Jehuda(caro) wrote:
I can see that NCQ really bothers people.
I am using a Promise
, JaniD++ [EMAIL PROTECTED] wrote:
- Original Message -
From: Raz Ben-Jehuda(caro) [EMAIL PROTECTED]
To: JaniD++ [EMAIL PROTECTED]
Cc: Linux RAID Mailing List linux-raid@vger.kernel.org
Sent: Wednesday, January 04, 2006 2:49 PM
Subject: Re: raid5 read performance
1. do you want
I guess I was not clear enough.
I am using raid5 over 3 Maxtor disks; the chunk size is 1MB.
I measured the IO coming from one disk alone when I READ
from it with 1MB buffers, and I know that it is ~32MB/s.
I created raid0 over two disks and my throughput grew to
64 MB/s.
Doing the same thing
-
From: Raz Ben-Jehuda(caro) [EMAIL PROTECTED]
To: Mark Hahn [EMAIL PROTECTED]
Cc: Linux RAID Mailing List linux-raid@vger.kernel.org
Sent: Wednesday, January 04, 2006 9:14 AM
Subject: Re: raid5 read performance
I guess i was not clear enough.
i am using raid5 over 3 maxtor disks. the chunk
I am checking raid5 performance.
I am using asynchronous IOs with the buffer size equal to the stripe size.
In this case I am using a stripe size of 1M with 2+1 disks.
Unlike raid0, raid5 drops the performance by 50%.
Why?
Is it because it does parity checking?
thank you
--
Raz
What does wrt stand for?
On 12/29/05, Mark Overmeer [EMAIL PROTECTED] wrote:
* Raz Ben-Jehuda(caro) ([EMAIL PROTECTED]) [051229 10:10]:
I have tested the overhead of Linux raid0.
I used two SCSI Maxtor Atlas disks (147 GB) and combined them into a single
raid0 volume.
The raid is striped
look at the cpu consumption.
On 11/26/05, JaniD++ [EMAIL PROTECTED] wrote:
Hello list,
I have been searching for the bottleneck of my system, and found something that I
can't clearly understand.
I use NBD with 4 disk nodes. (The raidtab is at the bottom of the mail.)
'cat /dev/nb# > /dev/null' makes ~
-
[EMAIL PROTECTED] On Behalf Of Raz Ben-Jehuda(caro)
Sent: Sunday, November 20, 2005 6:50 AM
To: Linux RAID Mailing List
Subject: comparing FreeBSD to linux
I have evaluated which is better in terms of CPU load when dealing with raid:
FreeBSD's vinum or Linux raid.
When I issued a huge
What sort of a test is it? What filesystem?
I am reading 50 files concurrently.
Are you reading one file, or several files?
On 11/21/05, Guy [EMAIL PROTECTED] wrote:
-Original Message-
From: [EMAIL PROTECTED] [mailto:linux-raid-
[EMAIL PROTECTED] On Behalf Of Raz Ben-Jehuda(caro
Well, I have tested the disk with a new tester I have written. It seems that
the ATA driver causes the high CPU load, and not raid.
On 11/21/05, Raz Ben-Jehuda(caro) [EMAIL PROTECTED] wrote:
What sort of a test is it ? what filesystem ?
I am reading concurrently 50 files .
Are you reading one file
md: linear personality registered as nr 1
md: raid0 personality registered as nr 2
md: raid1 personality registered as nr 3
md: raid5 personality registered as nr 4
On 11/21/05, Jeff Garzik [EMAIL PROTECTED] wrote:
On Mon, Nov 21, 2005 at 10:15:11AM -0800, Raz Ben-Jehuda(caro) wrote:
Well , i have tested the disk with a new
I have encountered a weird feature of 3ware raid.
When I try to put into an existing raid a disk which
belonged to a different 3ware raid, it fails.
Any ideas, anyone?
--
Raz
Long Live the Penguin