On Sunday, 29 October 2006 at 23:05:32 -0800, R. B. Riddick wrote:
> --- Greg 'groggy' Lehey <[EMAIL PROTECTED]> wrote:
>> "Sufficiently large data blocks" equates to several megabytes.
>> Currently MAXPHYS, the largest transfer request that would get to the
>> bio layer, is 131072 bytes. This would imply a stripe size of not
>> more than 32 kB for a five disk array, w
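The arithmetic behind that stripe-size figure can be checked directly (a minimal sketch using the MAXPHYS value and the five-disk array from the message above; `data_disks` is my label, not a vinum identifier):

```python
# RAID-5 stripe sizing: for a full-stripe write to fit in a single
# transfer request, (disks - 1) data stripes must fit inside MAXPHYS,
# since one stripe's worth of space per stripe set holds parity.
MAXPHYS = 131072          # largest transfer reaching the bio layer, bytes
disks = 5                 # five-disk array from the example above
data_disks = disks - 1

max_stripe = MAXPHYS // data_disks
print(max_stripe)         # 32768 bytes, i.e. 32 kB
```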
On Monday, 30 October 2006 at 7:11:29 +0200, Petri Helenius wrote:
> Greg 'groggy' Lehey wrote:
>> Single stream tests aren't very good examples for RAID-5, because it
>> performs writes in two steps: first it reads the old data, then it
>> writes the new data.
>
> If it really does it this way, instead of doing write-only when writing
> sufficiently large blocks, that would e
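The two write paths under discussion can be sketched with XOR parity (a minimal illustration of the technique, not vinum's actual code; the function names and block values are made up):

```python
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Small write (read-modify-write): read old data and old parity, then
# write new data and new parity -- two reads plus two writes per update.
def small_write_parity(old_data, new_data, old_parity):
    return xor(xor(old_parity, old_data), new_data)

# Full-stripe write: parity is computed from the new data alone, so no
# reads are needed -- the "write-only" path for sufficiently large blocks.
def full_stripe_parity(stripes):
    return reduce(xor, stripes)

# The two paths agree: updating one stripe via read-modify-write yields
# the same parity as recomputing it over the whole new stripe.
stripes = [bytes([1, 2]), bytes([3, 4]), bytes([5, 6]), bytes([7, 8])]
parity = full_stripe_parity(stripes)
new0 = bytes([9, 10])
rmw = small_write_parity(stripes[0], new0, parity)
full = full_stripe_parity([new0] + stripes[1:])
print(rmw == full)  # True
```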
On Sunday, 29 October 2006 at 11:20:33 -0600, Steve Peterson wrote:
> Petri -- thanks for the idea.
It would be a good idea to quote it. Following this thread is almost
impossible.
> I ran 2 dds in parallel; they took roughly twice as long in clock
> time, and had about 1/2 the throughput of the
On Saturday, 28 October 2006 at 22:19:17 +0300, Petri Helenius wrote:
>
> According to my understanding vinum does not overlap requests to
> multiple disks when running in raid5 configuration
Yes, it does. I suspect that gvinum does too.
> so you're not going to achieve good numbers with just "s
Steve Peterson wrote:
> I guess the fundamental question is this -- if I have a 4 disk
> subsystem that supports an aggregate ~100MB/sec transfer raw to the
> underlying disks, is it reasonable to expect a ~5MB/sec transfer rate
> for a RAID5 hosted on that subsystem -- a 95% overhead.

Absolutely not,
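One back-of-the-envelope model for how a single-stream RAID-5 write can fall so far below raw aggregate bandwidth (an assumption-laden sketch using the classic small-write penalty, not a measurement of vinum; whether this penalty should apply here is exactly what the thread is debating):

```python
raw_aggregate = 100.0        # MB/s across the 4-disk subsystem, from above
per_disk = raw_aggregate / 4

# Classic RAID-5 small-write penalty: each logical write costs four disk
# I/Os (read old data, read old parity, write data, write parity).  If a
# single stream serializes those I/Os instead of overlapping them, the
# effective rate is one disk's bandwidth divided by four.
ios_per_write = 4
serialized = per_disk / ios_per_write
print(serialized)            # 6.25 MB/s, the same order as the ~5 MB/s seen
```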
Steve Peterson wrote:
Petri -- thanks for the idea.

I ran 2 dds in parallel; they took roughly twice as long in clock
time, and had about 1/2 the throughput of the single dd. On my system
it doesn't look like how the work is offered to the disk subsystem
matters.

This is the thing I did wit

# time dd if=/dev/zero of=blort1 bs=1m count=100
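The throughput figures from runs like that follow from dd's operands and the wall-clock time (a trivial helper; the 100 x 1 MB transfer size comes from the command above, but the elapsed times here are made-up placeholders, not Steve's measurements):

```python
def throughput_mb_s(blocks, block_mb, seconds):
    # dd moves blocks * block_mb megabytes; divide by wall-clock time.
    return blocks * block_mb / seconds

# Hypothetical timings: if each of two parallel dds takes roughly twice
# as long as a single dd, per-stream throughput halves while the
# aggregate stays flat -- matching the observation above.
single = throughput_mb_s(100, 1, 20.0)    # one dd finishing in 20 s
parallel = throughput_mb_s(100, 1, 40.0)  # each of two dds in 40 s
print(single, parallel, 2 * parallel)     # 5.0 2.5 5.0
```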