In message <[EMAIL PROTECTED]>, Bruce Evans writes:
>> The solution is to re-engineer the way that I/O buffers pass through
>> the kernel and only assign KVA when needed (for doing software parity
>> calculations, for example).
>
>How would it yield anything except complexity and negative performa
On Sun, 8 May 2005, Scott Long wrote:
Changing MAXPHYS is very dangerous, unfortunately. The root of the
problem is that kernel virtual memory (KVA) gets assigned to each I/O
buffer as it passes through the kernel. If we allow too much I/O through
at once then we have the very real possibility of
In message <[EMAIL PROTECTED]>, Scott Long writes:
>The solution is to re-engineer the way that I/O buffers pass through
>the kernel and only assign KVA when needed (for doing software parity
>calculations, for example).
I've hacked up a prototype to do this and there is no doubt that this is the
Steven Hartland wrote:
Summary of results:
RAID0:
Changing vfs.read_max 8 -> 16 and MAXPHYS 128k -> 1M
increased read performance significantly from 129MB/s to 199MB/s
Max raw device speed here was 234MB/s
FS -> Raw device: 35MB/s 14.9% performance loss
RAID5: Changing vfs.read_max 8 -> 16 produced
At 10:04 AM 04/05/2005, Steven Hartland wrote:
Did you also try the sys/param.h change that helped here.
I have not yet but will soon. The tweaking certainly does make a difference
in various numbers. I ran some extensive iozone tests overnight to see
how those numbers are affected by these twe
Did you also try the sys/param.h change that helped here.
Also when testing on FS I found bs=1024k to degrade performance
try with 64k.
Is this a raid volume? If so, on my setup anything other than a 16k stripe
and performance went out the window.
For the 'time' it's easier to understand if you use:
/u
At 05:31 PM 02/05/2005, Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Mike Tancsa writes:
>Using an amrc controller in RAID5, it doesn't make a difference really on
>the dd stuff - read or write-- perhaps 2-5MB difference on the faster
>side. Raw reads and writes whether I use da0 or da0
Summary of results:
RAID0:
Changing vfs.read_max 8 -> 16 and MAXPHYS 128k -> 1M
increased read performance significantly from 129MB/s to 199MB/s
Max raw device speed here was 234MB/s
FS -> Raw device: 35MB/s 14.9% performance loss
RAID5:
Changing vfs.read_max 8 -> 16 produced a small increase
129M
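For readers wanting to reproduce the tuning above, a minimal sketch of the two changes. The values come straight from the summary; per the posters, vfs.read_max is a runtime sysctl while MAXPHYS was, on FreeBSD of this era, a compile-time constant in sys/param.h requiring a kernel rebuild.

```shell
# The two changes from the summary, expressed numerically. The values
# are from the thread; the mechanics (sysctl vs. sys/param.h edit plus
# kernel rebuild) are as the posters describe them, for FreeBSD 5.x.
OLD_MAXPHYS=$((128 * 1024))    # 131072 bytes, the era's default
NEW_MAXPHYS=$((1024 * 1024))   # 1048576 bytes, the value tested above
echo "MAXPHYS: $OLD_MAXPHYS -> $NEW_MAXPHYS bytes (edit sys/param.h, rebuild kernel)"
echo "vfs.read_max: 8 -> 16 (sysctl vfs.read_max=16 at runtime)"
```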
On Mon, 2 May 2005, Poul-Henning Kamp wrote:
> In message <[EMAIL PROTECTED]>, Eric Anderson writes:
>
> >Don't mean to be terse here, but I'm talking about the same test done on
> >two different RAID5 configurations, with different disks, and not just
> >me - other users in this very thread see
> All partitions / file system creation was done using the OS default
> tool i.e. FreeBSD: sysinstall, Suse: yast
Perhaps the defaults are a bit dangerous. If you have the time, could
you post results with freebsd mounting asynchronously (add async as an
option in the fstab) and linux mounting
AW> I can produce the same effect here with
AW> dd if=/dev/zero of=/dev/null bs=400k count=1
AW> (no disc involved). It gets very nasty with bs=1m
I understand it; I want to know how to give more priority to the
sound subsystem instead of other device I/O.
--
Best regards,
Andrey
--- Andrey Smagin <[EMAIL PROTECTED]> wrote:
> driver :) ) or call of SB buffer fill. How to diagnose what
> uninterruptable
> process (or system function) can lock the CPU for so much time
> (1 sound buffer - 4096 bytes 4096/176400=~23ms) ? And, please,
> help me with
> tuning of sound performanc
Hi ALL,
I'm not sure that it's only low disk performance, but:
When I copy the /usr/src tree from the file server (AMD K6-2 225) to my Duron
1133 (1GB RAM), the speed is about 3-4MBytes/s. But music played on my PC is
somewhat gappy. Also, removing the /usr/ports tree has much the same effect on the music. I set
large b
On 05/02/05 23:37, Petri Helenius wrote:
Robert Watson wrote:
The next thing that would be quite nice to measure is the rate of I/O
transactions per second we can get to the disk using the disk device
directly, with a minimal transaction size. I have a vague
recollection that you have to be car
Petri Helenius wrote:
I noticed that changing vfs.read_max from the default 8 to 16 has a
dramatic effect on sequential read performance. Increasing it further
did not have measurable effect.
Increasing MAXPHYS in sys/param.h from 128k to 1M increased sequential
read thruput on my MegaRAID 1600
Robert Watson wrote:
The next thing that would be quite nice to measure is the rate of I/O
transactions per second we can get to the disk using the disk device
directly, with a minimal transaction size. I have a vague
recollection that you have to be careful in Linux because their
character de
On 5/2/2005 4:56 PM, Jonathan Noack wrote:
Look at the difference in sys times for raw vs. filesystem reads. With
raw we're at 2.73s while reading from the filesystem requires 12.33s!
From my position of complete ignorance that seems like a lot...
Indeed, that's why I hit on using time as well as just
On 5/2/2005 4:56 PM, Steven Hartland wrote:
- Original Message - From: "Poul-Henning Kamp" <[EMAIL PROTECTED]>
On -current and 5.4 you don't have to make partitions if you
intend to use the entire disk (and provided you don't want
to boot from it). You can simply:
newfs /dev/da0
mount /dev
- Original Message - >
Raw read:
/usr/bin/time -h dd of=/dev/null if=/dev/da0 bs=64k count=10
10+0 records in
10+0 records out
655360 bytes transferred in 32.028544 secs (204617482 bytes/sec)
32.02s real 0.02s user 2.73s sys
Out of
--- Steven Hartland <[EMAIL PROTECTED]> wrote:
> - Original Message -
> From: "Poul-Henning Kamp" <[EMAIL PROTECTED]>
> > On -current and 5.4 you don't have to make partitions if you
> > intend to use the entire disk (and provided you don't want
> > to boot from it). You can simply:
> >
- Original Message -
From: "Poul-Henning Kamp" <[EMAIL PROTECTED]>
On -current and 5.4 you don't have to make partitions if you
intend to use the entire disk (and provided you don't want
to boot from it). You can simply:
newfs /dev/da0
mount /dev/da0 /where_ever
/dev/da0: 1526216.3MB (31
On 5/2/2005 3:43 PM, Steven Hartland wrote:
Nope, that's 5.4-STABLE; this should be at the very least
260Mb/s as that's what the controller has been measured at on
Linux, even through the FS.
Um... not quite. That was the number you listed for S/W RAID5. In that
case you're not benchmarking the contr
On 5/2/2005 3:43 PM, Steven Hartland wrote:
- Original Message - From: "Poul-Henning Kamp" <[EMAIL PROTECTED]>
To eliminate various parts of the subsystems I've just tested:
dd if=/dev/da0 of=/dev/null bs=64k count=10
Read: 220Mb/s
This is a very interesting number to measure, you'll ne
- Original Message -
From: "Robert Watson" <[EMAIL PROTECTED]>
I'm not sure if we've seen Linux and FreeBSD dmsg output yet, but
if nothing else it would be good to confirm if the drivers on both systems
negotiate the same level of throughput to each drive.
Both drivers ( FreeBSD and Li
In message <[EMAIL PROTECTED]>, Sten Spans writes:
>>> What about disk arrays that support RAID3?
>>
>> Would work for me, but most of them are dumbed down when they do RAID3:
>> they have to hard format the disks to 128 byte sector sizes and similar
>> madness in order to support 512 bytes secto
On Mon, 2 May 2005, Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Eric Anderson writes:
In message <[EMAIL PROTECTED]>, Allen writes:
I just want to add: This is why I really would love for us to have
a real RAID3 implementation.
RAID3 is not commercially viable because windows cannot us
On Mon, 2 May 2005, Poul-Henning Kamp wrote:
I'm quite willing to test and optimise things but so far no one has
had any concrete suggestions on what to try.
First thing I heard about this was a few hours ago. (Admittedly, my
email has been in a sucky state last week, so that is probably my own
- Original Message -
From: "Poul-Henning Kamp" <[EMAIL PROTECTED]>
0. Does the user know enough about what he is doing.
I'm no expert but then again I'm no beginner either :)
1. Write performance being nearly 3x that of read performance
2. Read performance only equalling that of single disk
In message <[EMAIL PROTECTED]>, "Steven Hartland"
writes:
>> As such that is a fair end-user benchmark, but unfortunately it
>> doesn't really tell us anything useful for the purpose of this
>> discussion.
>
>Yes but the end-user performance is really the only thing that matters.
>There are two
- Original Message -
From: "Poul-Henning Kamp" <[EMAIL PROTECTED]>
Ok, from what you're saying it sounds like RAID on FreeBSD is useless
apart from creating large disks. Now to the damaging facts: the results
from my two days' worth of testing:
Now, cool down a moment and let's talk about what you
It may make sense to look at measured bandwidth as a
percentage of *guaranteed not to exceed* bandwidth of the
disk setup -- what is the theoretical max bandwidth writing
to a raw partition (and assuming zero cpu overhead, latency,
seek time)? This will help in figuring out how to maximize
end-to-
In message <[EMAIL PROTECTED]>, "Steven Hartland"
writes:
>Ok, from what you're saying it sounds like RAID on FreeBSD is useless
>apart from creating large disks. Now to the damaging facts: the results
>from my two days' worth of testing:
Now, cool down a moment and let's talk about what you _really_ have
Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Petri Helenius writes:
My tests were using RAID10 and just striping. (RAID0 might be the right
name for it)
Same thing applies, and it depends on how the request alignment/size and
stripe alignment/size interacts.
I'm using either
- Original Message -
From: "Poul-Henning Kamp" <[EMAIL PROTECTED]>
Interesting stuff so:
1. How do we test if this is happening?
Calculate by hand what the offset of the striped/raid part of the disk
is (ie: take slice+partition stats into account).
How's that done? An explained example w
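The by-hand alignment check being asked about can be sketched in shell. All the numbers below are hypothetical examples (the classic 63-sector MBR slice offset, a 64k stripe), not figures from the thread; on a real system you would read them from fdisk/bsdlabel output.

```shell
#!/bin/sh
# Hedged example of the by-hand check: is the partition's absolute
# byte offset a multiple of the RAID stripe size? All numbers below
# are hypothetical; read the real ones from fdisk/bsdlabel output.
SLICE_START=63          # sectors before the slice (typical MBR layout)
PART_START=0            # partition offset within the slice, in sectors
SECTOR=512              # bytes per sector
STRIPE=$((64 * 1024))   # stripe size in bytes (64k, as an example)

ABS=$(( (SLICE_START + PART_START) * SECTOR ))
REM=$(( ABS % STRIPE ))
if [ "$REM" -eq 0 ]; then
    echo "aligned"
else
    echo "misaligned by $REM bytes"
fi
```

With these example numbers it prints "misaligned by 32256 bytes": every stripe-sized filesystem read then straddles two stripes and costs extra disk operations, which is the effect being discussed.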
At 10:38 5/2/2005, Arne Wörner wrote:
--- Allen <[EMAIL PROTECTED]> wrote:
> Scenario B, verified read enabled:
> 1. RAID card reads up ALL blocks in the stripe (5 reads).
> 2. RAID card pretends the block requested is on a "degraded"
> drive, and
> calculates it from the other 3 + the XOR stripe
--- Allen <[EMAIL PROTECTED]> wrote:
> Scenario B, verified read enabled:
> 1. RAID card reads up ALL blocks in the stripe (5 reads).
> 2. RAID card pretends the block requested is on a "degraded"
> drive, and
> calculates it from the other 3 + the XOR stripe.
> 3. RAID card reports the value back
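The degraded-read reconstruction in scenario B rests on XOR parity: any one block can be rebuilt from the remaining blocks plus the parity stripe. A toy shell-arithmetic illustration (the data values are arbitrary small integers; a real controller does this over whole blocks, byte by byte):

```shell
#!/bin/sh
# Toy XOR-parity demo of scenario B above. d0..d2 stand in for data
# blocks, p for the parity block; values are arbitrary examples.
d0=11; d1=22; d2=33
p=$(( d0 ^ d1 ^ d2 ))          # parity written alongside the data
rebuilt_d1=$(( d0 ^ d2 ^ p ))  # "degraded" read: rebuild d1 without reading it
echo "parity=$p rebuilt_d1=$rebuilt_d1"
# prints: parity=60 rebuilt_d1=22
```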
In message <[EMAIL PROTECTED]>, Eric Anderson writes:
>> In message <[EMAIL PROTECTED]>, Allen writes:
>>
>> I just want to add: This is why I really would love for us to have
>> a real RAID3 implementation.
>>
>> RAID3 is not commercially viable because windows cannot use non-512
>> byte sectors
At 10:14 5/2/2005, Arne Wörner wrote:
--- Allen <[EMAIL PROTECTED]> wrote:
> Also you should keep in mind, there could simply be some really
> goofy
> controller option enabled, that forces the RAID5 to behave in a
> "degraded"
> state for reads -- forcing it to read up all the other disks in
> t
Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Allen writes:
I just want to add: This is why I really would love for us to have
a real RAID3 implementation.
RAID3 is not commercially viable because windows cannot use non-512
byte sectors.
We can.
RAID3 would scream for us.
What about disk
In message <[EMAIL PROTECTED]>, Allen writes:
I just want to add: This is why I really would love for us to have
a real RAID3 implementation.
RAID3 is not commercially viable because windows cannot use non-512
byte sectors.
We can.
RAID3 would scream for us.
--
Poul-Henning Kamp | UNIX
--- Allen <[EMAIL PROTECTED]> wrote:
> Also you should keep in mind, there could simply be some really
> goofy
> controller option enabled, that forces the RAID5 to behave in a
> "degraded"
> state for reads -- forcing it to read up all the other disks in
> the stripe
> and calculate the XOR aga
--- Eric Anderson <[EMAIL PROTECTED]> wrote:
> Arne Wörner wrote:
> > --- Robert Watson <[EMAIL PROTECTED]> wrote:
> >>On Sat, 30 Apr 2005, Arne Wörner wrote:
> >>
> >>>3. The man page geom(4) of R5.3 says "The GEOM framework
> >>> provides an infrastructure in which "classes" can perform
In message <[EMAIL PROTECTED]>, Eric Anderson writes:
>If I write a 10GB file to disk (RAID array has 1GB cache, system has 1GB
>memory), then I should definitely see better read performance reading
>that same file back to /dev/null than writing it, right?
Nope, quite the contrary: you will get
>Interesting stuff so:
>1. How do we test if this is happening?
Calculate by hand what the offset of the striped/raid part of the disk
is (ie: take slice+partition stats into account).
>2. How do we prevent it from happening?
Make sure that the first sector of a partition/slice is always the f
At 09:28 5/2/2005, Steven Hartland wrote:
- Original Message - From: "Poul-Henning Kamp" <[EMAIL PROTECTED]>
Wouldn't this be a problem for writes then too?
I presume you would only compare read to write performance on a RAID5
device which has battery backed cache.
Without a battery backed
In message <[EMAIL PROTECTED]>, Petri Helenius writes:
>>
>My tests were using RAID10 and just striping. (RAID0 might be the right
>name for it)
Same thing applies, and it depends on how the request alignment/size and
stripe alignment/size interacts.
--
Poul-Henning Kamp | UNIX since Zil
- Original Message -
From: "Poul-Henning Kamp" <[EMAIL PROTECTED]>
Wouldn't this be a problem for writes then too?
I presume you would only compare read to write performance on a RAID5
device which has battery backed cache.
Without a battery backed cache (or pretending to have one) RAID5
w
Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Eric Anderson writes:
Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Eric Anderson writes:
Don't mean to be terse here, but I'm talking about the same test done on
two different RAID5 configurations, with different disks, and not
--- Poul-Henning Kamp <[EMAIL PROTECTED]> wrote:
> Uhm, if you are using RAID5 and your requests are not aligned
> and sized after the RAID5 you should *expect* read performance
> to be poor.
>
Wouldn't that affect both (read and write) in the same way?
> If the disk has bad sectors or other hardw
- Original Message -
From: "Poul-Henning Kamp" <[EMAIL PROTECTED]>
Don't mean to be terse here, but I'm talking about the same test done on
two different RAID5 configurations, with different disks, and not just
me - other users in this very thread see the same issue..
Uhm, if you are usi
Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Eric Anderson writes:
I'll be honest here, I don't care much if the speed difference between
4.X and 5.X is measurable, or whatever. What I find is a little
telling of an issue somewhere, is that READS are slower than WRITES!
This is
In message <[EMAIL PROTECTED]>, Eric Anderson writes:
>Poul-Henning Kamp wrote:
>> In message <[EMAIL PROTECTED]>, Eric Anderson writes:
>>
>>
>>>Don't mean to be terse here, but I'm talking about the same test done on
>>>two different RAID5 configurations, with different disks, and not just
>>
Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Eric Anderson writes:
Don't mean to be terse here, but I'm talking about the same test done on
two different RAID5 configurations, with different disks, and not just
me - other users in this very thread see the same issue..
Uhm, if
kama wrote:
/dev/null is not the issue... a test program I wrote that only reads
data into a buffer in memory showed the same results as doing a dd to
/dev/null.
And dd from zero to null does:
114541264896 bytes transferred in 27.716454 secs (4132608911 bytes/sec)
Pete
/Bjorn
__
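The zero-to-null figure quoted above is useful as a disk-free baseline: it bounds dd throughput by syscall and memory-copy overhead alone, with no I/O subsystem involved. A small runnable version (block size and count are arbitrary choices, not the poster's):

```shell
# No disk involved: /dev/zero -> /dev/null exercises only the kernel's
# copy path, so its rate is an upper bound for any dd disk benchmark
# on the same machine. bs/count move 1 GiB through the kernel here.
dd if=/dev/zero of=/dev/null bs=64k count=16384 2>/dev/null
echo "copied $((64 * 1024 * 16384)) bytes"
```

Time the same command (e.g. with /usr/bin/time -h on FreeBSD) and compare against the raw-device and filesystem numbers earlier in the thread.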
Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Eric Anderson writes:
Don't mean to be terse here, but I'm talking about the same test done on
two different RAID5 configurations, with different disks, and not just
me - other users in this very thread see the same issue..
Uhm, if you ar
In message <[EMAIL PROTECTED]>, Eric Anderson writes:
>Don't mean to be terse here, but I'm talking about the same test done on
>two different RAID5 configurations, with different disks, and not just
>me - other users in this very thread see the same issue..
Uhm, if you are using RAID5 and your
Poul-Henning Kamp wrote:
In message <[EMAIL PROTECTED]>, Eric Anderson writes:
I'll be honest here, I don't care much if the speed difference between
4.X and 5.X is measurable, or whatever. What I find is a little
telling of an issue somewhere, is that READS are slower than WRITES!
This is tot
It's highly unlikely that the 4 people on different hardware who have tested
this all have disks with bad sectors.
I've just finished doing a full battery of tests across:
FreeBSD: 4.11-RELEASE, 5.4-STABLE, 6.0-CURRENT, Suse 9.1
I'll post the results soon but suffice to say the results for FreeBSD
In message <[EMAIL PROTECTED]>, Eric Anderson writes:
>
>I'll be honest here, I don't care much if the speed difference between
>4.X and 5.X is measurable, or whatever. What I find is a little
>telling of an issue somewhere, is that READS are slower than WRITES!
>This is totally bogus to me -
On Mon, 2 May 2005, Eric Anderson wrote:
> I'll be honest here, I don't care much if the speed difference between
> 4.X and 5.X is measurable, or whatever. What I find is a little
> telling of an issue somewhere, is that READS are slower than WRITES!
> This is totally bogus to me - dd'ing a fi
Arne Wörner wrote:
--- Robert Watson <[EMAIL PROTECTED]> wrote:
On Sat, 30 Apr 2005, Arne Wörner wrote:
3. The man page geom(4) of R5.3 says "The GEOM framework
provides an infrastructure in which "classes" can perform
transformations on disk I/O requests on their path
from the upper kernel to
--- Robert Watson <[EMAIL PROTECTED]> wrote:
> On Sat, 30 Apr 2005, Arne Wörner wrote:
> > 3. The man page geom(4) of R5.3 says "The GEOM framework
> > provides an infrastructure in which "classes" can perform
> > transformations on disk I/O requests on their path
> > from the upper kernel
On Sat, 30 Apr 2005, Arne Wörner wrote:
3. The man page geom(4) of R5.3 says "The GEOM framework
provides an infrastructure in which "classes" can perform
transformations on disk I/O requests on their path
from the upper kernel to the device drivers and back.
Could it be, that geom slows so
--- Arne Wörner <[EMAIL PROTECTED]> wrote:
> --- Petri Helenius <[EMAIL PROTECTED]> wrote:
> > Eric Anderson wrote:
> > I'm seeing similar sequential performance on RELENG_5_3
> > and RELENG_5_4 on dual-Xeons using 3ware controllers so
> > it does not seem to be a driver issue [...]
> >
> Why?
>
I