There seems to be a fundamental problem with writing to a level 5 RAIDframe set,
at least to the block device.
I've created five small wedges in the spared-out region of my 3TB SAS discs.
In case it matters, they are connected to an mpt(4) controller.
Then I configured a 5-component, 32-SpSU, level 5 RAID set on those wedges.
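For reference, a RAIDframe configuration file matching that description would look
roughly like the following (modelled on the raidctl(8) example; the dk numbers, the
serial number and the queue settings are invented here, only the five components,
32 sectors per stripe unit and RAID level 5 come from the setup described above):

    START array
    # numRow numCol numSpare
    1 5 0

    START disks
    /dev/dk10
    /dev/dk11
    /dev/dk12
    /dev/dk13
    /dev/dk14

    START layout
    # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
    32 1 1 5

    START queue
    fifo 100

configured and initialised along the lines of

    raidctl -v -C raid2.conf raid2
    raidctl -v -I 112358 raid2
    raidctl -v -i raid2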
On Fri, Nov 02, 2012 at 06:02:01PM +0100, Edgar Fuß wrote:
Writing to that RAID's block device (raid2d) in 64k blocks gives me a dazzling
throughput of 2.4MB/s and a dd mostly waiting in vnode.
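(The test in question was presumably something along the lines of

    dd if=/dev/zero of=/dev/raid2d bs=64k

where only the raid2d device and the 64k block size are stated above; the zero
input and the absence of a count are guesses.)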
Writing to the block device from userspace is not a good idea. How is
performance through the filesystem?
Incredibly poor, that's why I'm performing this test. I don't have a file
system on that test RAID, but on the larger one (with 128 SpSU), I can only
write 350MB/s to a file, and I have serious WAPBL performance problems.
On Fri, Nov 02, 2012 at 10:36:13PM +0100, Edgar Fuß wrote:
How is performance through the filesystem?
Incredibly poor, that's why I'm performing this test. I don't have a file
system on that test RAID, but on the larger one (with 128 SpSU), I can only
write 350MB/s to a file, and I have serious WAPBL performance problems.
350MB/sec? That doesn't sound poor to me.
Sorry, 350kB/s. That's so incredibly poor that my fingers seem to have refused
to type it.
Unfortunately, writing to the block device with 'dd' from userspace is
not really a useful test of anything.
So what should I test instead to track down the problem?
On Fri, Nov 02, 2012 at 11:06:33PM +0100, Edgar Fuß wrote:
What does file write performance look like with WAPBL turned off?
Well, I haven't tried dd'ing to a file with WAPBL turned off, but the figures
for creating a lot of files can be found further up in this thread:
On the 8k fsbsize
I think you're seeing pathological cache flush behavior.
That has been suggested (by joerg@) before.
I've tried vfs.wapbl.flush_disk_cache=0 and that doesn't change anything.
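(For anyone wanting to reproduce that: the knob can be flipped at run time with
sysctl, e.g.

    sysctl -w vfs.wapbl.flush_disk_cache=0    # disable cache flushes issued by WAPBL
    sysctl -w vfs.wapbl.flush_disk_cache=1    # back to the usual setting

the sysctl name is the one quoted above; that 1 is the default on any given
kernel is an assumption.)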
Does this controller have a nonvolatile cache?
No, it doesn't. It's a plain LSI 3081E-R.
Hello. Just for your information, and because I was curious, I tried
the following test on several raid5 systems around here.
dd if=/dev/zero of=test bs=64k count=10
NetBSD-4.x with softdep enabled came in with a write speed of about
9.8mbytes/sec.
NetBSD-5.1 on FFSv2 without softdep
I tried the following test on several raid5 systems around here.
Did you also try creating and deleting a large number of files?
Hello. I'm not sure exactly what tests you're asking about, but here
are the results from the enclosed shell script.
NetBSD-3.0 (raid1, softdep, ffsV1 atop mpt(4) sd disks)
%time sh /var/tmp/filespeed.sh
Starting run -- creating and destroying 5000 files...
5.8u 16.5s 0:18.02 124.1% 0+0k
Hello. Just to follow up on myself, here are the numbers for the
previous script with:
NetBSD-6 (raid1, ffsV1, no softdep and no log, atop wd(4) disks)
%time sh /var/tmp/filespeed.sh
Starting run -- creating and destroying 5000 files...
35.3u 73.3s 2:59.00 60.7% 0+0k 30+20308io 0pf+0w
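The filespeed.sh script itself isn't reproduced in these excerpts; a
create-and-destroy loop along the following lines would produce that kind of
output (the file count and the banner are taken from the output above;
everything else, including doing the work in two passes, is guesswork):

    #!/bin/sh
    # create and then delete a large number of small files in the current directory
    N=5000
    echo "Starting run -- creating and destroying $N files..."
    i=0
    while [ $i -lt $N ]; do
            echo "some data" > file.$i
            i=$((i + 1))
    done
    i=0
    while [ $i -lt $N ]; do
            rm file.$i
            i=$((i + 1))
    done

run as in the transcripts above with

    time sh /var/tmp/filespeed.sh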