At 10:04 AM 04/05/2005, Steven Hartland wrote:
Did you also try the sys/param.h change that helped here?

I have not yet but will soon. The tweaking certainly does make a difference in various numbers. I ran some extensive iozone tests overnight to see how those numbers are affected by these tweaks. But in terms of raw reads, here is the range. This is the same hardware, just varying how the disk is mounted, the vfs.read_max setting, and the newfs params.
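For reference, a sequential-read test of the kind summarized below can be sketched like this (the scratch-file path and size are placeholders; the actual runs read 4 GB from the RAID volume, both raw and through the filesystem):

```shell
# Illustrative dd read test -- paths and sizes are placeholders only.
dd if=/dev/zero of=/tmp/testfile bs=64k count=160 2>/dev/null  # ~10 MB scratch file
dd if=/tmp/testfile of=/dev/null bs=64k   # read it back; dd reports bytes/sec on stderr
rm -f /tmp/testfile
```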


4194304000 bytes transferred in 65.338981 secs (64192982 bytes/sec)
4194304000 bytes transferred in 53.246965 secs (78770762 bytes/sec)
4194304000 bytes transferred in 62.046088 secs (67599814 bytes/sec)
4194304000 bytes transferred in 55.313732 secs (75827536 bytes/sec)
4194304000 bytes transferred in 59.167997 secs (70888051 bytes/sec)
4194304000 bytes transferred in 56.293913 secs (74507238 bytes/sec)
4194304000 bytes transferred in 53.891288 secs (77828980 bytes/sec)
4194304000 bytes transferred in 58.828609 secs (71297011 bytes/sec)
4194304000 bytes transferred in 54.110452 secs (77513749 bytes/sec)
4194304000 bytes transferred in 54.480602 secs (76987108 bytes/sec)
4194304000 bytes transferred in 54.604255 secs (76812769 bytes/sec)
4194304000 bytes transferred in 53.150221 secs (78914141 bytes/sec)
4194304000 bytes transferred in 63.662145 secs (65883799 bytes/sec)
4194304000 bytes transferred in 54.131878 secs (77483068 bytes/sec)
4194304000 bytes transferred in 59.093488 secs (70977432 bytes/sec)
4194304000 bytes transferred in 51.723489 secs (81090895 bytes/sec)
4194304000 bytes transferred in 48.060447 secs (87271431 bytes/sec)
4194304000 bytes transferred in 58.114882 secs (72172632 bytes/sec)
4194304000 bytes transferred in 36.197407 secs (115873051 bytes/sec)
4194304000 bytes transferred in 38.453472 secs (109074780 bytes/sec)
4194304000 bytes transferred in 39.662853 secs (105748923 bytes/sec)
4194304000 bytes transferred in 35.170596 secs (119255983 bytes/sec)
4194304000 bytes transferred in 35.173053 secs (119247652 bytes/sec)
4194304000 bytes transferred in 35.241742 secs (119015230 bytes/sec)

I should get the iozone stuff summarized over the weekend.

        ---Mike


Also, when testing on the FS I found bs=1024k degraded performance;
try with 64k.
Is this a RAID volume? If so, on my setup anything other than a 16k stripe
sent performance out the window.

For the timing, it's easier to understand if you use:
/usr/bin/time -h

Last but not least, I found some very strange behaviour late last night.
If I created the RAID set but didn't power down after doing so, the
results were a lot lower than when I did. Don't ask me why, I have
no idea; I only noticed it as I retested a result after installing a new
fan in the machine and was totally at a loss.

My initial starting results were:
Write: 140MB/s
Read: 49MB/s

My current values are:
Write: 140MB/s
Read: 195MB/s

Changes made:
1. 16k RAID5 stripe instead of the 64k default
2. vfs.read_max=16
3. MAXPHYS = 256 (was 128)
4. newfs /dev/da0 (was a basic install with multiple partitions)
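Of these, item 2 is a runtime sysctl (`sysctl vfs.read_max=16`), while MAXPHYS was a compile-time constant, per the sys/param.h change mentioned at the top of the thread; the 256/128 values are in KB. A sketch of that edit:

```c
/* sys/param.h (kernel source): maximum size of a single physical I/O.
 * Sketch only -- "MAXPHYS = 256 (was 128)" above is in KB. */
#define MAXPHYS (256 * 1024)    /* default was (128 * 1024) */
```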

I'm currently seeing some VERY strange behaviour which could
be the RAID controller negotiating different PCI-X speeds.

Anyone know how to check the state of a PCI-X card?
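I'm not sure exactly what was available on 5.x/6.0, but pciconf(8) is the usual tool for poking at a card; on releases that support it, -c dumps capability registers, and a PCI-X capability there shows the negotiated bus mode (treat the flags as an assumption for older releases):

```shell
# List PCI devices verbosely; -c additionally dumps capability registers
# (PCI-X / PCI Express status) on releases that support it.
pciconf -lv
pciconf -lc
```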

Power up, login:
dd if=/dev/da0 of=/dev/null bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 42.997295 secs (152418891 bytes/sec)
mount /dev/da0 /mnt
dd if=/mnt/testfile of=/dev/null bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 48.757091 secs (134413270 bytes/sec)
shutdown -p now

Power up, login:
dd if=/dev/da0 of=/dev/null bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 28.365671 secs (231039838 bytes/sec)
mount /dev/da0 /mnt
dd if=/mnt/testfile of=/dev/null bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 32.012170 secs (204722143 bytes/sec)

   Steve
----- Original Message ----- From: "Mike Tancsa" <[EMAIL PROTECTED]>
OK, some further tests, trying to control for this. I am not sure which values to actually fiddle with via bsdlabel, as this is the entire disk, so I will just vary mounting da0 vs da0s1d. I also cvsup'd to -CURRENT as of yesterday (FreeBSD nfs.sentex.ca 6.0-CURRENT FreeBSD 6.0-CURRENT #0: Mon May 2 15:03:53 EDT 2005). I will also try with Scott's changes to the driver, which went in after that.
Also, the newfs params do seem to make a very big difference, and vfs.read_max as well.
....




_______________________________________________
freebsd-performance@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-performance
To unsubscribe, send any mail to "[EMAIL PROTECTED]"
