Re: What is rx_processing_limit sysctl for Intel igb NIC driver?

2012-09-06 Thread Nikolay Denev

On Sep 4, 2012, at 7:13 PM, John Baldwin j...@freebsd.org wrote:

 On Sunday, September 02, 2012 10:41:15 pm Andy Young wrote:
 I am tuning our server that has an Intel 82576 gigabit NIC using the igb
 driver. I see a lot of posts on the net where people bump the
 rx_processing_limit sysctl from the default value of 100 to 4096. Can
 anyone tell me what this is intended to do?
 
 If you have multiple devices sharing an IRQ with igb (and thus are not using
 MSI or MSI-X), it forces the driver to more-or-less cooperatively schedule
 with the other interrupts on the same IRQ.  However, since igb uses a fast
 interrupt handler and a task on a dedicated taskqueue in the non-MSI case now,
 I think it doesn't even do that.  It should probably be set to -1 (meaning
 unlimited) in just about all cases now.
 
 -- 
 John Baldwin

And setting it to -1 gave a nice performance improvement in some tests that I
did recently. AFAIR, only after setting it to -1 was I able to reach 10-gigabit
speeds using iperf on two directly connected machines with ix(4) 82599 NICs.
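
For anyone who wants to try the same thing, a minimal sketch (the unit number,
addresses, and exact tunable names below are assumptions that can vary between
driver versions, so verify them against your own sysctl tree):

  # per-device sysctl, changed at runtime
  sysctl dev.igb.0.rx_processing_limit=-1

  # or as a boot-time tunable for all igb ports, in /boot/loader.conf
  hw.igb.rx_process_limit=-1

A quick iperf run between the two directly connected boxes shows whether it
made a difference:

  iperf -s                       # on machine A
  iperf -c 10.0.0.1 -t 60 -P 4   # on machine B: 60-second test, 4 parallel streams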


Re: Support for Fusion IO drives?

2012-08-28 Thread Nikolay Denev
From the zpool man page:

 By default, the intent log is allocated from blocks within the main pool.
 However, it might be possible to get better performance using separate
 intent log devices such as NVRAM or a dedicated disk.

I was also contemplating the idea of a fast PCIe SSD as both ZIL and L2ARC.
Given that an SSD does not suffer from mixing different types and locations of
I/O requests the way a spinning disk does, it may make sense to go with one
big SSD and partition it: a small partition for the ZIL and the rest for
L2ARC. Has anyone tried that?
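
For what it's worth, a rough sketch of what that layout could look like
(device name, partition sizes, and pool name are made up for illustration):

  # split the SSD into a small log (ZIL) partition and a large cache (L2ARC) one
  gpart create -s gpt da1
  gpart add -t freebsd-zfs -s 8G -l slog da1
  gpart add -t freebsd-zfs -l l2arc da1

  # attach both partitions to an existing pool named "tank"
  zpool add tank log gpt/slog
  zpool add tank cache gpt/l2arc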

Regards,
Nikolay

On Aug 29, 2012, at 2:47 AM, Andrew Young ayo...@mosaicarchive.com wrote:

 Thanks for the great feedback, Josh! The optimum size for an SSD ZIL device
 was still an open question for us. I'm really glad to hear that they don't
 need to be that big.

 What does ZFS do with the ZIL if there is no dedicated ZIL device? Our
 servers consist of a small SATA drive that holds the OS and a boatload of
 larger drives on a SAS bus. What I'm wondering is: if I simply replace the OS
 disk with an SSD, will I get the same performance boost as if I added a
 dedicated SSD ZIL?
 
 Thanks!
 Andy
 
 On Aug 28, 2012, at 7:07 PM, Josh Paetzel j...@tcbug.org wrote:
 
  Original Message 
 Subject: Support for Fusion IO drives?
 Date: Tue, 28 Aug 2012 16:46:00 -0400
 From: Andy Young ayo...@mosaicarchive.com
 To: freebsd-hardware@freebsd.org
 
 
 We are investigating adding SSDs as ZIL devices to boost our ZFS write
 performance. I read an article a while ago about iX Systems teaming up with
 Fusion IO to integrate their hardware with FreeBSD. Does anyone know
 anything about supported drivers for Fusion IO's iodrives?
 
 Thanks!
 
 Andy
 
 I'll put on my iXsystems hat here, as well as my fast storage, ZFS and
 Fusion-I/O hat.
 
 The ZFS filesystem supports dedicated ZIL devices, which can accelerate
 certain types of write requests, notably related to fsync.  The VMWare
 NFS client issues a sync with every write, and most databases do as
 well.  In those types of environments having a fast dedicated ZIL device
 is almost essential.  In other environments the benefits of a dedicated
 ZIL range from non-existent to substantial.
 
 A good dedicated ZIL device is all about latency.  It doesn't need to be
 large; in fact it will only ever hold about 10 seconds of writes, so 10
 seconds' worth of network bandwidth is the worst case.  (In most environments
 this means 20GB is larger than needed.)
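 To put rough numbers on that: a saturated gigabit link moves about 125 MB/s,
 so 10 seconds of writes is on the order of 1.25 GB; even a saturated
 10-gigabit link (roughly 1.25 GB/s) works out to about 12.5 GB, which is why
 20GB is already more than enough in most environments.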
 
 Fusion-I/O cards are far too large to be cost effective ZIL devices.
 Even though they do rock at I/O latency, the really fast ones are also
 fairly large, so the $/GB on them isn't so attractive.  There are better
 options for ZIL devices.
 
 Another consideration is the Fusion-I/O driver is fairly memory hungry,
 which competes with memory ZFS wants to use for read caching.
 
 Now as an L2ARC device, that's a whole different can of worms.
 
 Command line used: iozone -r 4k -s 96g -i 0 -i 1 -i 2 -t 8
 Parent sees throughput for 8 readers = 1712399.95 KB/sec

 L2 ARC Breakdown:                        197.45m
     Hit Ratio:                   98.61%  194.71m
 L2 ARC Size: (Adaptive)                  771.13 GiB
 ARC Efficiency:                          683.40m
     Actual Hit Ratio:            71.09%  485.82m
 
 ~ 800GB test data, all served from cache.
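
 For reference, the figures above are in the style of zfs-stats/arc_summary
 output; the raw counters behind them can also be watched directly via sysctl
 on FreeBSD's ZFS port (exact node names may differ between versions, so
 verify them on your system):

   sysctl kstat.zfs.misc.arcstats.l2_size
   sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses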
 
 If you are considering Fusion-I/O, the FreeBSD driver is generally not
 released to the general public by Fusion-I/O, but can be obtained from
 various partners.  (I believe iXsystems is the only FreeBSD-friendly
 Fusion-I/O partner, but I could be wrong about that.)
 
 
 -- 
 Thanks,
 
 Josh Paetzel
 


Re: Areca vs. ZFS performance testing.

2009-01-08 Thread Nikolay Denev



On 8 Jan, 2009, at 02:33, Danny Carroll wrote:


I'd like to post some results of what I have found with my tests.
I did a few different types of tests.  Basically a set of 5-disk tests
and a set of 12-disk tests.

I did this because I only had 5 ports available on my onboard controller,
and I wanted to see how the Areca compared to that.  I also wanted to
see comparisons between JBOD, Passthru and hardware RAID5.

I have not tested raid6 or raidz2.

You can see the results here:
http://www.dannysplace.net/quickweb/filesystem%20tests.htm

An explanation of each of the tests:

  ICH9_ZFS                    5-disk ZFS raidz test with onboard SATA ports.
  ARECAJBOD_ZFS               5-disk ZFS raidz test with Areca SATA ports
                              configured in JBOD mode.
  ARECAJBOD_ZFS_NoWriteCache  5-disk ZFS raidz test with Areca SATA ports
                              configured in JBOD mode and with disk caches
                              disabled.
  ARECARAID                   5-disk ZFS single-disk test on an Areca RAID5
                              array.
  ARECAPASSTHRU               5-disk ZFS raidz test with Areca SATA ports
                              configured in Passthru mode, so the onboard
                              Areca cache is active.
  ARECARAID-UFS2              5-disk UFS2 single-disk test on an Areca RAID5
                              array.
  ARECARAID-BIG               12-disk ZFS single-disk test on an Areca RAID5
                              array.
  ARECAPASSTHRU_12            12-disk ZFS raidz test with Areca SATA ports
                              configured in Passthru mode, so the onboard
                              Areca cache is active.


I'll probably be opting for the ARECAPASSTHRU_12 configuration, mainly
because I do not need amazing read speeds (the network port would be
saturated anyway) and I think that the raidz implementation would be
more fault tolerant.  By that I mean that if you have a disk read error
during a rebuild then, as I understand it, raidz will write off that
block (and hopefully tell me about the dead files) but continue with the
rest of the rebuild.

This is something I'd love to test for real, just to see what happens,
but I am not sure how I could do that.  Perhaps removing one drive, then
making a few random writes to a remaining disk (or two) and seeing how it
goes with a rebuild.
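
A rough sketch of how that test could go using ZFS's own tools; the pool and
device names are made up, and deliberately corrupting a disk like this is only
something to do on a scratch pool:

  zpool offline tank da2           # simulate the removed/failed drive
  sysctl kern.geom.debugflags=16   # allow writes to an in-use provider ("foot-shooting" mode)
  dd if=/dev/random of=/dev/da3 bs=1m count=64 seek=4096   # scribble over part of a remaining disk
  zpool online tank da2            # bring the drive back and let it resilver
  zpool scrub tank                 # force a full pass over all of the data
  zpool status -v tank             # unrecoverable blocks are listed with the affected file names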

Something else worth mentioning: when I converted from JBOD to
passthrough, I was able to re-import the disks without any problems.
This must mean that the Areca passthrough option does not alter the disk
much, perhaps not at all.

After a 21 hour rebuild I have to say I am not that keen to do more of
these tests, but if there is something someone wants to see, then I'll
definitely consider it.

One thing I am at a loss to understand is why turning off the disk
caches when testing the JBOD performance produced almost identical (very
slightly better) results.  Perhaps it was a case of the ZFS internal
cache making the disks' caches redundant?  Comparing to the Areca
passthrough (where the Areca cache is used) shows, again, similar
results.


-D



There is a big difference between hardware RAID and ZFS raidz with 12 disks
on the get_block test; maybe it would be interesting to rerun this test with
ZFS prefetch disabled?
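
For reference, prefetch can be disabled with a loader tunable; the name below
is the one used by FreeBSD's ZFS port, so double-check it on your version:

  # in /boot/loader.conf, takes effect after a reboot
  vfs.zfs.prefetch_disable=1

Depending on the version it may also be settable at runtime via sysctl.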


--
Regards,
Nikolay Denev






Re: Areca vs. ZFS performance testing.

2008-11-13 Thread Nikolay Denev



On 13 Nov, 2008, at 15:59, Danny Carroll wrote:
[snip]


It is entirely possible.  I do not know, however, if the Areca cache works
just for RAID or also in JBOD mode.



I think some RAID controllers do not use the cache when you export the disks
as pass-thru/JBOD, but on some controllers you can work around this by making
every disk a RAID0 (stripe) array with only one disk.
Dunno if that would work on the Areca...

[snip]

--
Regards,
Nikolay Denev



