Re: ZFS - abysmal performance with samba since upgrade to 8.2-RELEASE

2011-03-21 Thread Sergey Gavrilov
Yes, I know. But I have a lot of memory and I want it to be used if it
improves performance. I tried to test the whole system, not just the
filesystem. And I was surprised by how much vfs.zfs.prefetch_disable="1"
affects things.
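
For reference, one way to keep the ARC out of such a test is to read a file
larger than physical RAM, something like this (a rough sketch; the file name
and the ~24GB size are only illustrative for a 16GB machine, and this assumes
compression is off, which is the default):

dd if=/dev/zero of=/pool1/test/bigfile bs=1m count=24576
dd if=/pool1/test/bigfile of=/dev/null bs=1m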

2011/3/18 Paul Mather 

>
> On Mar 18, 2011, at 5:53 AM, Sergey Gavrilov wrote:
>
> > vfs.zfs.prefetch_disable="1" causes much slower reads on sequential data
> > in my case.
> >
> > with vfs.zfs.prefetch_disable="1":
> >
> > dd if=/pool1/test/idisk1 of=/dev/null bs=1m count=500
> > 524288000 bytes transferred in 18.347177 secs (28575949 bytes/sec)
> >
> > with vfs.zfs.prefetch_disable="0":
> >
> > dd if=/pool1/test/idisk1 of=/dev/null bs=1m count=500
> > 524288000 bytes transferred in 3.331806 secs (157358504 bytes/sec)
> >
> > after a few seconds:
> > dd if=/pool1/test/idisk1 of=/dev/null bs=1m count=500
> > 524288000 bytes transferred in 0.107767 secs (4865009592 bytes/sec)
>
> The last dd performance figures are undoubtedly reflecting a read from data
> cached in RAM.  You should do your test on a file that is bigger than your
> RAM size, or at least bigger than the amount of RAM you are dedicating to
> ARC.
>
> Other than that, ZFS prefetching *is* supposed to speed up sequential
> accesses, so it's no surprise you notice a speedup in such cases. :-)
>
> Cheers,
>
> Paul.
>
>


-- 
Best regards,
Sergey Gavrilov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS - abysmal performance with samba since upgrade to 8.2-RELEASE

2011-03-18 Thread Sergey Gavrilov
vfs.zfs.prefetch_disable="1" causes much slower reads on sequential data in
my case.

with vfs.zfs.prefetch_disable="1":

dd if=/pool1/test/idisk1 of=/dev/null bs=1m count=500
524288000 bytes transferred in 18.347177 secs (28575949 bytes/sec)

with vfs.zfs.prefetch_disable="0":

dd if=/pool1/test/idisk1 of=/dev/null bs=1m count=500
524288000 bytes transferred in 3.331806 secs (157358504 bytes/sec)

after a few seconds:
dd if=/pool1/test/idisk1 of=/dev/null bs=1m count=500
524288000 bytes transferred in 0.107767 secs (4865009592 bytes/sec)

My system:

FreeBSD 8.2-RELEASE

hw.machine: amd64
hw.model: Intel(R) Xeon(R) CPU   E5520  @ 2.27GHz
hw.ncpu: 8
hw.physmem: 17155719168

3ware 9690sa controller
Disks: ST32000445SS

pool: pool1
        NAME        STATE     READ WRITE CKSUM
        pool1       ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            da12    ONLINE       0     0     0
            da11    ONLINE       0     0     0
            da10    ONLINE       0     0     0
            da9     ONLINE       0     0     0
            da16    ONLINE       0     0     0
            da15    ONLINE       0     0     0
            da14    ONLINE       0     0     0
            da13    ONLINE       0     0     0

-- 
Best regards,
Sergey Gavrilov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS - abysmal performance with samba since upgrade to 8.2-RELEASE

2011-02-28 Thread Bartosz Stec

On 2011-02-24 08:55, Jeremy Chadwick wrote:

(...snip...)
Samba
===
Rebuild the port (ports/net/samba35) with AIO_SUPPORT enabled.  To use
AIO you will need to load the aio.ko kernel module (kldload aio) first.

Relevant smb.conf tunings:

   [global]
   socket options = TCP_NODELAY SO_SNDBUF=131072 SO_RCVBUF=131072
   use sendfile = no
   min receivefile size = 16384
   aio read size = 16384
   aio write size = 16384
   aio write behind = yes



ZFS pools
===
   pool: backups
  state: ONLINE
  scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        backups     ONLINE       0     0     0
          ada2      ONLINE       0     0     0

errors: No known data errors

   pool: data
  state: ONLINE
  scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          ada1      ONLINE       0     0     0

errors: No known data errors



ZFS tunings
===
Your tunings here are "wild" (meaning all over the place).  Your use
of vfs.zfs.txg.synctime="1" is probably hurting you quite badly, in
addition to your choice to enable prefetching (every ZFS FreeBSD system
I've used has benefited tremendously from having prefetching disabled,
even on systems with 8GB RAM and more).  You do not need to specify
vm.kmem_size_max, so please remove that.  Keeping vm.kmem_size is fine.
Also get rid of your vdev tunings; I'm not sure why you have those.

My relevant /boot/loader.conf tunings for 8.2-RELEASE (note to readers:
the version of FreeBSD you're running, and build date, matters greatly
here so do not just blindly apply these without thinking first):

   # We use Samba built with AIO support; we need this module!
   aio_load="yes"

   # Increase vm.kmem_size to allow for ZFS ARC to utilise more memory.
   vm.kmem_size="8192M"
   vfs.zfs.arc_max="6144M"

   # Disable ZFS prefetching
   # http://southbrain.com/south/2008/04/the-nightmare-comes-slowly-zfs.html
   # Increases overall speed of ZFS, but when disk flushing/writes occur,
   # system is less responsive (due to extreme disk I/O).
   # NOTE: Systems with 8GB of RAM or more have prefetch enabled by
   # default.
   vfs.zfs.prefetch_disable="1"

   # Decrease ZFS txg timeout value from 30 (default) to 5 seconds.  This
   # should increase throughput and decrease the "bursty" stalls that
   # happen during immense I/O with ZFS.
   # http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007343.html
   # http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007355.html
   vfs.zfs.txg.timeout="5"



sysctl tunings
===
Please note that the below kern.maxvnodes tuning is based on my system
usage, and yours may vary, so you can remove or comment out this option
if you wish.  The same goes for vfs.ufs.dirhash_maxmem.  As for
vfs.zfs.txg.write_limit_override, I strongly suggest you keep this
commented out for starters; it effectively "rate limits" ZFS I/O, and
this smooths out overall performance (otherwise I was seeing what
appeared to be incredible network transfer speed, then the system would
churn hard for quite some time on physical I/O, then fast network speed,
physical I/O, etc... very "bursty", which I didn't want).

   # Increase send/receive buffer maximums from 256KB to 16MB.
   # FreeBSD 7.x and later will auto-tune the size, but only up to the max.
   net.inet.tcp.sendbuf_max=16777216
   net.inet.tcp.recvbuf_max=16777216

   # Double send/receive TCP datagram memory allocation.  This defines the
   # amount of memory taken up by default *per socket*.
   net.inet.tcp.sendspace=65536
   net.inet.tcp.recvspace=131072

   # dirhash_maxmem defaults to 2097152 (2048KB).  dirhash_mem has reached
   # this limit a few times, so we should increase dirhash_maxmem to
   # something like 16MB (16384*1024).
   vfs.ufs.dirhash_maxmem=16777216

   #
   # ZFS tuning parameters
   # NOTE: Be sure to see /boot/loader.conf for additional tunings
   #

   # Increase number of vnodes; we've seen vfs.numvnodes reach 115,000
   # at times.  Default max is a little over 200,000.  Playing it safe...
   kern.maxvnodes=250000

   # Set TXG write limit to a lower threshold.  This helps "level out"
   # the throughput rate (see "zpool iostat").  A value of 256MB works well
   # for systems with 4GB of RAM, while 1GB works well for us w/ 8GB on
   # disks which have 64MB cache.
   vfs.zfs.txg.write_limit_override=1073741824
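
   # A quick way to judge whether the limit is actually "levelling out"
   # writes is to watch per-second, per-vdev pool throughput while copying
   # data over the network (a sketch; substitute your own pool name):
   #
   #   zpool iostat -v data 1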



Good luck.


Jeremy, you're just invaluable! :)
In short - I applied the tips suggested above (the only differences were 
vfs.zfs.txg.write_limit_override set to 128MB, and sendfile, which I 
still have enabled) and it's the first time _ever_ I've seen samba perform 
this fast on FreeBSD (on a 100Mb link)!
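
For anyone wanting to copy that, 128MB expressed in bytes would be (a 
sketch):

   vfs.zfs.txg.write_limit_override=134217728    # 128*1024*1024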


long story:
I'm using an old, crappy, low-memory desktop PC as a home router/test 
server/(very little) storage box:


   FreeBSD 9.0-CURRENT #2 r219090: Mon Feb 28 03:06:13 CET 2011
   CPU: mobile AMD At

Re: ZFS - abysmal performance with samba since upgrade to 8.2-RELEASE

2011-02-27 Thread Marc UBM Bocklet
On Wed, 23 Feb 2011 23:55:17 -0800
Jeremy Chadwick  wrote:

> On Thu, Feb 24, 2011 at 08:30:17AM +0100, Damien Fleuriot wrote:
> > Hello list,
> > 
> > I've recently upgraded my home box from 8.2-PRE to 8.2-RELEASE and
> > since then I've been experiencing *abysmal* performance with samba.
> > 
> > We're talking transfer rates of say 50kbytes/s here, and I'm the
> > only client on the box.

[tuning tips]

Wow, thanks a lot! :-)

Those tips have finally stopped the "choppiness" with our fileserver, as
well as giving consistent throughput with samba.

Bye
Marc


-- 
Marc "UBM" Bocklet 
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS - abysmal performance with samba since upgrade to 8.2-RELEASE

2011-02-27 Thread Damien Fleuriot
On 24 February 2011 08:55, Jeremy Chadwick  wrote:
> On Thu, Feb 24, 2011 at 08:30:17AM +0100, Damien Fleuriot wrote:
>> Hello list,
>>
>> I've recently upgraded my home box from 8.2-PRE to 8.2-RELEASE and since
>> then I've been experiencing *abysmal* performance with samba.
>>
>> We're talking transfer rates of say 50kbytes/s here, and I'm the only
>> client on the box.
>
> I have a similar system with significantly fewer disks (two pools, one
> disk each; yes, no redundancy).  The system can push, via SMB/CIFS
> across the network, about 65-70MBytes/sec, and 80-90MByte/sec via FTP.
> I'll share with you my tunings for Samba, ZFS, and the system.  I spent
> quite some time messing with different values in Samba and FreeBSD to
> find out what got me the "best" performance without destroying the
> system horribly.
>
> Please note the amount of memory matters greatly here, so don't go
> blindly setting these if your system has some absurdly small amount of
> physical RAM installed.
>
> Before getting into what my system has, I also want to make clear that
> there have been cases in the past where people were seeing abysmal
> performance from ZFS, only to find out it was a *single disk* in their
> pool which was causing all of the problems (meaning a single disk was
> performing horribly, impacting everything).  I can try to find the
> mailing list post, but I believe the user offlined the disk (and later
> replaced it) and everything was fast again.  Just a FYI.
>
>
> System specifications
> ===
> * Case - Supermicro SC733T-645B
> *   MB - Supermicro X7SBA
> *  CPU - Intel Core 2 Duo E8400
> *  RAM - CT2KIT25672AA800, 4GB ECC
> *  RAM - CT2KIT25672AA80E, 4GB ECC
> * Disk - Intel X25-V SSD (ada0, boot)
> * Disk - WD1002FAEX (ada1, ZFS "data" pool)
> * Disk - WD2001FASS (ada2, ZFS "backups" pool)
>
>
>
> Samba
> ===
> Rebuild the port (ports/net/samba35) with AIO_SUPPORT enabled.  To use
> AIO you will need to load the aio.ko kernel module (kldload aio) first.
>
> Relevant smb.conf tunings:
>
>  [global]
>  socket options = TCP_NODELAY SO_SNDBUF=131072 SO_RCVBUF=131072
>  use sendfile = no
>  min receivefile size = 16384
>  aio read size = 16384
>  aio write size = 16384
>  aio write behind = yes
>
>
>
> ZFS pools
> ===
>  pool: backups
>  state: ONLINE
>  scrub: none requested
> config:
>
>        NAME        STATE     READ WRITE CKSUM
>        backups     ONLINE       0     0     0
>          ada2      ONLINE       0     0     0
>
> errors: No known data errors
>
>  pool: data
>  state: ONLINE
>  scrub: none requested
> config:
>
>        NAME        STATE     READ WRITE CKSUM
>        data        ONLINE       0     0     0
>          ada1      ONLINE       0     0     0
>
> errors: No known data errors
>
>
>
> ZFS tunings
> ===
> Your tunings here are "wild" (meaning all over the place).  Your use
> of vfs.zfs.txg.synctime="1" is probably hurting you quite badly, in
> addition to your choice to enable prefetching (every ZFS FreeBSD system
> I've used has benefited tremendously from having prefetching disabled,
> even on systems with 8GB RAM and more).  You do not need to specify
> vm.kmem_size_max, so please remove that.  Keeping vm.kmem_size is fine.
> Also get rid of your vdev tunings; I'm not sure why you have those.
>
> My relevant /boot/loader.conf tunings for 8.2-RELEASE (note to readers:
> the version of FreeBSD you're running, and build date, matters greatly
> here so do not just blindly apply these without thinking first):
>
>  # We use Samba built with AIO support; we need this module!
>  aio_load="yes"
>
>  # Increase vm.kmem_size to allow for ZFS ARC to utilise more memory.
>  vm.kmem_size="8192M"
>  vfs.zfs.arc_max="6144M"
>
>  # Disable ZFS prefetching
>  # http://southbrain.com/south/2008/04/the-nightmare-comes-slowly-zfs.html
>  # Increases overall speed of ZFS, but when disk flushing/writes occur,
>  # system is less responsive (due to extreme disk I/O).
>  # NOTE: Systems with 8GB of RAM or more have prefetch enabled by
>  # default.
>  vfs.zfs.prefetch_disable="1"
>
>  # Decrease ZFS txg timeout value from 30 (default) to 5 seconds.  This
>  # should increase throughput and decrease the "bursty" stalls that
>  # happen during immense I/O with ZFS.
>  # http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007343.html
>  # http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007355.html
>  vfs.zfs.txg.timeout="5"
>
>
>
> sysctl tunings
> ===
> Please note that the below kern.maxvnodes tuning is based on my system
> usage, and yours may vary, so you can remove or comment out this option
> if you wish.  The same goes for vfs.ufs.dirhash_maxmem.  As for
> vfs.zfs.txg.write_limit_override, I strongly suggest you keep this
> commented out for starters; it effectively "rate limits" ZFS I/O, and
> this smooths out overall performance (otherwise I was seeing what
> appeared to be incredibl

Re: ZFS - abysmal performance with samba since upgrade to 8.2-RELEASE

2011-02-24 Thread Jeremy Chadwick
On Thu, Feb 24, 2011 at 07:23:54PM +0100, Christer Solskogen wrote:
> On Thu, Feb 24, 2011 at 8:55 AM, Jeremy Chadwick
>  wrote:
> 
> >
> >  # Set TXG write limit to a lower threshold.  This helps "level out"
> >  # the throughput rate (see "zpool iostat").  A value of 256MB works well
> >  # for systems with 4GB of RAM, while 1GB works well for us w/ 8GB on
> >  # disks which have 64MB cache.
> >  vfs.zfs.txg.write_limit_override=1073741824
> >
> >
> 
> Sorry if you have said this before, but could you elaborate a bit
> about this number? For instance, how much does the cache on the disk
> have to say?
> In my case: 3x1.5TB raidz with WD15EADS-00R6B0 drives which have 32MB cache
> and 12GB of memory. What would you recommend, and why?

There's no real way to provide an in-depth analysis of this number; that
is to say, hard disk parameters (RPM, cache, and overall performance of
the drive (highly dependent upon on-disk firmware)) ultimately dictate
what's "best" for this number.  I also imagine the number of disks plays a
role as well.  This is why I advocate not messing with it unless you want
to try and "level out" throughput.

The value itself is literally 1024*1024*1024 (1GB).  Don't think this is
some magic number; it's just what I came up with.  You can search the
FreeBSD lists for references to the variable itself and find other
people advocating values like ~33MByte, etc.  It all depends on how you
want the system to behave.
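
For concreteness, the value is plain bytes, so for example (a sketch):

   256 MB -> vfs.zfs.txg.write_limit_override=268435456
   512 MB -> vfs.zfs.txg.write_limit_override=536870912
     1 GB -> vfs.zfs.txg.write_limit_override=1073741824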

I don't particularly like watching the system behave like I described
(you snipped that portion of my text), which is why I use this variable.

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.   PGP 4BD6C0CB |

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS - abysmal performance with samba since upgrade to 8.2-RELEASE

2011-02-24 Thread Christer Solskogen
On Thu, Feb 24, 2011 at 8:55 AM, Jeremy Chadwick
 wrote:

>
>  # Set TXG write limit to a lower threshold.  This helps "level out"
>  # the throughput rate (see "zpool iostat").  A value of 256MB works well
>  # for systems with 4GB of RAM, while 1GB works well for us w/ 8GB on
>  # disks which have 64MB cache.
>  vfs.zfs.txg.write_limit_override=1073741824
>
>

Sorry if you have said this before, but could you elaborate a bit
about this number? For instance, how much does the cache on the disk
have to say?
In my case: 3x1.5TB raidz with WD15EADS-00R6B0 drives which have 32MB cache
and 12GB of memory. What would you recommend, and why?

-- 
chs,
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS - abysmal performance with samba since upgrade to 8.2-RELEASE

2011-02-24 Thread Joshua Boyd
On Thu, Feb 24, 2011 at 2:55 AM, Jeremy Chadwick
 wrote:
> On Thu, Feb 24, 2011 at 08:30:17AM +0100, Damien Fleuriot wrote:
>> Hello list,
>>
>> I've recently upgraded my home box from 8.2-PRE to 8.2-RELEASE and since
>> then I've been experiencing *abysmal* performance with samba.
>>
>> We're talking transfer rates of say 50kbytes/s here, and I'm the only
>> client on the box.
>
> I have a similar system with significantly fewer disks (two pools, one
> disk each; yes, no redundancy).  The system can push, via SMB/CIFS
> across the network, about 65-70MBytes/sec, and 80-90MByte/sec via FTP.
> I'll share with you my tunings for Samba, ZFS, and the system.  I spent
> quite some time messing with different values in Samba and FreeBSD to
> find out what got me the "best" performance without destroying the
> system horribly.
>

Hey Jeremy,

Thanks for this post. These settings seem to have fixed my Samba
configuration. I had configured with AIO previously, but apparently my
tunings weren't spot on.  I'd get buffering over 100Mbit with 1080p
video, but 720p video would work just fine. Looks like this is what I
needed to be able to ditch using DLNA for streaming to my Popbox!


-- 
Joshua Boyd
JBipNet

E-mail: boy...@jbip.net
http://www.jbip.net
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS - abysmal performance with samba since upgrade to 8.2-RELEASE

2011-02-24 Thread Damien Fleuriot
On 2/24/11 8:55 AM, Jeremy Chadwick wrote:
> On Thu, Feb 24, 2011 at 08:30:17AM +0100, Damien Fleuriot wrote:
>> Hello list,
>>
>> I've recently upgraded my home box from 8.2-PRE to 8.2-RELEASE and since
>> then I've been experiencing *abysmal* performance with samba.
>>
>> We're talking transfer rates of say 50kbytes/s here, and I'm the only
>> client on the box.
> 
> I have a similar system with significantly fewer disks (two pools, one
> disk each; yes, no redundancy).  The system can push, via SMB/CIFS
> across the network, about 65-70MBytes/sec, and 80-90MByte/sec via FTP.
> I'll share with you my tunings for Samba, ZFS, and the system.  I spent
> quite some time messing with different values in Samba and FreeBSD to
> find out what got me the "best" performance without destroying the
> system horribly.
> 
> Please note the amount of memory matters greatly here, so don't go
> blindly setting these if your system has some absurdly small amount of
> physical RAM installed.
> 
> Before getting into what my system has, I also want to make clear that
> there have been cases in the past where people were seeing abysmal
> performance from ZFS, only to find out it was a *single disk* in their
> pool which was causing all of the problems (meaning a single disk was
> performing horribly, impacting everything).  I can try to find the
> mailing list post, but I believe the user offlined the disk (and later
> replaced it) and everything was fast again.  Just a FYI.
> 
> 

[SNIP]

> Good luck.
> 



It is funny because when I wrote my original email, I thought about both
you and mm@


Thanks a lot for your very detailed response; I will try these out and
post feedback :)
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: ZFS - abysmal performance with samba since upgrade to 8.2-RELEASE

2011-02-23 Thread Jeremy Chadwick
On Thu, Feb 24, 2011 at 08:30:17AM +0100, Damien Fleuriot wrote:
> Hello list,
> 
> I've recently upgraded my home box from 8.2-PRE to 8.2-RELEASE and since
> then I've been experiencing *abysmal* performance with samba.
> 
> We're talking transfer rates of say 50kbytes/s here, and I'm the only
> client on the box.

I have a similar system with significantly fewer disks (two pools, one
disk each; yes, no redundancy).  The system can push, via SMB/CIFS
across the network, about 65-70MBytes/sec, and 80-90MByte/sec via FTP.
I'll share with you my tunings for Samba, ZFS, and the system.  I spent
quite some time messing with different values in Samba and FreeBSD to
find out what got me the "best" performance without destroying the
system horribly.

Please note the amount of memory matters greatly here, so don't go
blindly setting these if your system has some absurdly small amount of
physical RAM installed.

Before getting into what my system has, I also want to make clear that
there have been cases in the past where people were seeing abysmal
performance from ZFS, only to find out it was a *single disk* in their
pool which was causing all of the problems (meaning a single disk was
performing horribly, impacting everything).  I can try to find the
mailing list post, but I believe the user offlined the disk (and later
replaced it) and everything was fast again.  Just a FYI.
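
If you suspect something like that, two quick ways to spot a single slow
disk dragging a pool down (a sketch; substitute your own pool name):

  zpool iostat -v rtank 1     # per-disk throughput within the pool
  gstat -a                    # per-device %busy and latency under load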


System specifications
===
* Case - Supermicro SC733T-645B
*   MB - Supermicro X7SBA
*  CPU - Intel Core 2 Duo E8400
*  RAM - CT2KIT25672AA800, 4GB ECC
*  RAM - CT2KIT25672AA80E, 4GB ECC
* Disk - Intel X25-V SSD (ada0, boot)
* Disk - WD1002FAEX (ada1, ZFS "data" pool)
* Disk - WD2001FASS (ada2, ZFS "backups" pool)



Samba
===
Rebuild the port (ports/net/samba35) with AIO_SUPPORT enabled.  To use
AIO you will need to load the aio.ko kernel module (kldload aio) first.

Relevant smb.conf tunings:

  [global]
  socket options = TCP_NODELAY SO_SNDBUF=131072 SO_RCVBUF=131072
  use sendfile = no
  min receivefile size = 16384
  aio read size = 16384
  aio write size = 16384
  aio write behind = yes
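
In case it helps, the rebuild steps boil down to something like this (a
sketch; assumes a standard ports tree):

  kldload aio                  # load the AIO module now
  kldstat | grep aio           # confirm it is loaded
  cd /usr/ports/net/samba35
  make config                  # enable AIO_SUPPORT
  make reinstall clean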



ZFS pools
===
  pool: backups
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        backups     ONLINE       0     0     0
          ada2      ONLINE       0     0     0

errors: No known data errors

  pool: data
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          ada1      ONLINE       0     0     0

errors: No known data errors



ZFS tunings
===
Your tunings here are "wild" (meaning all over the place).  Your use
of vfs.zfs.txg.synctime="1" is probably hurting you quite badly, in
addition to your choice to enable prefetching (every ZFS FreeBSD system
I've used has benefited tremendously from having prefetching disabled,
even on systems with 8GB RAM and more).  You do not need to specify
vm.kmem_size_max, so please remove that.  Keeping vm.kmem_size is fine.
Also get rid of your vdev tunings; I'm not sure why you have those.

My relevant /boot/loader.conf tunings for 8.2-RELEASE (note to readers:
the version of FreeBSD you're running, and build date, matters greatly
here so do not just blindly apply these without thinking first):

  # We use Samba built with AIO support; we need this module!
  aio_load="yes"

  # Increase vm.kmem_size to allow for ZFS ARC to utilise more memory.
  vm.kmem_size="8192M"
  vfs.zfs.arc_max="6144M"

  # Disable ZFS prefetching
  # http://southbrain.com/south/2008/04/the-nightmare-comes-slowly-zfs.html
  # Increases overall speed of ZFS, but when disk flushing/writes occur,
  # system is less responsive (due to extreme disk I/O).
  # NOTE: Systems with 8GB of RAM or more have prefetch enabled by
  # default.
  vfs.zfs.prefetch_disable="1"

  # Decrease ZFS txg timeout value from 30 (default) to 5 seconds.  This
  # should increase throughput and decrease the "bursty" stalls that
  # happen during immense I/O with ZFS.
  # http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007343.html
  # http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007355.html
  vfs.zfs.txg.timeout="5"
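
After a reboot, a quick sanity check that these took effect would be
something like (a sketch):

  sysctl vm.kmem_size vfs.zfs.arc_max vfs.zfs.prefetch_disable vfs.zfs.txg.timeout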



sysctl tunings
===
Please note that the below kern.maxvnodes tuning is based on my system
usage, and yours may vary, so you can remove or comment out this option
if you wish.  The same goes for vfs.ufs.dirhash_maxmem.  As for
vfs.zfs.txg.write_limit_override, I strongly suggest you keep this
commented out for starters; it effectively "rate limits" ZFS I/O, and
this smooths out overall performance (otherwise I was seeing what
appeared to be incredible network transfer speed, then the system would
churn hard for quite some time on physical I/O, then fast network speed,
physical I/O, etc... very "bursty", which I didn't want).

  # Increase send/receive buffer maximums from 256KB to 16MB.
  # 

ZFS - abysmal performance with samba since upgrade to 8.2-RELEASE

2011-02-23 Thread Damien Fleuriot
Hello list,




I've recently upgraded my home box from 8.2-PRE to 8.2-RELEASE and since
then I've been experiencing *abysmal* performance with samba.


We're talking transfer rates of say 50kbytes/s here, and I'm the only
client on the box.


I've seen discussions here and there about sendfile support, and how
it is recommended to disable it in samba.

Now, the man page tells us:
Default: use sendfile = false
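
Explicitly turning it off in smb.conf would just be (a sketch, in case the
default ever changes):

   use sendfile = no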



On the other hand, FTP and SFTP transfers are OK, at 20MBytes/s or so.


Find below a bit of info



smb.conf
---

[global]
   workgroup = MYGROUP
   server string = Samba Server
   security = user
   hosts allow = 192.168.0.1 192.168.1.1 fe80::1
   load printers = no
   log file = /var/log/samba/log.%m
   max log size = 50
# You may want to add the following on a Linux system:
   socket options = SO_RCVBUF=8192 SO_SNDBUF=8192
   dns proxy = no
   bind interfaces only = yes
   interfaces = re0 lagg0
   hide unreadable = yes


[homes]
   comment = Home Directories
   browseable = no
   writable = yes




loader.conf
---

# Tune ZFS somewhat aye ?
vm.kmem_size="3072M"
vm.kmem_size_max="3072M"
vfs.zfs.arc_min="128M"
vfs.zfs.arc_max="2048M"
vfs.zfs.txg.synctime="1"
vfs.zfs.prefetch_disable="0"
vfs.zfs.vdev.min_pending="4"
vfs.zfs.vdev.max_pending="8"





ZFS pool
---

nas# zpool status
  pool: rtank
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scrub: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        rtank             ONLINE       0     0     0
          raidz2          ONLINE       0     0     0
            gpt/zfs-ada0  ONLINE       0     0     0
            gpt/zfs-ada1  ONLINE       0     0     0
            gpt/zfs-ada2  ONLINE       0     0     0
            gpt/zfs-ada3  ONLINE       0     0     0
            gpt/zfs-ad14  ONLINE       0     0     0
            gpt/zfs-ad10  ONLINE       0     0     0
            gpt/zfs-ad11  ONLINE       0     0     0
            gpt/zfs-ad12  ONLINE       0     0     0
            gpt/zfs-ad13  ONLINE       0     0     0


zpool version 14
zfs version 3
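
For the record, if/when compatibility with older ZFS versions is no longer
a concern, bringing these up to date would just be (a sketch):

   zpool upgrade rtank
   zfs upgrade -r rtank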


So: sendfile disabled, ZFS pool not quite up to date, good FTP/ssh
performance, horribad samba performance.


Does anyone have an idea of what I should be looking at?

Again, this only began after I upgraded from 8.2-PRE/RC3 to
8.2-RELEASE.
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"