Re: ZFS performance strangeness

2011-04-25 Thread krad
On 24 April 2011 17:21, Sergio de Almeida Lenzi lenzi.ser...@gmail.comwrote:

 Em Ter, 2011-04-12 às 13:33 +0200, Lars Wilke escreveu:

  Hi,
 
  There are quite a few threads about ZFS and performance difficulties,
  but I did not find anything that really helped :)
  Therefore any advice would be highly appreciated.
  I started to use ZFS with 8.1R; the only tuning I did was setting
 
  vm.kmem_size_scale=1
  vfs.zfs.arc_max=4M

 For me, I solved the ZFS performance problem in FreeBSD with PostgreSQL
 databases (about 100 GB in size) by tuning vm.kmem_size to about 3/4 of
 the RAM size. In your case, vm.kmem_size = (48 * 3/4) = 36G; it puts
 almost all of the database in memory and it is now lightning fast.
 I also usually disable prefetch in ZFS.

 Hope this can help,

 Sergio
 ___
 freebsd-questions@freebsd.org mailing list
 http://lists.freebsd.org/mailman/listinfo/freebsd-questions
 To unsubscribe, send any mail to 
 freebsd-questions-unsubscr...@freebsd.org



Wouldn't it be better to allow the DB to use the memory rather than ZFS, as
this would involve far fewer context switches?


Re: ZFS performance strangeness

2011-04-24 Thread Sergio de Almeida Lenzi
Em Ter, 2011-04-12 às 13:33 +0200, Lars Wilke escreveu:

 Hi,
 
 There are quite a few threads about ZFS and performance difficulties,
 but I did not find anything that really helped :)
 Therefore any advice would be highly appreciated.
 I started to use ZFS with 8.1R; the only tuning I did was setting
 
 vm.kmem_size_scale=1
 vfs.zfs.arc_max=4M

For me, I solved the ZFS performance problem in FreeBSD with PostgreSQL
databases (about 100 GB in size) by tuning vm.kmem_size to about 3/4 of
the RAM size. In your case, vm.kmem_size = (48 * 3/4) = 36G; it puts
almost all of the database in memory and it is now lightning fast.
I also usually disable prefetch in ZFS.
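
On a 48 GB machine this would translate to something like the following
in /boot/loader.conf (the values are illustrative for that RAM size, not
tested settings):

```
# /boot/loader.conf -- tuning along the lines described above
# (values illustrative for a 48 GB machine)
vm.kmem_size="36G"
vfs.zfs.prefetch_disable="1"
```

Both are loader tunables, so a reboot is needed for them to take effect.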

Hope this can help,

Sergio


Re: ZFS performance strangeness

2011-04-21 Thread Lars Wilke
Hi,

thanks to all who answered.

* Damien Fleuriot wrote:
  I refer you to this post by Jeremy Chadwick with tuning values *AND*
  their actual explanation.

  http://lists.freebsd.org/pipermail/freebsd-stable/2011-February/061642.html

Good post indeed, but I think I found my problem elsewhere, though I have
to admit that I don't really understand what is happening.

The Citrix XenServers 5.6 boxes are connected with 2x1GB, using bonding with
balance-slb, to an HP ProCurve 2910al-24G Switch.

It seems this balance-slb mode is a patch from Citrix, and I could not
find any documentation for it. The only thing I know is that it is a
modified form of balance-alb.

The FreeBSD box is also connected to this switch with two NICs, one em
and one igb driven NIC. If I do not use lagg and assign just a single
address to one of the NICs on FreeBSD, everything works well.

I only set vm.kmem_size_scale and vfs.zfs.arc_max and get r/w speeds a
little above 100 MB/s when doing: dd bs=1M count=4096 ...
and remounting before reading or writing.

But if I use lagg in loadbalance or in lacp mode (here I also enabled
LACP on the necessary switch ports), I see via tcpdump that IP packets
seem to get lost!? On top of that, this only happens when reading data
via NFS; writing is fine.

Then again, lagg in failover mode seems to work OK.

The NICs themselves look fine and are automatically set to 1000baseT
full duplex. There are also no errors reported via netstat.

I am not sure whether the switch does not like the ARP address games
played on its ports or whether it has something to do with the SLB
implementation from Citrix. When I have a downtime window for the
XenServer I will reconfigure it to use LACP; then let's see if that works.
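
For reference, a minimal rc.conf sketch of the LACP variant described
above (the interface names em0/igb0 and the address are placeholders;
adjust them to the actual hardware):

```
# /etc/rc.conf -- lagg in LACP mode (sketch; names/address are placeholders)
ifconfig_em0="up"
ifconfig_igb0="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport em0 laggport igb0 192.0.2.10/24"
```

The switch ports the two NICs plug into must be configured as one LACP
trunk for this to negotiate.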

cheers
   --lars



Re: ZFS performance strangeness

2011-04-14 Thread Ivan Voras

On 12/04/2011 13:33, Lars Wilke wrote:


Now I upgraded one machine to 8.2R and I get very good write performance
over NFS, but read performance drops to a ridiculously low value, around
1-2 MB/s, while writes are around 100 MB/s. The network is a dedicated


If you don't get any answer here, try posting to the freebsd-fs@freebsd.org 
list.




Re: ZFS performance strangeness

2011-04-14 Thread Damien Fleuriot


On 4/12/11 1:33 PM, Lars Wilke wrote:
 Hi,
 
 There are quite a few threads about ZFS and performance difficulties,
 but I did not find anything that really helped :)
 Therefore any advice would be highly appreciated.
 I started to use ZFS with 8.1R; the only tuning I did was setting
 
 vm.kmem_size_scale=1
 vfs.zfs.arc_max=4M
 
 The machines are supermicro boards with 48 GB ECC RAM and 15k RPM SAS
 drives. Local read/write performance was and is great.
 But exporting via NFS was a mixed bag in 8.1R.
 Generally r/w speed over NFS was ok, but large reads or writes took
 ages. Most of the reads and writes were small, so i did not bother.
 
 Now I upgraded one machine to 8.2R and I get very good write performance
 over NFS, but read performance drops to a ridiculously low value, around
 1-2 MB/s, while writes are around 100 MB/s. The network is a dedicated
 1 Gb Ethernet. The zpool uses RAIDZ1 over 7 drives, one vdev.
 The filesystem has compression enabled; turning it off made no
 difference, AFAICT.
 
 Now I tried a few of the suggested tunables; my last try was this:
 
 vfs.zfs.txg.timeout=5
 vfs.zfs.prefetch_disable=1
 vfs.zfs.txg.synctime=2
 vfs.zfs.vdev.min_pending=1
 vfs.zfs.vdev.max_pending=1
 
 Still no luck. Writing is fast, reading is not, even with prefetching
 enabled. The only thing I noticed is that reading, for example, 10 MB
 is fast (on a freshly mounted fs), but when reading larger amounts, i.e.
 a couple hundred MBs, the performance drops and zpool iostat or iostat -x
 show that there is not much activity on the zpool/HDDs.
 
 It seems as if ZFS does not care that someone wants to read data; the
 idle time of the reading process also happily ticks up, higher and higher!?
 When trying to access the file during this time, e.g. with ls -la on the
 file, the process blocks and is sometimes difficult to kill.
 
 I read and write with dd, and before read tests I umount and mount the
 NFS share again.
 
 dd if=/dev/zero of=/mnt/bla bs=1M count=X
 dd if=/mnt/bla of=/dev/null bs=1M count=Y
 
 The mount is done with these options from two CentOS 5 boxes:
 rw,noatime,tcp,bg,intr,hard,nfsvers=3,noacl,nocto
 
 thanks
--lars
 



I refer you to this post by Jeremy Chadwick with tuning values *AND*
their actual explanation.

http://lists.freebsd.org/pipermail/freebsd-stable/2011-February/061642.html


ZFS performance strangeness

2011-04-12 Thread Lars Wilke
Hi,

There are quite a few threads about ZFS and performance difficulties,
but I did not find anything that really helped :)
Therefore any advice would be highly appreciated.
I started to use ZFS with 8.1R; the only tuning I did was setting

vm.kmem_size_scale=1
vfs.zfs.arc_max=4M

The machines are supermicro boards with 48 GB ECC RAM and 15k RPM SAS
drives. Local read/write performance was and is great.
But exporting via NFS was a mixed bag in 8.1R.
Generally r/w speed over NFS was ok, but large reads or writes took
ages. Most of the reads and writes were small, so i did not bother.

Now I upgraded one machine to 8.2R and I get very good write performance
over NFS, but read performance drops to a ridiculously low value, around
1-2 MB/s, while writes are around 100 MB/s. The network is a dedicated
1 Gb Ethernet. The zpool uses RAIDZ1 over 7 drives, one vdev.
The filesystem has compression enabled; turning it off made no
difference, AFAICT.

Now I tried a few of the suggested tunables; my last try was this:

vfs.zfs.txg.timeout=5
vfs.zfs.prefetch_disable=1
vfs.zfs.txg.synctime=2
vfs.zfs.vdev.min_pending=1
vfs.zfs.vdev.max_pending=1

Still no luck. Writing is fast, reading is not, even with prefetching
enabled. The only thing I noticed is that reading, for example, 10 MB
is fast (on a freshly mounted fs), but when reading larger amounts, i.e.
a couple hundred MBs, the performance drops and zpool iostat or iostat -x
show that there is not much activity on the zpool/HDDs.

It seems as if ZFS does not care that someone wants to read data; the
idle time of the reading process also happily ticks up, higher and higher!?
When trying to access the file during this time, e.g. with ls -la on the
file, the process blocks and is sometimes difficult to kill.

I read and write with dd, and before read tests I umount and mount the
NFS share again.

dd if=/dev/zero of=/mnt/bla bs=1M count=X
dd if=/mnt/bla of=/dev/null bs=1M count=Y

The mount is done with these options from two CentOS 5 boxes:
rw,noatime,tcp,bg,intr,hard,nfsvers=3,noacl,nocto
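
Expressed as an /etc/fstab entry on the clients, that mount would look
roughly like this (the server name and export path are placeholders):

```
# /etc/fstab on the CentOS 5 clients -- server/export are placeholders
nfsserver:/tank/export  /mnt/bla  nfs  rw,noatime,tcp,bg,intr,hard,nfsvers=3,noacl,nocto  0 0
```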

thanks
   --lars
