[zfs-discuss] Re: Deterioration with zfs performance and recent zfs bits?

2007-06-04 Thread Jürgen Keil
I wrote

 Instead of compiling opensolaris for 4-6 hours, I've now used
 the following find / grep test on the on-2007-05-30 sources:
 
 1st test using Nevada build 60:
 
 % cd /files/onnv-2007-05-30
 % repeat 10 /bin/time find usr/src/ -name '*.[hc]' -exec grep FooBar {} +

This find + grep command basically

- does a recursive scan looking for *.h and *.c files
- at the end of the recursive directory scan invokes a single grep
  command with roughly 20000 filename args (wc reports 19355 matching
  files in the tree below; see the equivalent sketch just below).
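
For anyone not used to the "-exec ... {} +" form: it batches the collected
pathnames into as few grep invocations as possible, much like piping into
xargs.  A roughly equivalent pipeline (just a sketch, not what was actually
timed here) would be:

% find usr/src -name '*.[hc]' -print | xargs grep FooBar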


Simplifying the test a bit more: snv_60 is able to cache all of the
metadata for a compiled onnv source tree on a 32-bit x86 machine with
768 MB of physical memory:

% cd /files/wos_b67
% repeat 10 sh -c "/bin/time find usr/src/ -name '*.[hc]' -print | wc"

 run    real     user   sys
   1   2:11.7    0.2    3.2
   2      2.4    0.1    1.4
   3      2.2    0.1    1.5
   4      2.0    0.1    1.4
   5   1:21.8    0.2    1.7    (seems that some meta-data was freed here...)
   6   1:21.0    0.2    1.7
   7     45.9    0.1    1.6
   8      3.2    0.1    1.3
   9      1.9    0.1    1.3
  10      2.8    0.1    1.3

(every run printed the same wc output: 19355 19355 772864)


(and the next 10 finds all completed in ~2 seconds per find)
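
One way to check whether the directory metadata really stays in the ARC
between runs is to watch the ARC size and target while the loop runs.
A minimal sketch, assuming the zfs arcstats kstat is available on these
builds:

% kstat -p zfs:0:arcstats:size zfs:0:arcstats:c 5

(prints the current ARC size and target every 5 seconds; a shrinking size
between runs would line up with the slow 1:21 and 45.9s outliers above)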


Build 67 is unable to cache the metadata for the same find
command on the same zfs filesystem:

% cd /files/wos_b67
% repeat 10 sh -c "/bin/time find usr/src/ -name '*.[hc]' -print | wc"

 run    real     user   sys
   1   3:20.7    0.5    7.5
   2   3:07.0    0.5    5.5
   3   2:44.6    0.5    4.7
   4   2:06.1    0.4    3.9
   5   1:16.1    0.4    3.5
   6     33.0    0.4    2.7
   7     40.8    0.4    3.0
   8     18.8    0.3    2.6
   9   2:32.2    0.4    4.2
  10   2:05.4    0.4    3.9

(every run printed the same wc output: 19355 19355 772864)
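
To see where the extra time goes on build 67, it can help to watch the
pool's physical read traffic in a second window while the find loop runs.
A sketch, where "files" merely stands in for whatever pool is mounted at
/files:

% zpool iostat -v files 5

(if the metadata were being cached, the read columns should stay near zero
between runs)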
 
 


[zfs-discuss] Re: Deterioration with zfs performance and recent zfs bits?

2007-06-01 Thread Jürgen Keil
I wrote

 Has anyone else noticed a significant zfs performance
 deterioration when running recent opensolaris bits?
 
 My 32-bit / 768 MB Toshiba Tecra S1 notebook was able
 to do a full opensolaris release build in ~ 4 hours 45
 minutes (gcc shadow compilation disabled; using an lzjb-
 compressed zpool/zfs on a single notebook P-ATA hard drive).
 
 After upgrading to 2007-05-25 opensolaris release
 bits (compiled from source), the same release build now
 needs ~ 6 hours; that's ~ 25% slower.

It might be Bug ID 6469558
ZFS prefetch needs to be more aware of memory pressure:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6469558
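
A quick way to see whether the ARC is squeezing (or being squeezed by)
the rest of this 768 MB system would be to compare the kernel memory
picture against the ARC's own size and limits.  A sketch, assuming
::memstat and the arcstats kstat behave as usual on these bits:

# echo ::memstat | mdb -k
% kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max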


Instead of compiling opensolaris for 4-6 hours, I've now used
the following find / grep test on the on-2007-05-30 sources:


1st test using Nevada build 60:

% cd /files/onnv-2007-05-30
% repeat 10 /bin/time find usr/src/ -name '*.[hc]' -exec grep FooBar {} +
usr/src/lib/pam_modules/authtok_check/authtok_check.c:   * user entering FooBar1234 with PASSLENGTH=6, MINDIGIT=4, while

(this single grep match was printed by each of the 10 runs; timings per run:)

 run    real     user   sys
   1   4:22.5    3.3    5.8
   2   4:28.4    3.3    4.8
   3   4:18.0    3.3    4.7
   4   4:17.3    3.3    4.8
   5   4:15.0    3.3    4.7
   6   4:12.0    3.3    4.7
   7   4:21.9    3.3    4.7
   8   4:18.7    3.3    4.7
   9   4:19.5    3.3    4.7
  10   4:17.2    3.3    4.7


Same test, but running onnv-2007-05-30 release bits
(compiled from source).  This is at least 25% slower than
snv_60 (steady-state runs take ~5:45 instead of ~4:18 of
real time, i.e. roughly a third more):


(Note: zfs_prefetch_disable = 0, the default value)
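
For reference, the current setting can be read back from the running
kernel roughly like this (a sketch; /D prints the variable as a 32-bit
decimal):

# echo 'zfs_prefetch_disable/D' | mdb -k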

% repeat 10 /bin/time find usr/src/ -name '*.[hc]' -exec grep FooBar {} +
usr/src/lib/pam_modules/authtok_check/authtok_check.c:   * user entering FooBar1234 with PASSLENGTH=6, MINDIGIT=4, while

(this single grep match was printed by each of the 10 runs; timings per run:)

 run    real     user   sys
   1   8:04.3    7.3   13.2
   2   6:34.4    7.3   11.2
   3   6:33.8    7.3   11.1
   4   5:35.6    7.3   10.6
   5   5:39.8    7.3   10.6
   6   5:37.8    7.3   11.1
   7   5:53.5    7.3   11.0
   8   5:45.2    7.3   11.1
   9   5:44.8    7.3   11.0
  10   5:49.1    7.3   11.0



Then I patched the running kernel with zfs_prefetch_disable/W1
(via mdb -kw; see the sketch below), and now the find + grep test
runs much faster on onnv-2007-05-30 bits:
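
For anyone who wants to reproduce this, the runtime patch and a
persistent equivalent would look roughly like the following sketch
(0t1 is mdb notation for decimal 1; the /etc/system line only takes
effect after the next reboot):

# echo 'zfs_prefetch_disable/W0t1' | mdb -kw
# echo 'set zfs:zfs_prefetch_disable = 1' >> /etc/system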

(Note: zfs_prefetch_disable = 1)

% repeat 10 /bin/time find usr/src/ -name '*.[hc]' -exec grep FooBar {} +
usr/src/lib/pam_modules/authtok_check/authtok_check.c:   * user entering FooBar1234 with PASSLENGTH=6, MINDIGIT=4, while

real  4:01.3
user     7.2
sys      9.9

Re: [zfs-discuss] Re: Deterioration with zfs performance and recent zfs bits?

2007-06-01 Thread Rob Logan


 Patching zfs_prefetch_disable = 1 has helped

It's my belief this mainly helps metadata scans.  My testing with
rsync and yours with find (and what you can see by running du while
watching zpool iostat -v 1) bears this out.  It's mainly tracked in
bug 6437054, "vdev_cache: wise up or die":
http://www.opensolaris.org/jive/thread.jspa?messageID=42212

So it might help while you're just linking your code, but if one ran
a clean build down the tree, it would hurt compile times.
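
A quick way to see both sides of that trade-off on the same pool (just a
sketch, not something from this thread; the large file path is only a
placeholder) is to time a big sequential read and the find scan once with
prefetch enabled and once with it disabled:

# echo 'zfs_prefetch_disable/W0t0' | mdb -kw
% /bin/time cat /files/some-large-file > /dev/null
% /bin/time find usr/src -name '*.[hc]' -print > /dev/null
# echo 'zfs_prefetch_disable/W0t1' | mdb -kw
% /bin/time cat /files/some-large-file > /dev/null
% /bin/time find usr/src -name '*.[hc]' -print > /dev/null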

Rob