Since we were drowning, we decided to go ahead and reboot with my 
guesses, even though I have not heard any expert opinions on the 
changes. (Also, 3 minutes was a serious underestimate: it takes 12 
minutes to reboot our x4500.)

The new values are (original values in parentheses):

set bufhwm_pct=10        (2%)
set maxusers=4096        (2048)
set ndquot=5048000       (50480)
set ncsize=1038376       (129797)
set ufs_ninode=1038376   (129797)
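As a sanity check before a costly reboot, something like the following can verify that each tunable is staged exactly once. This is only a sketch: the scratch file stands in for a copy of /etc/system, and the check loop is mine, not part of any standard procedure.

```shell
# Stage the new tunables in a scratch file (stand-in for a copy of
# /etc/system) and confirm each one appears exactly once -- a duplicate
# "set" line in /etc/system can silently shadow an earlier value.
f=$(mktemp)
cat >> "$f" <<'EOF'
set bufhwm_pct=10
set maxusers=4096
set ndquot=5048000
set ncsize=1038376
set ufs_ninode=1038376
EOF
for t in bufhwm_pct maxusers ndquot ncsize ufs_ninode; do
    n=$(grep -c "^set ${t}=" "$f")
    echo "${t}: ${n} entry(ies)"
done
```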


It does appear to run better, but it is hard to tell. In 7 out of 10 
tries, statvfs64() takes less than 2 seconds, but I have seen it take 
as long as 14 seconds.
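The timing test itself is just repeated df calls against the slow mount. A rough, portable sketch follows; the mount-point parameter, the date-based timing, and the 10-try loop are my assumptions (on the x4500 the argument would be the slow UFS filesystem, and truss or ptime would give finer resolution than whole seconds).

```shell
# Time df (and hence statvfs64) on a mount point, ten tries, to see the
# spread. Defaults to the current directory so the sketch runs anywhere;
# note that `date +%s` may need GNU date rather than stock Solaris date.
mnt=${1:-.}
for i in 1 2 3 4 5 6 7 8 9 10; do
    t0=$(date +%s)
    df -k "$mnt" >/dev/null
    t1=$(date +%s)
    echo "try $i: $((t1 - t0))s"
done
```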

However, 2 hours later the x4500 hung. It was pingable, but there was 
no console and no NFS response. The "LOM" was fine, and I performed a 
remote reboot.

Since then it has stayed up for 5 hours.

    PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP 

    521 daemon   7404K 6896K sleep   60  -20   0:25:03 3.1% nfsd/754
Total: 1 processes, 754 lwps, load averages: 0.82, 0.79, 0.79

CPU states: 90.6% idle,  0.0% user,  9.4% kernel,  0.0% iowait,  0.0% swap
Memory: 16G real, 829M free, 275M swap in use, 16G swap free


  10191915 total name lookups (cache hits 82%)

         maxsize                         1038376
         maxsize reached                 993770

(I increased it by nearly 10x and the 'maxsize reached' counter is 
still high.)
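For the record, the utilization can be pulled straight out of the kstat output. A small sketch using the numbers above; the sample text is pasted in for illustration, and on the live box it would instead be piped from `kstat -n inode_cache`.

```shell
# Compare 'maxsize reached' against 'maxsize' from the UFS inode-cache
# kstat. Sample output is embedded here; normally this would be
#   kstat -n inode_cache | awk '...'
kstat_out='maxsize                         1038376
maxsize reached                 993770'
util=$(printf '%s\n' "$kstat_out" | awk '
    /maxsize reached/ { reached = $3; next }   # "maxsize reached <n>"
    /maxsize/         { max = $2 }             # "maxsize <n>"
    END { printf "%.0f%% of maxsize reached", 100 * reached / max }')
echo "$util"
```

A high-water mark this close to the cap suggests the cache is still being pushed to its limit despite the larger ufs_ninode.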


Lund


Jorgen Lundman wrote:
> We are having slow performance with the UFS volumes on the x4500. They
> are slow even on the local server, which makes me think it is (for 
> once) not NFS related.
> 
> 
> Current settings:
> 
> SunOS x4500-01.unix 5.11 snv_70b i86pc i386 i86pc
> 
> # cat /etc/release
>                  Solaris Express Developer Edition 9/07 snv_70b X86
>             Copyright 2007 Sun Microsystems, Inc.  All Rights Reserved.
>                          Use is subject to license terms.
>                              Assembled 30 August 2007
> 
> NFSD_SERVERS=1024
> LOCKD_SERVERS=128
> 
>     PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
> 
>   12249 daemon   7204K 6748K sleep   60  -20  54:16:26  14% nfsd/731
> 
> load averages:  2.22,  2.32,  2.42     12:31:35
> 63 processes:  62 sleeping, 1 on cpu
> CPU states: 68.7% idle,  0.0% user, 31.3% kernel,  0.0% iowait,  0.0% swap
> Memory: 16G real, 1366M free, 118M swap in use, 16G swap free
> 
> 
> /etc/system:
> 
> set ndquot=5048000
> 
> 
> We have a setup like:
> 
> /export/zfs1
> /export/zfs2
> /export/zfs3
> /export/zfs4
> /export/zfs5
> /export/zdev/vol1/ufs1
> /export/zdev/vol2/ufs2
> /export/zdev/vol3/ufs3
> 
> What is interesting is that if I run "df", it will display everything at 
> normal speed, but pause before the "vol1/ufs1" file system. truss confirms 
> that statvfs64() is slow (usually 5 seconds). All other ZFS and UFS 
> filesystems behave normally. vol1/ufs1 is the most heavily used UFS 
> filesystem.
> 
> Disk:
> /dev/zvol/dsk/zpool1/ufs1
>                         991G   224G   758G    23%    /export/ufs1
> 
> Inodes:
> /dev/zvol/dsk/zpool1/ufs1
>                      37698475 25044053    60%   /export/ufs1
> 
> 
> 
> 
> Possible problems:
> 
> # vmstat -s
> 866193018 total name lookups (cache hits 57%)
> 
> # kstat -n inode_cache
> module: ufs                             instance: 0
> name:   inode_cache                     class:    ufs
>       maxsize                         129797
>       maxsize reached                 269060
>       thread idles                    319098740
>       vget idles                      62136
> 
> 
> This leads me to think we should consider setting;
> 
> set ncsize=259594        (doubled... are there better values?)
> set ufs_ninode=259594
> 
> in /etc/system, and reboot. But it is costly to reboot based only on my
> guess. Do you have any other suggestions to explore? Will this help?
> 
> 
> Sincerely,
> 
> Jorgen Lundman
> 
> 

-- 
Jorgen Lundman       | <[EMAIL PROTECTED]>
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo    | +81 (0)90-5578-8500          (cell)
Japan                | +81 (0)3 -3375-1767          (home)
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
