> On Fri, 26 Jun 2009, Richard Elling wrote:
> 
> >> All the tools I have used show no IO problems. I think the problem is
> >> memory but I am unsure on how to troubleshoot it.
> >
> > Look for latency, not bandwidth.  iostat will show latency at the
> > device level.
> 
> Unfortunately, the effect may not be all that obvious since the disks
> will only be driven as hard as the slowest disk and so the slowest
> disk may not seem much slower.
> 
> Bob

I checked the output of iostat. svc_t is between 5 and 50 ms, depending on when
data is flushed to the disk (CIFS write pattern). %b is between 10 and 50, and
%w is always 0.
Example:
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
sd27     31.5  127.0  935.9  616.7  0.0 11.9   75.2   0  66
sd28      5.0    0.0  320.0    0.0  0.0  0.1   18.0   0   9

This tells me the disks are busy, but I do not know what they are doing. Are
they spending their time seeking, writing, or reading?
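
One thing I may try next is DTrace. A rough sketch, assuming the io provider
works as documented here, that splits the service-time distribution into reads
and writes per device:

    dtrace -n '
    io:::start { ts[arg0] = timestamp; }
    io:::done /ts[arg0]/ {
      /* latency in microseconds, keyed by device name and direction */
      @[args[1]->dev_statname,
        args[0]->b_flags & B_READ ? "read" : "write"] =
          quantize((timestamp - ts[arg0]) / 1000);
      ts[arg0] = 0;
    }'

That would at least separate read latency from write latency; seek time as
such is not directly visible this way, but high latency on small transfers
would hint at it.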

I also reviewed some ARC stats. Here is the output.
ARC Efficiency:
         Cache Access Total:             199758875
         Cache Hit Ratio:      74%       148652045      [Defined State for Buffer]
         Cache Miss Ratio:     25%       51106830       [Undefined State for Buffer]
         REAL Hit Ratio:       73%       146091795      [MRU/MFU Hits Only]

         Data Demand   Efficiency:    94%
         Data Prefetch Efficiency:    15%

        CACHE HITS BY CACHE LIST:
          Anon:                       --%        Counter Rolled.
          Most Recently Used:         22%        33843327 (mru)         [ Return Customer ]
          Most Frequently Used:       75%        112248468 (mfu)        [ Frequent Customer ]
          Most Recently Used Ghost:    3%        4833189 (mru_ghost)    [ Return Customer Evicted, Now Back ]
          Most Frequently Used Ghost: 22%        33831706 (mfu_ghost)   [ Frequent Customer Evicted, Now Back ]
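
For cross-checking, the raw counters behind that summary should also be
visible through the arcstats kstat (counter names as I understand them; worth
verifying on your release):

    kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max
    kstat -p zfs:0:arcstats:mru_ghost_hits zfs:0:arcstats:mfu_ghost_hits

If size sits pinned at c_max while the ghost-list hit counters keep growing,
that would support the "ARC too small" theory.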


It seems to me that with mfu_ghost at 22%, I may need a bigger ARC.
Is the ARC also designed to work with large memory footprints (128 GB, for
example, or higher)? Will it be as efficient?
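
If the ARC does turn out to be capped below what this box could give it, my
understanding is that the ceiling can be set in /etc/system via zfs_arc_max
(value in bytes) followed by a reboot. A sketch, with a purely hypothetical
100 GB cap for a 128 GB machine:

    * hypothetical 100 GB ARC cap; size this for your own workload
    set zfs:zfs_arc_max = 107374182400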