[zfs-discuss] Why does ARC grow above hard limit?

2010-04-05 Thread Mike Z
I would appreciate it if somebody could clarify a few points.

I am doing some random WRITE testing (100% writes, 100% random) and observe that the ARC grows way beyond the hard limit during the test. The hard limit is set to 512 MB via /etc/system, yet I see the size going up to 1 GB - how is that happening?

mdb's ::memstat reports 1.5 GB used - does this include the ARC as well, or is the ARC counted separately?

On the backend I see only reads (205 MB/s) and almost no writes (1.1 MB/s) - any idea what is being read?
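For reference, the two numbers above come straight from the stock tools (nothing custom), so they can be compared side by side:

# kstat -p zfs:0:arcstats:size
# echo ::memstat | mdb -k | grep 'ZFS File Data'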

--- BEFORE TEST 
# ~/bin/arc_summary.pl

System Memory:
 Physical RAM:  12270 MB
 Free Memory :  7108 MB
 LotsFree:  191 MB

ZFS Tunables (/etc/system):
 set zfs:zfs_prefetch_disable = 1
 set zfs:zfs_arc_max = 0x20000000
 set zfs:zfs_arc_min = 0x10000000

ARC Size:
 Current Size: 136 MB (arcsize)
 Target Size (Adaptive):   512 MB (c)
 Min Size (Hard Limit):256 MB (zfs_arc_min)
 Max Size (Hard Limit):512 MB (zfs_arc_max)
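(For the record: 0x20000000 bytes works out to 512 MB and 0x10000000 to 256 MB, which is where the min/max hard limits above come from.)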
...


> ::memstat
Page Summary                Pages                MB  %Tot
------------------------ ----------------  --------  ----
Kernel                     800895              3128   25%
ZFS File Data              394450              1540   13%
Anon                       106813               417    3%
Exec and libs                4178                16    0%
Page cache                  14333                55    0%
Free (cachelist)            22996                89    1%
Free (freelist)           1797511              7021   57%

Total                     3141176             12270
Physical                  3141175             12270


--- DURING THE TEST
# ~/bin/arc_summary.pl 
System Memory:
 Physical RAM:  12270 MB
 Free Memory :  6687 MB
 LotsFree:  191 MB

ZFS Tunables (/etc/system):
 set zfs:zfs_prefetch_disable = 1
 set zfs:zfs_arc_max = 0x20000000
 set zfs:zfs_arc_min = 0x10000000

ARC Size:
 Current Size: 1336 MB (arcsize)
 Target Size (Adaptive):   512 MB (c)
 Min Size (Hard Limit):256 MB (zfs_arc_min)
 Max Size (Hard Limit):512 MB (zfs_arc_max)

ARC Size Breakdown:
 Most Recently Used Cache Size:   87%  446 MB (p)
 Most Frequently Used Cache Size: 12%  65 MB (c-p)

ARC Efficiency:
 Cache Access Total:        51681761
 Cache Hit Ratio:      52%  27056475  [Defined State for buffer]
 Cache Miss Ratio:     47%  24625286  [Undefined State for Buffer]
 REAL Hit Ratio:       52%  27056475  [MRU/MFU Hits Only]

 Data Demand   Efficiency:    35%
 Data Prefetch Efficiency:    DISABLED (zfs_prefetch_disable)

CACHE HITS BY CACHE LIST:
  Anon:                       --%  Counter Rolled.
  Most Recently Used:         13%  3627289 (mru)       [ Return Customer ]
  Most Frequently Used:       86%  23429186 (mfu)      [ Frequent Customer ]
  Most Recently Used Ghost:   17%  4657584 (mru_ghost) [ Return Customer Evicted, Now Back ]
  Most Frequently Used Ghost: 32%  8712009 (mfu_ghost) [ Frequent Customer Evicted, Now Back ]
CACHE HITS BY DATA TYPE:
  Demand Data:        30%  8308866
  Prefetch Data:       0%  0
  Demand Metadata:    69%  18747609
  Prefetch Metadata:   0%  0
CACHE MISSES BY DATA TYPE:
  Demand Data:        61%  15113029
  Prefetch Data:       0%  0
  Demand Metadata:    38%  9511898
  Prefetch Metadata:   0%  359


[zfs-discuss] ZFS behavior under limited resources

2010-04-02 Thread Mike Z
I am trying to see how ZFS behaves under resource starvation - corner cases in 
embedded environments. I see some very strange behavior. Any help/explanation 
would really be appreciated.

My current setup is:
OpenSolaris 111b (iSCSI seems to be broken in 132 - unable to get multiple connections/multipathing)
iSCSI storage array capable of:
 20 MB/s random writes @ 4k and 70 MB/s random reads @ 4k
 150 MB/s random writes @ 128k and 180 MB/s random reads @ 128k
 180+ MB/s sequential reads and writes at both 4k and 128k
8 Intel CPUs and 12 GB of RAM (Dell PowerEdge 610)

The ARC size is limited to 512 MB (hard limit). There is no L2ARC.

In both tests below the file system size is about 300 GB. The file system contains a single directory with about 15'000 files totalling 200 GB (so the file system is 2/3 full). The tests are run within that directory.
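(The size/usage figures are just the usual dataset numbers, i.e. what something like "# zfs list -o name,used,avail" reports for the filesystem; the pool and dataset names are not relevant here.)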

Test 1:
Random writes @ 4k to 1000 1MB files (1000 threads, 1 per file).

First I observe that the ARC size grows (momentarily) above the 512 MB limit (seen via kstat and arcstat.pl).
Q: It seems that zfs:zfs_arc_max is not really a hard limit?
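For reference, I am watching the raw counters with plain kstat, e.g.:

# kstat -p zfs:0:arcstats:size zfs:0:arcstats:c 5

which prints the current ARC size and the adaptive target (c) every 5 seconds.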

I tried setting primarycache to none, metadata, and all. The reported I/O is similar in the NONE and METADATA cases (17 MB/s), while with ALL it is 3-4 times lower (4-5 MB/s).
Q: Any explanation would be useful.
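The property was switched per dataset with the standard zfs command ('tank/testfs' below is only a placeholder name):

# zfs set primarycache=metadata tank/testfs
# zfs get primarycache tank/testfs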

In this test I observe that backend I/O averages 132 MB/s for reads and 51 MB/s for writes.
Q: Why is more being read than written?
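(The backend figures are per-device throughput; on the initiator side something like "# iostat -xnz 5" breaks out the read and write rates for each LUN.)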

Test 2:
Random writes @ 4k to 10'000 1MB files (10'000 threads, 1 per file).

- The ARC size now stays around 1 GB for the entire test (way above the hard limit)

- ::memstat reports that ZFS file data grew from the original 430 MB to about 1.5 GB
Q: Does mdb's ::memstat reporting include the ARC?

Q: On the backend I see 170 MB/s reads and 0.5 MB/s writes -- what is happening here?



SOME sample output ...

---
> ::memstat
Page Summary                Pages                MB  %Tot
------------------------ ----------------  --------  ----
Kernel                     800933              3128   25%
ZFS File Data              394450              1540   13%
Anon                       128909               503    4%
Exec and libs                4172                16    0%
Page cache                  14749                57    0%
Free (cachelist)            21884                85    1%
Free (freelist)           1776079              6937   57%

Total                     3141176             12270
Physical                  3141175             12270

--
System Memory:
 Physical RAM:  12270 MB
 Free Memory :  6966 MB
 LotsFree:  191 MB

ZFS Tunables (/etc/system):
 set zfs:zfs_prefetch_disable = 1
 set zfs:zfs_arc_max = 0x20000000
 set zfs:zfs_arc_min = 0x10000000

ARC Size:
 Current Size: 669 MB (arcsize)
 Target Size (Adaptive):   512 MB (c)
 Min Size (Hard Limit):256 MB (zfs_arc_min)
 Max Size (Hard Limit):512 MB (zfs_arc_max)

ARC Size Breakdown:
 Most Recently Used Cache Size:    6%  32 MB (p)
 Most Frequently Used Cache Size: 93%  480 MB (c-p)

ARC Efficiency:
 Cache Access Total:        47002757
 Cache Hit Ratio:      52%  24657634  [Defined State for buffer]
 Cache Miss Ratio:     47%  22345123  [Undefined State for Buffer]
 REAL Hit Ratio:       52%  24657634  [MRU/MFU Hits Only]

 Data Demand   Efficiency:    36%
 Data Prefetch Efficiency:    DISABLED (zfs_prefetch_disable)

CACHE HITS BY CACHE LIST:
  Anon:                       --%  Counter Rolled.
  Most Recently Used:         13%  3420349 (mru)       [ Return Customer ]
  Most Frequently Used:       86%  21237285 (mfu)      [ Frequent Customer ]
  Most Recently Used Ghost:   16%  4057965 (mru_ghost) [ Return Customer Evicted, Now Back ]
  Most Frequently Used Ghost: 31%  7837353 (mfu_ghost) [ Frequent Customer Evicted, Now Back ]
CACHE HITS BY DATA TYPE:
  Demand Data:        31%  7793822
  Prefetch Data:       0%  0
  Demand Metadata:    68%  16863812
  Prefetch Metadata:   0%  0
CACHE MISSES BY DATA TYPE:
  Demand Data:        60%  13573358
  Prefetch Data:       0%  0
  Demand Metadata:    39%  8771406
  Prefetch Metadata:   0%  359