I am trying to see how ZFS behaves under resource starvation - corner cases in
embedded environments. I see some very strange behavior. Any help/explanation
would really be appreciated.
My current setup is:
OpenSolaris 111b (iSCSI seems to be broken in build 132 - unable to get multiple
connections/multipathing)
iSCSI storage array capable of:
20 MB/s random writes @ 4k and 70 MB/s random reads @ 4k
150 MB/s random writes @ 128k and 180 MB/s random reads @ 128k
180+ MB/s sequential reads and writes at both 4k and 128k
8 Intel CPUs and 12 GB of RAM (Dell PowerEdge 610)
The ARC size is limited to 512MB (hard limit). No L2 Cache.
In both tests below the file system size is about 300 GB. This file system
contains a single directory with about 15'000 files totalling 200 GB (so
the file system is 2/3 full). The tests are run within the same directory.
Test 1:
Random writes @ 4k to 1000 1MB files (1000 threads, 1 per file).
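For reference, the workload can be sketched in Python (this is not the tool actually used in the test; file count and write count are scaled down here purely for illustration):

```python
# Sketch of the Test 1 workload: N threads, each issuing random 4 KiB
# writes at random aligned offsets within its own 1 MiB file.
# Scaled down: the original test used 1000 threads / 1000 files.
import os
import random
import tempfile
import threading

FILE_SIZE = 1 << 20      # 1 MiB per file
BLOCK = 4096             # 4 KiB writes
N_FILES = 4              # original test: 1000
WRITES_PER_FILE = 16     # illustrative; the real test ran much longer

def writer(path):
    buf = os.urandom(BLOCK)
    with open(path, "r+b") as f:
        for _ in range(WRITES_PER_FILE):
            # pick a random 4 KiB-aligned offset inside the file
            off = random.randrange(FILE_SIZE // BLOCK) * BLOCK
            f.seek(off)
            f.write(buf)
            f.flush()

tmpdir = tempfile.mkdtemp()
paths = []
for i in range(N_FILES):
    p = os.path.join(tmpdir, f"f{i}")
    with open(p, "wb") as f:
        f.truncate(FILE_SIZE)   # preallocate a sparse 1 MiB file
    paths.append(p)

threads = [threading.Thread(target=writer, args=(p,)) for p in paths]
for t in threads:
    t.start()
for t in threads:
    t.join()
```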
First I observe that ARC size grows (momentarily) above 512 MB limit (via kstat
and arcstat.pl).
Q: It seems that zfs:zfs_arc_max is not really a hard limit?
I tried setting primarycache to none, metadata, and all. The reported I/O is
similar in the NONE and METADATA cases (17 MB/s), while with ALL it is 3-4
times lower (4-5 MB/s).
Q: Any explanation would be useful.
In this test I observe that backend I/O averages 132 MB/s for reads and 51
MB/s for writes.
Q: Why is more data read than written?
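One way to bound the question is a back-of-the-envelope read-modify-write calculation: a sub-record write into an uncached record forces the whole record to be read first. The sketch below assumes the default 128 KiB recordsize, which the post does not state:

```python
# Worst-case read amplification if every 4 KiB write lands in an
# uncached record and triggers a full-record read first.
# ASSUMPTION: recordsize is the 128 KiB default (not given in the post).
RECORDSIZE = 128 * 1024
WRITE_SIZE = 4 * 1024

worst_case_reads_per_write = RECORDSIZE / WRITE_SIZE
print(worst_case_reads_per_write)        # 32.0

# Observed backend byte ratio reported in the post (132 MB/s : 51 MB/s):
observed_ratio = 132 / 51
print(round(observed_ratio, 1))          # 2.6
```

The observed 2.6:1 byte ratio is well under the 32:1 worst case, which would be consistent with many writes hitting records already cached, but this is only one possible accounting.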
Test 2:
Random writes @ 4k to 10'000 1MB files (10'000 threads, 1 per file).
- ARC size now goes to 1 GB during the entire test (way above the hard limit)
- ::memstat reports that ZFS file data grew from the original 430 MB to about 1.5 GB
Q: Does mdb memstat reporting include ARC?
Q: On the backend I see 170 MB/s reads and 0.5 MB/s writes -- what is happening
here?
Some sample output:
---
::memstat
Page Summary          Pages      MB   %Tot
Kernel               800933    3128    25%
ZFS File Data        394450    1540    13%
Anon                 128909     503     4%
Exec and libs          4172      16     0%
Page cache            14749      57     0%
Free (cachelist)      21884      85     1%
Free (freelist)     1776079    6937    57%
Total               3141176   12270
Physical            3141175   12270
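As a sanity check on the ::memstat table above, the page counts times the 4 KiB x86 base pagesize should reproduce the MB column:

```python
# Verify ::memstat's MB column from its Pages column.
# x86 Solaris uses a 4 KiB base page.
PAGESIZE = 4096

rows = {
    "Kernel": 800933,
    "ZFS File Data": 394450,
    "Free (freelist)": 1776079,
    "Total": 3141176,
}

mb = {name: pages * PAGESIZE // (1 << 20) for name, pages in rows.items()}
for name, val in mb.items():
    print(name, val)
# Kernel -> 3128, ZFS File Data -> 1540,
# Free (freelist) -> 6937, Total -> 12270
```

So the 1540 MB of "ZFS File Data" reported by ::memstat is internally consistent, and it is this figure (plus kernel-held ARC metadata) that should be compared against the ARC size.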
--
System Memory:
Physical RAM: 12270 MB
Free Memory : 6966 MB
LotsFree: 191 MB
ZFS Tunables (/etc/system):
set zfs:zfs_prefetch_disable = 1
set zfs:zfs_arc_max = 0x20000000
set zfs:zfs_arc_min = 0x10000000
ARC Size:
Current Size:             669 MB (arcsize)
Target Size (Adaptive):   512 MB (c)
Min Size (Hard Limit):    256 MB (zfs_arc_min)
Max Size (Hard Limit):    512 MB (zfs_arc_max)
ARC Size Breakdown:
Most Recently Used Cache Size:    6%   32 MB (p)
Most Frequently Used Cache Size: 93%  480 MB (c-p)
ARC Efficiency:
Cache Access Total:       47002757
Cache Hit Ratio:    52%   24657634   [Defined State for buffer]
Cache Miss Ratio:   47%   22345123   [Undefined State for Buffer]
REAL Hit Ratio:     52%   24657634   [MRU/MFU Hits Only]
Data Demand Efficiency:    36%
Data Prefetch Efficiency:  DISABLED (zfs_prefetch_disable)
CACHE HITS BY CACHE LIST:
Anon:                        --%   Counter Rolled.
Most Recently Used:          13%   3420349  (mru)       [Return Customer]
Most Frequently Used:        86%   21237285 (mfu)       [Frequent Customer]
Most Recently Used Ghost:    16%   4057965  (mru_ghost) [Return Customer Evicted, Now Back]
Most Frequently Used Ghost:  31%   7837353  (mfu_ghost) [Frequent Customer Evicted, Now Back]
CACHE HITS BY DATA TYPE:
Demand Data:        31%   7793822
Prefetch Data:       0%   0
Demand Metadata:    68%   16863812
Prefetch Metadata:   0%   0
CACHE MISSES BY DATA TYPE:
Demand Data:        60%   13573358
Prefetch Data:       0%   0
Demand Metadata:    39%   8771406
Prefetch Metadata:   0%   359
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss