I have noticed a strange issue on two idle Solaris 10 U6 systems where 
the reported size of the ARC cache is less than c_min. I was under the 
impression that c_min is a hard limit on the minimum size of the ARC 
cache [1]. As an example, I have included the output of arc_summary[2] 
and kstat[3] for one of the systems.

Is the relation "arcstats:size >= arcstats:c_min" always supposed to hold? Or are 
there legitimate cases where the ARC cache will shrink below the c_min limit?
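
In case it helps, this is the rough check I have been running to spot the
condition (my own quick sketch using kstat's parsable output, not anything
from an official tool; the awk field handling is just an assumption about
the "name<TAB>value" format kstat -p prints):

        # Compare the live ARC size against c_min via kstat -p.
        size=$(kstat -p zfs:0:arcstats:size | awk '{print $2}')
        c_min=$(kstat -p zfs:0:arcstats:c_min | awk '{print $2}')
        if [ "$size" -lt "$c_min" ]; then
                echo "ARC size ($size) is below c_min ($c_min)"
        fi

On both idle systems the script reports size below c_min, which matches the
excerpts below.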

Thanks,

  - Mark


[1]

http://mail.opensolaris.org/pipermail/perf-discuss/2009-June/002312.html


[2] Excerpt from Ben Rockwood's arc_summary:

System Memory:
         Physical RAM:  32046 MB
         Free Memory :  4255 MB
         LotsFree:      496 MB

ZFS Tunables (/etc/system):

ARC Size:
         Current Size:             288 MB (arcsize)
         Target Size (Adaptive):   25728 MB (c)
         Min Size (Hard Limit):    3877 MB (zfs_arc_min)
         Max Size (Hard Limit):    31022 MB (zfs_arc_max)


[3] Output of "kstat zfs:0:arcstats"

module: zfs                             instance: 0
name:   arcstats                        class:    misc
        c                               26978509172
        c_max                           32529489920
        c_min                           4066186240
        crtime                          2251383.06747617
        data_size                       166427648
        deleted                         20710
        demand_data_hits                1708848
        demand_data_misses              161416
        demand_metadata_hits            2532283
        demand_metadata_misses          129957
        evict_skip                      455
        hash_chain_max                  7
        hash_chains                     66762
        hash_collisions                 221681
        hash_elements                   323508
        hash_elements_max               330316
        hdr_size                        67452840
        hits                            4582847
        l2_abort_lowmem                 0
        l2_cksum_bad                    0
        l2_evict_lock_retry             0
        l2_evict_reading                0
        l2_feeds                        0
        l2_free_on_write                0
        l2_hdr_size                     0
        l2_hits                         0
        l2_io_error                     0
        l2_misses                       0
        l2_read_bytes                   0
        l2_rw_clash                     0
        l2_size                         0
        l2_write_bytes                  0
        l2_writes_done                  0
        l2_writes_error                 0
        l2_writes_hdr_miss              0
        l2_writes_sent                  0
        memory_throttle_count           0
        mfu_ghost_hits                  630167
        mfu_hits                        3166923
        misses                          1090660
        mru_ghost_hits                  137293
        mru_hits                        1076653
        mutex_miss                      2290
        other_size                      69767200
        p                               3228355146
        prefetch_data_hits              5940
        prefetch_data_misses            761634
        prefetch_metadata_hits          335776
        prefetch_metadata_misses        37653
        recycle_miss                    3549
        size                            303647688
        snaptime                        3616598.76049297
