I've been noticing recently that kernel memory steadily climbs... After boot it starts at about 9%, and after a single gate build it's over 22%.
I've seen it go over 40% after a medium-sized pkgsrc bulk build.

richard@omnis:/home/richard/src/illumos-gate$ echo "::memstat;::kmastat" | pfexec mdb -k
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                    1836820              7175   22%
ZFS File Data             3151588             12310   38%
Anon                       153146               598    2%
Exec and libs                3038                11    0%
Page cache                  31056               121    0%
Free (cachelist)           135099               527    2%
Free (freelist)           3074327             12009   37%

Total                     8385074             32754
Physical                  8385072             32754

cache                          buf       buf    buf     memory      alloc  alloc
name                          size    in use  total     in use    succeed   fail
------------------------------ ----- --------- ------- ---------- ---------- -----
...
Total [hat_memload]                                67.3M   418468801     0
Total [kmem_msb]                                   1.41G    30612937     0
Total [kmem_firewall]                               867M    11302480     0
Total [kmem_va]                                    1.27G      363007     0
Total [kmem_default]                               1.38G  1610322588     0
Total [kmem_io_64G]                                  76M        9728     0
Total [kmem_io_4G]                                   44K          41     0
Total [kmem_io_2G]                                   12K           5     0
Total [bp_map]                                         0           3     0
Total [umem_np]                                        0         498     0
Total [id32]                                          4K          83     0
Total [zfs_file_data]                               764M       51006     0
Total [zfs_file_data_buf]                          12.0G     1164166     0
Total [segkp]                                       448K      670751     0
Total [ip_minor_arena_sa]                             64        2008     0
Total [ip_minor_arena_la]                             64        1124     0
Total [spdsock]                                        0           1     0
Total [namefs_inodes]                                 64         262     0
------------------------------ ----- --------- ------- ---------- ---------- -----

vmem                           memory     memory    memory      alloc  alloc
name                           in use      total    import    succeed   fail
------------------------------ --------- ---------- --------- ---------- -----
heap                             6.20G       987G         0    11389993     0
vmem_metadata                     636M       636M      636M       39905     0
vmem_seg                          621M       621M      621M       39762     0
vmem_hash                        14.0M      14.0M     14.0M          72     0
vmem_vmem                         288K       320K      284K          99     0
static                               0          0         0           0     0
static_alloc                         0          0         0           0     0
hat_memload                      67.3M      67.3M     67.3M       17833     0
kstat                             786K       824K      760K        2479     0
kmem_metadata                    1.44G      1.44G     1.44G      369853     0
kmem_msb                         1.41G      1.41G     1.41G      369392     0
kmem_cache                        586K       604K      604K         541     0
kmem_hash                        36.6M      36.7M     36.7M         827     0
kmem_log                         1.23G      1.23G     1.23G          12     0
kmem_firewall_va                 1.09G      1.09G     1.09G    11302795     0
kmem_firewall                     867M       867M      867M    11302491     0
kmem_oversize                     254M       255M      255M         305     0
mod_sysfile                        275         4K        4K           8     0
kmem_va                          1.39G      1.39G     1.39G       11580     0
kmem_default                     1.38G      1.38G     1.38G      364168     0
kmem_io_64G                        76M        76M       76M        9728     0
kmem_io_4G                         44K        44K       44K          11     0
kmem_io_2G                         68K        68K       68K          82     0
kmem_io_16M                          0          0         0           0     0
bp_map                               0          0         0         205     0
umem_np                              0          0         0         427     0
ksyms                            2.13M      2.39M     2.39M         612     0
ctf                               964K      1.09M     1.09M         607     0
heap_core                        1.93M       888M         0          61     0
heaptext                         9.83M        64M         0         220     0
module_text                      11.4M      11.8M     9.83M         612     0
id32                                4K         4K        4K           1     0
module_data                      1.22M      2.22M     1.93M         759     0
logminor_space                      27       256K         0          35     0
taskq_id_arena                     122      2.00G         0         215     0
zfs_file_data                    12.1G      32.0G         0      105236     0
zfs_file_data_buf                12.0G      12.0G     12.0G      150110     0
device                           1.65M         1G         0       33002     0
segkp                            32.0M         2G         0       28817     0
mac_minor_ids                        8       127K         0           9     0
rctl_ids                            41      32.0K         0          41     0
zoneid_space                         0      9.76K         0           0     0
taskid_space                        40       977K         0          88     0
pool_ids                             1       977K         0           1     0
contracts                           43      2.00G         0          94     0
ddi_periodic                         0       1023         0           0     0
ip_minor_arena_sa                   64       256K         0          17     0
ip_minor_arena_la                   64      4.00G         0          14     0
lport-instances                      0        64K         0           0     0
rport-instances                      0        64K         0           0     0
ibcm_local_sid                       0      4.00G         0           0     0
ibcm_ip_sid                          0      64.0K         0           0     0
lib_va_32                        7.68M      1.99G         0          20     0
tl_minor_space                     288       256K         0        1214     0
keysock                              0      4.00G         0           0     0
spdsock                              0      4.00G         0           1     0
namefs_inodes                       64        64K         0           1     0
lib_va_64                         105M       125T         0         608     0
Hex0xffffff0961518488_minor          0      4.00G         0           0     0
Hex0xffffff0961518490_minor          0      4.00G         0           0     0
devfsadm_event_channel               1        101         0           1     0
devfsadm_event_channel               1          2         0           1     0
syseventd_channel                    2        101         0           2     0
syseventd_channel                    1          2         0           1     0
syseventconfd_door                   0        101         0           0     0
syseventconfd_door                   1          2         0           1     0
dtrace                              68      4.00G         0       48174     0
dtrace_minor                         0      4.00G         0           0     0
ipf_minor                            0      4.00G         0           0     0
ipmi_id_space                        2        127         0           5     0
eventfd_minor                       74      4.00G         0         236     0
logdmux_minor                        0        256         0           0     0
ptms_minor                           3         16         0           3     0
Client_id_space                      0       128K         0           0     0
ClntIP_id_space                      0         1M         0           0     0
OpenOwner_id_space                   0         1M         0           0     0
OpenStateID_id_space                 0         1M         0           0     0
LockStateID_id_space                 0         1M         0           0     0
Lockowner_id_space                   0         1M         0           0     0
DelegStateID_id_space                0         1M         0           0     0
shmids                               0         64         0          97     0
------------------------------ --------- ---------- --------- ---------- -----

richard@omnis:/home/richard/src/illumos-gate$ kstat -n process_cache
module: unix                            instance: 0
name:   process_cache                   class:    kmem_cache
        align                           8
        alloc                           520455
        alloc_fail                      0
        buf_avail                       0
        buf_constructed                 0
        buf_inuse                       92
        buf_max                         1758
        buf_size                        3920
        buf_total                       92
        chunk_size                      3920
        crtime                          47,305215503
        defrag                          0
        depot_alloc                     0
        depot_contention                0
        depot_free                      0
        empty_magazines                 0
        free                            520363
        full_magazines                  0
        hash_lookup_depth               96862
        hash_rescale                    9
        hash_size                       128
        magazine_size                   0
        move_callbacks                  0
        move_dont_know                  0
        move_dont_need                  0
        move_hunt_found                 0
        move_later                      0
        move_no                         0
        move_reclaimable                0
        move_slabs_freed                0
        move_yes                        0
        reap                            0
        scan                            0
        slab_alloc                      520455
        slab_create                     520455
        slab_destroy                    520363
        slab_free                       520363
        slab_size                       4096
        snaptime                        8897,308415399
        vmem_source                     18

Notice the slab stats? Nobody else sees this?

--
Richard PALO

-------------------------------------------
illumos-discuss Archives: https://www.listbox.com/member/archive/182180/=now
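As a sanity check on the ::memstat numbers at the top of the output: on x86 a page is 4 KiB, so the MB column is just pages × 4096 / 2^20, and %Tot is the row's pages over the physical total. A quick awk one-liner (values copied from the Kernel row above) confirms the arithmetic:

```shell
# Verify the ::memstat Kernel row: 1836820 pages at 4 KiB each,
# out of 8385074 total physical pages.
awk 'BEGIN {
    pages = 1836820      # Kernel pages from ::memstat
    total = 8385074      # Total pages from ::memstat
    printf "Kernel: %d MB, %d%% of physical memory\n",
        pages * 4096 / 1048576,
        int(pages * 100 / total + 0.5)
}'
# -> Kernel: 7175 MB, 22% of physical memory
```

So the 22% figure really does mean the kernel is holding roughly 7 GB of a 32 GB machine after one gate build.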
