> If you repeat that lockstat, does the result look
> similar?
> cpu usage by "cpu[0]+4", in tsc_read(),
> ddi_mem_get32(),
> tsc_gethrtime(), ...drv_usecwait() ?
>
>
> Maybe we can find out who's calling drv_usecwait(),
> using:
> lockstat -kIW -f drv_usecwait -s 10 sleep 15
Okay, I think I caught them all here:
First, the two runs in the sane state (~0% CPU):
# lockstat -kIW -f drv_usecwait -s 10 sleep 15
Profiling interrupt: 1 events in 15.041 seconds (0 events/sec)
-------------------------------------------------------------------------------
Count indv cuml rcnt nsec Hottest CPU+PIL Caller
1 100% 100% 0.00 1246 cpu[0] drv_usecwait
nsec ------ Time Distribution ------ count Stack
2048 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1 ec_wait_ibf_clear
ec_rd
ec_handler
AcpiEvAddressSpaceDispatch
AcpiExAccessRegion
AcpiExFieldDatumIo
AcpiExExtractFromField
AcpiExReadDataFromField
AcpiExResolveNodeToValue
-------------------------------------------------------------------------------
# lockstat -kIW -D 20 sleep 15
Profiling interrupt: 2918 events in 15.042 seconds (194 events/sec)
Count indv cuml rcnt nsec Hottest CPU+PIL Caller
-------------------------------------------------------------------------------
2896 99% 99% 0.00 3174 cpu[1] i86_mwait
12 0% 100% 0.00 3050 cpu[0] (usermode)
2 0% 100% 0.00 2757 cpu[0] mutex_enter
1 0% 100% 0.00 1944 cpu[1]+11 savectx
1 0% 100% 0.00 1886 cpu[1] cv_broadcast
1 0% 100% 0.00 4440 cpu[1] page_get_mnode_freelist
1 0% 100% 0.00 1777 cpu[1] bt_getlowbit
1 0% 100% 0.00 3452 cpu[0] hwblkpagecopy
1 0% 100% 0.00 3109 cpu[0]+5 ddi_mem_put8
1 0% 100% 0.00 3844 cpu[0] _sys_sysenter_post_swapgs
1 0% 100% 0.00 1414 cpu[0]+2 dtrace_dynvar_clean
-------------------------------------------------------------------------------
The first command usually returned nothing; I ran it around 10 times until I
got that output above.
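Side note: since the profiling interrupt only rarely lands inside drv_usecwait() while the box is mostly idle, a DTrace aggregation should catch every call instead of relying on samples. A rough one-liner (assuming the fbt provider is usable on this build) would be:
# dtrace -n 'fbt::drv_usecwait:entry { @[stack()] = count(); } tick-15s { exit(0); }'
That counts each caller's kernel stack over 15 seconds, no matter how short the individual waits are.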
Next, the two runs at ~50% CPU usage:
# lockstat -kIW -D 20 sleep 15
Profiling interrupt: 3268 events in 16.849 seconds (194 events/sec)
Count indv cuml rcnt nsec Hottest CPU+PIL Caller
-------------------------------------------------------------------------------
1601 49% 49% 0.00 1098 cpu[1]+9 i86_mwait
781 24% 73% 0.00 881 cpu[0]+4 tsc_read
315 10% 83% 0.00 531420 cpu[0]+4 ddi_getl
245 7% 90% 0.00 871 cpu[0]+4 tsc_gethrtime
136 4% 94% 0.00 864 cpu[0]+4 mul32
83 3% 97% 0.00 860 cpu[0]+4 gethrtime
73 2% 99% 0.00 869 cpu[0]+4 drv_usecwait
8 0% 99% 0.00 75265 cpu[1] (usermode)
4 0% 99% 0.00 1023 cpu[1]+9 mutex_delay_default
3 0% 99% 0.00 2278 cpu[0]+4 do_splx
3 0% 100% 0.00 1653 cpu[0] AcpiUtDebugPrint
1 0% 100% 0.00 3645 cpu[1]+9 as_segcompar
1 0% 100% 0.00 1710 cpu[1]+9 avl_find
1 0% 100% 0.00 3877 cpu[1]+9 page_lookup_create
1 0% 100% 0.00 976 cpu[1]+9 default_lock_delay
1 0% 100% 0.00 3036 cpu[1]+9 mutex_enter
1 0% 100% 0.00 3232 cpu[1]+9 inb
1 0% 100% 0.00 1633692 cpu[1]+9 ddi_io_put32
1 0% 100% 0.00 951528 cpu[1]+9 ddi_getw
1 0% 100% 0.00 1419253 cpu[1] ddi_getb
-------------------------------------------------------------------------------
# lockstat -kIW -f drv_usecwait -s 10 sleep 15
Profiling interrupt: 88 events in 16.823 seconds (5 events/sec)
-------------------------------------------------------------------------------
Count indv cuml rcnt nsec Hottest CPU+PIL Caller
86 98% 98% 0.00 867 cpu[0]+4 drv_usecwait
nsec ------ Time Distribution ------ count Stack
1024 |@@@@@@@@@@@@@@@@@@@@@@@@@@ 76 bge_poll_firmware
2048 |@@@ 10 bge_chip_reset
bge_reset
bge_restart
bge_chip_factotum
av_dispatch_softvect
dispatch_softint
switch_sp_and_call
-------------------------------------------------------------------------------
Count indv cuml rcnt nsec Hottest CPU+PIL Caller
2 2% 100% 0.00 1023 cpu[0]+4 drv_usecwait
nsec ------ Time Distribution ------ count Stack
1024 |@@@@@@@@@@@@@@@ 1 bge_chip_poll_engine
2048 |@@@@@@@@@@@@@@@ 1 bge_chip_enable_engine
bge_chip_start
bge_start
bge_restart
bge_chip_factotum
av_dispatch_softvect
dispatch_softint
switch_sp_and_call
-------------------------------------------------------------------------------
And then it went dead ...
Hopefully this gives the whole picture? From the second run it looks like nearly all of the drv_usecwait() time at cpu[0]+4 comes from bge_poll_firmware() during bge_chip_reset(), dispatched out of the bge_chip_factotum soft interrupt.
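If it helps, the reset loop could also be confirmed directly. A rough sketch (assuming fbt probes exist for bge_chip_reset in the bge module) that shows how often the factotum resets the chip and how long each reset busy-waits:
# dtrace -n 'fbt:bge:bge_chip_reset:entry { self->ts = timestamp; }
fbt:bge:bge_chip_reset:return /self->ts/ { @["bge_chip_reset ns"] = quantize(timestamp - self->ts); self->ts = 0; }'
Each entry/return pair is timed and the durations land in a power-of-two histogram.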