> > Ok, for system cpu time usage: try to run a kernel
> > profile, to find out what kernel functions are consuming
> > the time, lockstat -kIW -D 20 sleep 15
>
> I did one on the machine, and then quickly an ssh and
> another one in ssh for the screenshot:
>
> # lockstat -kIW -D 20 sleep 15
>
> Profiling interrupt: 3074 events in 15.841 seconds
> (194 events/sec)
>
> Count indv cuml rcnt nsec Hottest CPU+PIL Caller
> -------------------------------------------------------------------------------
> 2430 79% 79% 0.00 2682 cpu[0] i86_mwait
> 279 9% 88% 0.00 1364 cpu[0]+4 tsc_read
> 113 4% 92% 0.00 554980 cpu[0]+4 ddi_mem_get32
> 103 3% 95% 0.00 1437 cpu[0]+4 tsc_gethrtime
> 53 2% 97% 0.00 1369 cpu[0]+4 mul32
> 35 1% 98% 0.00 1337 cpu[0]+4 gethrtime
> 28 1% 99% 0.00 1379 cpu[0]+4 drv_usecwait
> ...
> and 10 seconds later it was completely dead.
>
> Does this help, or do you need another one?
Hmm, the 79% in i86_mwait() should be 79% idle time.
The remaining ~20% of cpu time is spent accessing some
memory mapped registers (ddi_mem_get32) and reading the
cpu's time stamp counter (tsc), all on CPU #0 at priority
level 4 ("cpu[0]+4"). It looks like the kernel is busy
waiting at that priority level, using drv_usecwait().
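Since all of those samples are at priority level 4, it might
also help to check which driver's interrupt handler runs at
that level; two ways that may work on an x86 box (assuming
mdb -k and intrstat are usable there):

echo "::interrupts" | mdb -k

intrstat 5

The mdb output should list the IPL and handler/driver for each
interrupt vector; intrstat shows how much time each device's
interrupt handler is consuming.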
If you repeat that lockstat run, does the result look similar:
~20% cpu usage at "cpu[0]+4", in tsc_read(), ddi_mem_get32(),
tsc_gethrtime(), ... drv_usecwait()?
Maybe we can find out who's calling drv_usecwait(),
using:
lockstat -kIW -f drv_usecwait -s 10 sleep 15
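The -s 10 should record 10-deep call stacks for the profile
hits inside drv_usecwait(), so we'd see the callers. If that
doesn't narrow it down, an alternative sketch (assuming the
DTrace fbt provider works on that machine) would be to count
the kernel stacks calling drv_usecwait() directly:

dtrace -n 'fbt::drv_usecwait:entry { @[stack()] = count(); }'

Let it run for a few seconds, stop it with Ctrl-C, and the
stacks with the highest counts should point at the driver
doing the busy waiting.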