Hi Gino,

Could you log in as root and run
    # mdb -k

At mdb prompt, type
    > ::cpuinfo

One of the CPUs is probably running sched.  Take the value under the
"THREAD" heading for that CPU, and do

    > put-your-thread-value-here::findstack -v

For example,

    > fffffffed378a760::findstack -v

Please let me know what the stack trace says.  Could you also send me
the output of "psrinfo -vp"?
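Putting the steps above together, a session would look roughly like this
(the thread address is only an illustration -- substitute whatever
::cpuinfo reports for the busy CPU, and $q quits mdb):

    # mdb -k
    > ::cpuinfo
    > fffffffed378a760::findstack -v
    > $q
    # psrinfo -vp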

The symptom you are seeing is similar to that of
    6563696 cpu idle loop takes up 100% system time on a freshly booted
            wolf blade
but I won't know for sure until I have more information.

Sherry

On Thu, Jul 12, 2007 at 08:18:16AM -0700, Gino wrote:
> Hi All,
> 
> snv67, 4xOpteron.
> We have some performance problems on this server. 
> In particular mpstat is reporting CPU#1 at 93% sys.
> Is there a way to find out what that cpu is doing? 
> 
> svr11@/test# lockstat -kIW -D 20 sleep 30
> 
> Profiling interrupt: 11648 events in 30.026 seconds (388 events/sec)
> 
> Count indv cuml rcnt     nsec Hottest CPU+PIL        Caller
> -------------------------------------------------------------------------------
>  5710  49%  49% 0.00      855 cpu[0]                 mach_cpu_idle
>  1614  14%  63% 0.00     4278 cpu[3]                 (usermode)
>   862   7%  70% 0.00     1072 cpu[1]                 do_splx
>   858   7%  78% 0.00     1126 cpu[1]+11              splr
>   267   2%  80% 0.00      767 cpu[1]                 disp_getwork
>   204   2%  82% 0.00     3446 cpu[3]                 vmu_calculate_seg
>   195   2%  83% 0.00     3331 cpu[3]                 page_exists
>   192   2%  85% 0.00     2967 cpu[3]                 i_mod_hash_find_nosync
>   157   1%  86% 0.00      768 cpu[1]                 swtch
>   147   1%  88% 0.00      754 cpu[1]                 new_cpu_mstate
>   135   1%  89% 0.00      767 cpu[1]                 disp
>    88   1%  90% 0.00      754 cpu[1]                 tsc_gethrtimeunscaled
>    70   1%  90% 0.00     4373 cpu[3]                 lzjb_compress
>    67   1%  91% 0.00     4059 cpu[3]                 mutex_enter
>    64   1%  91% 0.00      797 cpu[1]+11              lock_set_spl
>    60   1%  92% 0.00      820 cpu[1]+11              disp_lock_exit
>    49   0%  92% 0.00      742 cpu[1]                 idle_enter
>    35   0%  92% 0.00      733 cpu[1]                 idle_exit
>    32   0%  93% 0.00      989 cpu[1]                 disp_lock_enter
>    26   0%  93% 0.00     4419 cpu[3]                 fsflush_do_pages
> -------------------------------------------------------------------------------
> svr11@/test# mpstat 1
> CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
>   0    0   0    0  2224 1220    0    0    0    2    0     0    0   1   0  99
>   1   13   0    0   283  278   21    0    0    2    0    59    0  93   0   7
>   2    0   0    0    84   72   15    0    0    2    0    13    0   0   0 100
>   3 1389   0   10   872    4 1201  312    0   12    0  6658   22  77   0   1
> CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
>   0    0   0    0  2455 1375    0    0    0   13    0     0    0   1   0  99
>   1 3064   0  221   141  122  334   18   13    7    0  5614   11  84   0   5
>   2    0   0    0   286  198   14    0    0    0    0     5    0   0   0 100
>   3  102   0   18   627    2 1579  541    0   62    0 279482   28  66   0   7
> CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
>   0    0   0    0  2349 1305    0    0    0    7    0     0    0   1   0  99
>   1   28   0    9    46    3  108    0    0    0    0   487    0  93   0   7
>   2  297   0   27   288  235  226    2    1    2    0   850    1   1   0  98
>   3 9107   0   99   270    4  996  280    0   17    0 108063   61  38   0   1
> 
> tnx,
> gino
> This message posted from opensolaris.org
> _______________________________________________
> opensolaris-discuss mailing list
> opensolaris-discuss@opensolaris.org

-- 
Sherry Moore, Solaris Kernel Development        http://blogs.sun.com/sherrym