I'm seeing the following libmicro output for mprotect, which looks
fishy to me.  It's interesting to note that libmicro ends up discarding
all of the results > 100 usecs, leading it to conclude that this system
call takes 1 usec per call.  On top of that, there's a very odd
distribution of times, which seems out of whack.  Any ideas what is
going on here?
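
FWIW, the histogram below splits almost exactly in half: 101 samples
near 1 usec and 100 samples near 104 usecs.  That puts the median right
on the edge of the ~100 usec gap, so dropping a single sample (or an
off-by-one in the median index) can move the reported median from ~103
usecs to ~1 usec.  A quick standalone sketch of the knife-edge, not
libmicro's actual code:

/*
 * Hypothetical illustration, not libmicro source: with the sample
 * split 101 low / 100 high, the middle of the sorted array lands
 * right at the boundary between the two clusters.
 */
#include <stdio.h>
#include <stdlib.h>

static int
cmp(const void *a, const void *b)
{
        double d = *(const double *)a - *(const double *)b;

        return (d < 0 ? -1 : (d > 0 ? 1 : 0));
}

int
main(void)
{
        double s[201];
        int i, n = 201;

        for (i = 0; i < 101; i++)       /* low cluster, ~1 usec */
                s[i] = 0.9 + 0.001 * i;
        for (; i < n; i++)              /* high cluster, ~103+ usecs */
                s[i] = 103.4 + 0.05 * (i - 101);

        qsort(s, n, sizeof (double), cmp);

        (void) printf("s[n/2 - 1] = %8.3f\n", s[n / 2 - 1]); /* ~0.999 */
        (void) printf("s[n/2]     = %8.3f\n", s[n / 2]);     /* ~1.000 */
        (void) printf("s[n/2 + 1] = %8.3f\n", s[n / 2 + 1]); /* ~103.4 */
        return (0);
}

Depending on which of those indices a median routine picks, and on
whether a sample has been dropped first, you get either ~1 or ~103;
that's exactly the flip between the raw and outliers-removed medians
below.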

(I'm playing with libmicro runs to demonstrate the difference between
DEBUG and non-DEBUG kernels.)

        -dp

# bin/mprotect -E -C 200 -L -S -W -N mprot_tw128k -l 128k -I 2000 -w -t -f /dev/zero
             prc thr   usecs/call      samples   errors cnt/samp     size flags
mprot_tw128k   1   1      1.03400          201        0        1   131072 --w-t
#
# STATISTICS         usecs/call (raw)          usecs/call (outliers removed)
#                    min      0.84400                 0.84400
#                    max    229.14700               140.36000
#                   mean     54.06056                53.18948
#                 median    103.44900                 1.03400
#                 stddev     54.06297                52.75775
#         standard error      3.80386                 3.72124
#   99% confidence level      8.84777                 8.65562
#                   skew      0.14148                 0.02044
#               kurtosis     -1.54415                -1.99042
#       time correlation     -0.01335                -0.02110
#
#           elapsed time      0.01238
#      number of samples          201
#     number of outliers            1
#      getnsecs overhead          170
#
# DISTRIBUTION
#             counts   usecs/call                                         means
#                101      0.00000 |********************************     0.90348
#                  0      4.00000 |                                           -
#                  0      8.00000 |                                           -
#                  0     12.00000 |                                           -
#                  0     16.00000 |                                           -
#                  0     20.00000 |                                           -
#                  0     24.00000 |                                           -
#                  0     28.00000 |                                           -
#                  0     32.00000 |                                           -
#                  0     36.00000 |                                           -
#                  0     40.00000 |                                           -
#                  0     44.00000 |                                           -
#                  0     48.00000 |                                           -
#                  0     52.00000 |                                           -
#                  0     56.00000 |                                           -
#                  0     60.00000 |                                           -
#                  0     64.00000 |                                           -
#                  0     68.00000 |                                           -
#                  0     72.00000 |                                           -
#                  0     76.00000 |                                           -
#                  0     80.00000 |                                           -
#                  0     84.00000 |                                           -
#                  0     88.00000 |                                           -
#                  0     92.00000 |                                           -
#                  0     96.00000 |                                           -
#                 22    100.00000 |******                             103.80795
#                 67    104.00000 |*********************              105.83069
#
#                 11        > 95% |***                                111.40036
#
#        mean of 95%     49.81938
#          95th %ile    106.38900
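
For reference, here's roughly what I believe the timed operation boils
down to, reconstructed from the command line above.  This is a
standalone guess, not libmicro's actual inner loop; in particular, the
iteration count and the write between calls (my reading of -w) are
assumptions:

/*
 * Rough reconstruction of the timed operation: a 128K private
 * mapping of /dev/zero whose protection gets flipped per call.
 * Guesswork from the command line, not libmicro source.
 */
#include <sys/mman.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define LENGTH  (128 * 1024)    /* matches -l 128k (131072 bytes) */

int
main(void)
{
        char *addr;
        int fd, i;

        if ((fd = open("/dev/zero", O_RDWR)) == -1) {
                perror("open");
                return (1);
        }
        addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
            MAP_PRIVATE, fd, 0);
        if (addr == MAP_FAILED) {
                perror("mmap");
                return (1);
        }

        for (i = 0; i < 1000; i++) {    /* arbitrary count, for illustration */
                /* the timed calls: flip protection on the whole 128K */
                if (mprotect(addr, LENGTH, PROT_READ) == -1 ||
                    mprotect(addr, LENGTH, PROT_READ | PROT_WRITE) == -1) {
                        perror("mprotect");
                        return (1);
                }
                addr[0] = 1;    /* dirty the mapping between calls (-w?) */
        }

        (void) munmap(addr, LENGTH);
        (void) close(fd);
        return (0);
}

(If the harness alternates a cheap protection transition with an
expensive one, that would also neatly explain the almost exact 101/100
split in the histogram, but that's speculation on my part.)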



-- 
Daniel Price - Solaris Kernel Engineering - [EMAIL PROTECTED] - blogs.sun.com/dp