Re: [dtrace-discuss] fbt probes creation issue

2008-10-23 Thread Aubrey Li
On Thu, Oct 23, 2008 at 10:39 PM, Adam Leventhal <[EMAIL PROTECTED]> wrote:
> On Oct 23, 2008, at 7:22 AM, Aubrey Li wrote:
>>
>> But how are the fbt probes created for the modules that are loaded
>> before the fbt module is loaded?
>
> It happens in the call to dtrace_open() which calls dtrace_probe_provide().
> This iterates over all providers and calls its dtps_provide_module callback
> for every module.
>
Thanks, that exactly answered my question.
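
For the archive, the flow Adam describes can be sketched in pseudocode
(the names mirror the kernel routines, but the loop bodies here are
simplified assumptions, not the actual implementation):

```c
/* Pseudocode sketch of dtrace_open() -> dtrace_probe_provide(). */
static void
dtrace_probe_provide(void)
{
	dtrace_provider_t *prv;
	struct modctl *ctl;

	/* Walk every registered provider (fbt, sdt, lockstat, ...). */
	for (prv = dtrace_provider; prv != NULL; prv = prv->dtpv_next) {
		/*
		 * Ask the provider to create probes for every module that
		 * is already loaded -- this is how fbt picks up modules
		 * that were loaded before fbt itself.
		 */
		for (ctl = first_module(); ctl != NULL; ctl = ctl->mod_next)
			prv->dtpv_pops.dtps_provide_module(prv->dtpv_arg, ctl);
	}
}
```

first_module() here is a stand-in for the kernel's module-list traversal.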

Regards,
-Aubrey
___
dtrace-discuss mailing list
dtrace-discuss@opensolaris.org


[dtrace-discuss] fbt probes creation issue

2008-10-23 Thread Aubrey Li
Hi list,

From the fbt driver code, it's easy to see that fbt probes are created
after the fbt module is loaded: fbt_provide_module is called whenever a
module is loaded, and it creates the fbt probes for all of the module's
functions.

But how are the fbt probes created for the modules that are loaded before
the fbt module is loaded?

There must be some magic I've missed.
Could you please shed some light on this?

Thanks,
-Aubrey


Re: [dtrace-discuss] libdtrace and xcalls

2008-05-19 Thread Aubrey Li
On Tue, May 20, 2008 at 12:26 AM, Bryan Cantrill <[EMAIL PROTECTED]> wrote:
>
>> I've got an app that periodically runs dtrace scripts through libdtrace
>> and am seeing a good amount of xcalls from that.
>>
>> I understand why dtrace uses xcalls, so this is an expected behavior to
>> a certain extent. But because my app repeats the scripts on an interval
>> basis, the amount of xcalls is higher than I'd like it to be.
>>
>> Is there anything I can do to reduce that ?
>
> Yes:  you can slow down your various rates (switchrate, aggrate, etc.)
> But the xcalls are probably more cosmetic than an actual problem:  DTrace's
> xcalls execute very quickly, and are unlikely to be causing system-wide
> degradation.  (This is true of essentially all xcalls -- xcalls are more
> likely to be a symptom of a larger problem than a problem in and of
> themselves.)  Which raises a question:  how many xcalls are you seeing,
> and have you used DTrace to make sure that they're all coming from DTrace?
>

I'm not sure what number of xcalls counts as higher than you expect.
You could find the root cause with:
# dtrace -n 'xcalls { @[stack()] = count(); }'
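
And if the xcalls do turn out to come from DTrace itself, Bryan's advice
about rates can be applied directly; an illustrative D fragment (the rate
values are arbitrary examples, not recommendations):

```d
/*
 * Lower how often principal buffers are switched and aggregations are
 * snapshotted; both operations are a common source of DTrace-induced
 * cross-calls.
 */
#pragma D option switchrate=1hz
#pragma D option aggrate=1hz

sysinfo:::xcalls
{
	/* attribute each cross-call to the kernel stack that issued it */
	@[stack()] = count();
}
```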

Thanks,
-Aubrey


Re: [dtrace-discuss] LatencyTop

2008-01-21 Thread Aubrey Li
On Jan 19, 2008 9:04 AM, Colin Burgess <[EMAIL PROTECTED]> wrote:
> Well I see that Brendan did reply to the OSNews link to this.  He basically
> shot them down at hardcoding the instrumentation - as he should have! :-)
>
> Shame on Intel - they should know better!
>
> Colin
>
I'm not a member of the Linux LatencyTOP team,
and I haven't had a chance to see how the tool is implemented.
But I totally disagree with you that it's a shame.
Regardless of the internal implementation, it's interesting and can be very useful.
At least for end users, it helps to visualize system latencies.

-Aubrey
Intel OpenSolaris Team


Re: [dtrace-discuss] LatencyTop

2008-01-21 Thread Aubrey Li
On Jan 19, 2008 10:30 AM, Roman Shaposhnik <[EMAIL PROTECTED]> wrote:
> On Fri, 2008-01-18 at 16:33 -0500, Colin Burgess wrote:
> > I see Intel has released a new tool.  Oh, it requires some patches to
> > the kernel to record
> > latency times.  Good thing people don't mind patching their kernels, eh?
> >
> > So who can write the equivalent latencytop.d the fastest? ;-)
> >
> > http://www.latencytop.org/
>
> What I find interesting about these projects that Intel
> spawns for Linux (PowerTOP, LatencyTOP and couple of others)
> is that regardless of internal implementation they are
> very useful end user tools. Here at Sun we seem to be
> missing interest in creating such things. Which is a bit of a
> shame. They are ideal vehicles for disseminating DTrace
> knowledge and exposing neophytes to the raw power of DTrace.
>
> To be fair Greg's DTrace toolkit helps in that respect, but
> still it sets the bar pretty high for anybody who would
> like to use it.
>
> It is easy to poke fun at LatencyTOP, but asking the right
> question could sometimes be even more important than
> being able to deliver the answer.
>
> Just my 2c.
>
> Thanks,
> Roman.
>
> P.S. I was able to extend the battery time of my Linux laptop
> 1.5x using PowerTOP. Can the same thing be done with DTrace?
> Perhaps it can, but I don't think I can code it up.

Solaris PowerTOP is almost done.
http://www.opensolaris.org/os/project/tesla/Work/Powertop/

-Aubrey
Intel OpenSolaris Team


Re: [dtrace-discuss] dtrace wired issue

2008-01-15 Thread Aubrey Li
On Jan 15, 2008 4:43 PM, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Aubrey Li wrote:
> > On Jan 15, 2008 2:45 PM, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> >
> >> Hi Aubrey,
> >>
> >>> ---snip--
> >>> setrlimit(RLIMIT_NOFILE, 0xFD7FFFDFEE80)= 0
> >> openat(AT_FDCWD, "/dev/dtrace/provider", O_RDONLY|O_NDELAY|O_LARGEFILE) = 3
> >>> fcntl(3, F_SETFD, 0x0001)   = 0
> >>> fstat(3, 0xFD7FFFDFE8E0)= 0
> >>> getdents(3, 0xFD7FFF174000, 8192) (sleeping...)
> >>> ---here, dtrace sleep almost 4 minutes
> >>> ...and continue...
> >>>
> >>>
> >>>
> >> How long does an ls -l /dev/dtrace/provider take?
> >>
> >
> > Yes, it costs almost 4 minutes.
> >
> >
> So, what does ls -l /dev/dtrace/provider show?
> max
>
>

# ls -l /dev/dtrace/provider
total 14
lrwxrwxrwx   1 root root  43 Sep  3 01:21 fasttrap -> ../../../devices/pseudo/[EMAIL PROTECTED]:fasttrap
lrwxrwxrwx   1 root root  33 Sep  3 01:21 fbt -> ../../../devices/pseudo/[EMAIL PROTECTED]:fbt
lrwxrwxrwx   1 root root  43 Sep  3 01:21 lockstat -> ../../../devices/pseudo/[EMAIL PROTECTED]:lockstat
lrwxrwxrwx   1 root root  49 Sep  3 01:21 lx_systrace -> ../../../devices/pseudo/[EMAIL PROTECTED]:lx_systrace
lrwxrwxrwx   1 root root  41 Sep  3 01:21 profile -> ../../../devices/pseudo/[EMAIL PROTECTED]:profile
lrwxrwxrwx   1 root root  33 Sep  3 01:21 sdt -> ../../../devices/pseudo/[EMAIL PROTECTED]:sdt
lrwxrwxrwx   1 root root  43 Sep  3 01:21 systrace -> ../../../devices/pseudo/[EMAIL PROTECTED]:systrace

Err, now I've got something interesting.
It looks like this weird issue is caused by the ata driver bug
which I reported on the opensolaris-bug mailing list.

When I switch the system to console mode (X is disabled), I get:
# dtrace -l
snip
scsi: WARNING: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],2/[EMAIL PROTECTED] (ata1):
   timeout: abort request, target=0 lun=0
scsi: WARNING: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],2/[EMAIL PROTECTED] (ata1):
   timeout: abort device, target=0 lun=0
scsi: WARNING: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],2/[EMAIL PROTECTED] (ata1):
   timeout: reset target, target=0 lun=0
scsi: WARNING: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],2/[EMAIL PROTECTED] (ata1):
   timeout: reset bus, target=0 lun=0
 ID   PROVIDER   MODULE    FUNCTION         NAME
  1   dtrace                                BEGIN
  2   dtrace                                END
  3   dtrace                                ERROR
  4   lockstat   genunix   mutex_enter      adaptive-acquire
  5   lockstat   genunix   mutex_enter      adaptive-block
  6   lockstat   genunix   mutex_enter      adaptive-spin
  7   lockstat   genunix   mutex_exit       adaptive-release
  8   lockstat   genunix   mutex_destroy    adaptive-release
  9   lockstat   genunix   mutex_tryenter   adaptive-acquire
 snip

-Aubrey


Re: [dtrace-discuss] dtrace wired issue

2008-01-15 Thread Aubrey Li
On Jan 15, 2008 2:45 PM, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Hi Aubrey,
>> ---snip--
> > setrlimit(RLIMIT_NOFILE, 0xFD7FFFDFEE80)= 0
> > openat(AT_FDCWD, "/dev/dtrace/provider", O_RDONLY|O_NDELAY|O_LARGEFILE) = 3
> > fcntl(3, F_SETFD, 0x0001)   = 0
> > fstat(3, 0xFD7FFFDFE8E0)= 0
> > getdents(3, 0xFD7FFF174000, 8192) (sleeping...)
> > ---here, dtrace sleep almost 4 minutes
> > ...and continue...
> >
> >
> How long does an ls -l /dev/dtrace/provider take?

Yes, it takes almost 4 minutes.

> I doubt you are swapping with 2G of memory.
> You can run vmstat -S 2  to see if there is any swapping.  I'll look a
> bit more...

# vmstat -S 2
 kthr  memorypagedisk  faults  cpu
 r b w   swap  free  si  so pi po fr de sr cd -- -- --   in   sy   cs us sy id
 0 0 0 1601116 1352560 0  0 717 2 53  0 2502 71 0 0  0  718 7222 2813  4  3 93
 0 0 0 1516012 1236776 0  0  0  0  0  0  0  0  0  0  0  388  269  347  0  0 99
 0 0 0 1516012 1236808 0  0  0  0  0  0  0  0  0  0  0  394  262  354  0  0 100
 0 0 0 1516012 1236808 0  0  0  0  0  0  0  0  0  0  0  392  255  356  0  0 100
 0 0 0 1516016 1236812 0  0  0  0  0  0  0 23  0  0  0  425  380  396  1  0 99
 0 0 0 1516016 1236812 0  0  0  0  0  0  0  0  0  0  0  389  294  332  0  0 100
 0 0 0 1516016 1236812 0  0  0  0  0  0  0  0  0  0  0  382  466  444  0  0 99
 0 0 0 1516016 1236812 0  0  0  0  0  0  0  0  0  0  0  394  273  346  0  0 100
 0 0 0 1516016 1236812 0  0  0  0  0  0  0  0  0  0  0  387  250  328  0  0 99
 0 0 0 1516016 1236812 0  0  0  0  0  0  0  0  0  0  0  384  264  339  0  0 100

> By the way, are you running on Indiana (just curious...).
>

No, I'm running Nevada.

[EMAIL PROTECTED] /etc/release
   Solaris Express Community Edition snv_77 X86
   Copyright 2007 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
   Assembled 06 November 2007

Thanks,
-Aubrey


Re: [dtrace-discuss] dtrace wired issue

2008-01-14 Thread Aubrey Li
Sorry for the delay (time difference). Now I have more details.
# truss dtrace -l
snip
mmap(0x, 4096, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON, 4294967295, 0) = 0xFD7FFF18
mmap(0x0001, 24576, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON|MAP_ALIGN, -1, 0) = 0xFD7FFF17
munmap(0xFD7FFF38, 32768)   = 0
getcontext(0xFD7FFFDFE9D0)
mmap(0x0001, 65536, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON|MAP_ALIGN, 4294967295, 0) = 0xFD7FFF15
getrlimit(RLIMIT_STACK, 0xFD7FFFDFED30) = 0
getpid()= 923 [922]
lwp_private(0, 0, 0xFD7FFF150200)   = 0x
setustack(0xFD7FFF1502A8)
sysconfig(_CONFIG_PAGESIZE) = 4096
sigfillset(0xFD7FFF006880)  = 0
brk(0x0041B580) = 0
brk(0x0041F580) = 0
getrlimit(RLIMIT_NOFILE, 0xFD7FFFDFEE80)= 0
setrlimit(RLIMIT_NOFILE, 0xFD7FFFDFEE80)= 0
openat(AT_FDCWD, "/dev/dtrace/provider", O_RDONLY|O_NDELAY|O_LARGEFILE) = 3
fcntl(3, F_SETFD, 0x0001)   = 0
fstat(3, 0xFD7FFFDFE8E0)= 0
getdents(3, 0xFD7FFF174000, 8192) (sleeping...)
---here, dtrace sleeps for almost 4 minutes
...and continue...


Any thoughts?

Thanks,
-Aubrey


Re: [dtrace-discuss] dtrace wired issue

2008-01-14 Thread Aubrey Li
On Jan 15, 2008 12:44 AM, Adam Leventhal <[EMAIL PROTECTED]> wrote:
> On Mon, Jan 14, 2008 at 12:52:55PM +, Sean McGrath - Sun Microsystems 
> Ireland wrote:
> > Aubrey Li stated:
> > < Every first time to run dtrace command after the system boot up,
> > < It takes a very long time to get response.
> > < But the second time is OK, as follows:
> > <
> > < # time dtrace -l > /dev/null
> > <
> > < real4m8.011s
> > < user0m0.116s
> > < sys 0m2.420s
> >
> >   This first time is probably when the kernel is loading the dtrace modules.
> >   Though still seems slow, 4 minutes.
> > What kind of system (cpu speed etc) is the machine ?
>
> The first time DTrace is loaded it needs to uncompress the CTF (type)
> information for all of the currently loaded kernel modules which can take
> quite a while. Four minutes does seem a bit excessive. Are you seeing any
> swapping?

How do I check for swapping?

-Aubrey


Re: [dtrace-discuss] dtrace wired issue

2008-01-14 Thread Aubrey Li
On Jan 14, 2008 10:46 PM, Bryan Cantrill <[EMAIL PROTECTED]> wrote:
>
> On Mon, Jan 14, 2008 at 09:16:34PM +0800, Aubrey Li wrote:
> > On Jan 14, 2008 8:52 PM, Sean McGrath - Sun Microsystems Ireland
> > <[EMAIL PROTECTED]> wrote:
> > > Aubrey Li stated:
> > > < Every first time to run dtrace command after the system boot up,
> > > < It takes a very long time to get response.
> > > < But the second time is OK, as follows:
> > > <
> > > < # time dtrace -l > /dev/null
> > > <
> > > < real4m8.011s
> > > < user0m0.116s
> > > < sys 0m2.420s
> > >
> > >   This first time is probably when the kernel is loading the dtrace 
> > > modules.
> > >   Though still seems slow, 4 minutes.
> > > What kind of system (cpu speed etc) is the machine ?
> >
> > # psrinfo -vp
> > The physical processor has 2 virtual processors (0 1)
> >   x86 (GenuineIntel 10674 family 6 model 23 step 4 clock 2400 MHz)
> > Intel(r) CPU  @ 2.40GHz
> >
> > So, I failed to understand the modules loading needs 4 minutes.
>
> Yes, this is definitely fishy.  Is this a highly memory constrained system?

# prtconf -vp | grep Mem
Memory size: 2023 Megabytes

> If you "modunload -i 0" enough times to get dtrace(7D) unloaded (that
> is, "dtrace" doesn't appear in modinfo), does it again take 4 minutes?
> As you can imagine, it's a little tough to investigate this problem because
> we can't use DTrace to do it! ;)

This doesn't work on my side: modinfo always shows "dtrace".
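
For reference, Bryan's modunload suggestion amounts to something like the
following (a sketch; "-i 0" asks the kernel to unload every unloadable
module, and several passes may be needed because modules hold references
to each other):

```
# repeat a few times; busy modules will simply refuse to unload
modunload -i 0
modunload -i 0
modunload -i 0
modinfo | grep -w dtrace    # no output would mean dtrace(7D) is unloaded
```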

>
> > > < # time dtrace -l > /dev/null
> > > <
> > > < real0m0.632s
> > > < user0m0.075s
> > > < sys 0m0.553s
>
> And 600+ milliseconds is still a long time.  How many probes are we talking
> about here?

# dtrace -l | wc -l
   52604

-Aubrey


Re: [dtrace-discuss] dtrace wired issue

2008-01-14 Thread Aubrey Li
On Jan 14, 2008 9:38 PM, James C. McPherson <[EMAIL PROTECTED]> wrote:
> Aubrey Li wrote:
> > On Jan 14, 2008 9:26 PM, Aubrey Li <[EMAIL PROTECTED]> wrote:
> >> On Jan 14, 2008 9:22 PM, James C. McPherson <[EMAIL PROTECTED]> wrote:
> 
> >>> If you run "dtrace -l" with no args, *every* single loadable
> >>> module on the system will be loaded, interrogated by dtrace
> >>> and then unloaded if possible.
> >>>
> >>> All those attach()es and detach()es need time, as does the
> >>> probe collation.
> >>>
> >> So may I ask, how long "dtrace -l" get response on your system?
> >> And how fast the cpu speed on your system?
> >>
> >> 4 minutes, it is absolutely acceptable for me.
> >>
> > err, sorry, I mean unacceptable, ;-)
>
> (figured that :)
>
> *** On a u20m2
>
> $ uname -a
> SunOS farnarkle 5.11 snv_77 i86pc i386 i86pc
> $ su root -c 'time dtrace -l > /dev/null'
> Password:
>
> real2.2
> user0.1
> sys 1.0
> $ psrinfo -v
> Status of virtual processor 0 as of: 01/14/2008 23:32:17
>on-line since 01/08/2008 09:11:53.
>The i386 processor operates at 2200 MHz,
>  and has an i387 compatible floating point processor.
> Status of virtual processor 1 as of: 01/14/2008 23:32:17
>on-line since 01/08/2008 09:11:56.
>The i386 processor operates at 2200 MHz,
>  and has an i387 compatible floating point processor.
>
>
>
> *** on a u20
>
> # time dtrace -l > /dev/null
>
> real0m0.219s
> user0m0.063s
> sys 0m0.089s
> # psrinfo -v
> Status of virtual processor 0 as of: 01/14/2008 23:33:12
>on-line since 01/14/2008 22:43:46.
>The i386 processor operates at 2412 MHz,
>  and has an i387 compatible floating point processor.
> Status of virtual processor 1 as of: 01/14/2008 23:33:12
>on-line since 01/14/2008 22:43:50.
>The i386 processor operates at 2412 MHz,
>  and has an i387 compatible floating point processor.
>
> # uname -a
> SunOS crashburn 5.11 snv_80 i86pc i386 i86pc
>
>
>
>
>
> *** on a u60
>
> # time dtrace -l > /dev/null
>
> real4.4
> user0.9
> sys 3.2
> # psrinfo -v
> Status of virtual processor 0 as of: 01/14/2008 23:47:33
>on-line since 09/14/2007 21:44:39.
>The sparcv9 processor operates at 296 MHz,
>  and has a sparcv9 floating point processor.
> Status of virtual processor 2 as of: 01/14/2008 23:47:33
>on-line since 09/14/2007 21:44:41.
>The sparcv9 processor operates at 296 MHz,
>  and has a sparcv9 floating point processor.
>
> # uname -a
> SunOS synergise 5.11 snv_71 sun4u sparc SUNW,Ultra-60
>
>
>
>
> You seem to be convinced that cpu is the limiting
> factor. Why?

No, what I mean is: if your CPU is no faster than mine
but your dtrace responds faster than mine, then your explanation
snip
All those attach()es and detach()es need time, as does the
probe collation.
snip

is not the reason.

So there must be something wrong with my system,
but I can't figure out what so far.

Thanks,
-Aubrey


Re: [dtrace-discuss] dtrace wired issue

2008-01-14 Thread Aubrey Li
On Jan 14, 2008 9:26 PM, Aubrey Li <[EMAIL PROTECTED]> wrote:
>
> On Jan 14, 2008 9:22 PM, James C. McPherson <[EMAIL PROTECTED]> wrote:
> > Aubrey Li wrote:
> > > On Jan 14, 2008 8:52 PM, Sean McGrath - Sun Microsystems Ireland
> > > <[EMAIL PROTECTED]> wrote:
> > >> Aubrey Li stated:
> > >> < Every first time to run dtrace command after the system boot up,
> > >> < It takes a very long time to get response.
> > >> < But the second time is OK, as follows:
> > >> <
> > >> < # time dtrace -l > /dev/null
> > >> <
> > >> < real4m8.011s
> > >> < user0m0.116s
> > >> < sys 0m2.420s
> > >>
> > >>   This first time is probably when the kernel is loading the dtrace 
> > >> modules.
> > >>   Though still seems slow, 4 minutes.
> > >> What kind of system (cpu speed etc) is the machine ?
> > >
> > > # psrinfo -vp
> > > The physical processor has 2 virtual processors (0 1)
> > >   x86 (GenuineIntel 10674 family 6 model 23 step 4 clock 2400 MHz)
> > > Intel(r) CPU  @ 2.40GHz
> > >
> > > So, I failed to understand the modules loading needs 4 minutes.
> >
> >
> > If you run "dtrace -l" with no args, *every* single loadable
> > module on the system will be loaded, interrogated by dtrace
> > and then unloaded if possible.
> >
> > All those attach()es and detach()es need time, as does the
> > probe collation.
> >
> So may I ask, how long "dtrace -l" get response on your system?
> And how fast the cpu speed on your system?
>
> 4 minutes, it is absolutely acceptable for me.
>
err, sorry, I meant unacceptable, ;-)

-Aubrey


Re: [dtrace-discuss] dtrace wired issue

2008-01-14 Thread Aubrey Li
On Jan 14, 2008 9:22 PM, James C. McPherson <[EMAIL PROTECTED]> wrote:
> Aubrey Li wrote:
> > On Jan 14, 2008 8:52 PM, Sean McGrath - Sun Microsystems Ireland
> > <[EMAIL PROTECTED]> wrote:
> >> Aubrey Li stated:
> >> < Every first time to run dtrace command after the system boot up,
> >> < It takes a very long time to get response.
> >> < But the second time is OK, as follows:
> >> <
> >> < # time dtrace -l > /dev/null
> >> <
> >> < real4m8.011s
> >> < user0m0.116s
> >> < sys 0m2.420s
> >>
> >>   This first time is probably when the kernel is loading the dtrace 
> >> modules.
> >>   Though still seems slow, 4 minutes.
> >> What kind of system (cpu speed etc) is the machine ?
> >
> > # psrinfo -vp
> > The physical processor has 2 virtual processors (0 1)
> >   x86 (GenuineIntel 10674 family 6 model 23 step 4 clock 2400 MHz)
> > Intel(r) CPU  @ 2.40GHz
> >
> > So, I failed to understand the modules loading needs 4 minutes.
>
>
> If you run "dtrace -l" with no args, *every* single loadable
> module on the system will be loaded, interrogated by dtrace
> and then unloaded if possible.
>
> All those attach()es and detach()es need time, as does the
> probe collation.
>
So may I ask, how long does "dtrace -l" take to respond on your system?
And how fast is the CPU on your system?

4 minutes, it is absolutely acceptable for me.

Thanks,
-Aubrey


Re: [dtrace-discuss] dtrace wired issue

2008-01-14 Thread Aubrey Li
On Jan 14, 2008 8:52 PM, Sean McGrath - Sun Microsystems Ireland
<[EMAIL PROTECTED]> wrote:
> Aubrey Li stated:
> < Every first time to run dtrace command after the system boot up,
> < It takes a very long time to get response.
> < But the second time is OK, as follows:
> <
> < # time dtrace -l > /dev/null
> <
> < real4m8.011s
> < user0m0.116s
> < sys 0m2.420s
>
>   This first time is probably when the kernel is loading the dtrace modules.
>   Though still seems slow, 4 minutes.
> What kind of system (cpu speed etc) is the machine ?

# psrinfo -vp
The physical processor has 2 virtual processors (0 1)
  x86 (GenuineIntel 10674 family 6 model 23 step 4 clock 2400 MHz)
Intel(r) CPU  @ 2.40GHz

So I don't understand why loading the modules needs 4 minutes.

>
> < # time dtrace -l > /dev/null
> <
> < real0m0.632s
> < user0m0.075s
> < sys 0m0.553s
> <
>
>  And the second time, the dtrace modules are already loaded so less time.
>

This makes sense to me.

> < Any clue?
> <
> < Thanks,
> < -Aubrey
>
> Regards,
> --
> Sean.
> .
>

Thanks,
-Aubrey


[dtrace-discuss] dtrace wired issue

2008-01-14 Thread Aubrey Li
The first time I run a dtrace command after the system boots,
it takes a very long time to respond.
But the second time is OK, as follows:

# time dtrace -l > /dev/null

real4m8.011s
user0m0.116s
sys 0m2.420s
# time dtrace -l > /dev/null

real0m0.632s
user0m0.075s
sys 0m0.553s

Any clue?

Thanks,
-Aubrey


[dtrace-discuss] How to implement "%a" in C program?

2007-11-05 Thread Aubrey Li
Hi all,

Here is a fragment of a DTrace script that I include in my C program:
static const char *g_prog =
    "sdt:::callout-start"
    "{"
    "        @callouts[((callout_t *)arg0)->c_func] = count();"
    "}";

I know
printa("%40a %@d\n", @callouts)
will print "module`symbol-name" and the count.

But how do I achieve this in a C program using libdtrace?
Are there any existing examples?

I've read intrstat.c. I know I can get the name of
 "stringof(`devnamesp[this->devi->devi_major].dn_name)"
with something like:
{
    dtrace_aggdata_t *data;
    dtrace_recdesc_t *nrec;
    name = data->dtada_data + nrec->dtrd_offset;
}
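
For the archive, the intrstat.c pattern looks roughly like this sketch
(not a complete program; g_dtp, the record indices, and the error
handling are simplified assumptions -- check the libdtrace headers on
your release for the exact signatures):

```c
/* Sketch: walk an aggregation and print keys the way "%a" would. */
static int
callout_walk(const dtrace_aggdata_t *data, void *arg)
{
	dtrace_aggdesc_t *agg = data->dtada_desc;
	/* assume record 1 is our key (c_func); the last record is the count */
	dtrace_recdesc_t *keyrec = &agg->dtagd_rec[1];
	uint64_t addr = *(uint64_t *)(data->dtada_data + keyrec->dtrd_offset);
	GElf_Sym sym;
	dtrace_syminfo_t dts;

	/*
	 * dtrace_lookup_by_addr() resolves addr into the module`symbol
	 * form that "%a" prints.
	 */
	if (dtrace_lookup_by_addr(g_dtp, addr, &sym, &dts) == 0)
		(void) printf("%s`%s\n", dts.dts_object, dts.dts_name);

	return (DTRACE_AGGWALK_NEXT);
}

/* after dtrace_aggregate_snap(g_dtp): */
(void) dtrace_aggregate_walk(g_dtp, callout_walk, NULL);
```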

Are there any docs about libdtrace programming?

Many thanks,
-Aubrey