[dtrace-discuss] Does DTrace have "-xmangled" to support C++? Apple has done that

2008-12-08 Thread Alex Peng
Hi,

To use DTrace on some C++ code, I found that Apple's implementation has an 
extra option, "-xmangled".

Its usage is described here:
http://blog.mozilla.com/dmandelin/2008/02/14/dtrace-c-mysteries-solved/

and in Apple's man page:
http://developer.apple.com/documentation/Darwin/Reference/ManPages/man1/dtrace.1.html


Is it possible to have that "mangled" option in DTrace? You know, sometimes 
there is no c++filt/dem/gc++filt installed, so it's hard to follow 
http://developers.sun.com/solaris/articles/dtrace_cc.html.
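
Without a demangler, the only workaround I know is to glob the mangled 
symbol directly with the pid provider. A rough, untested sketch (the class 
and method names here are made up):

/* Match the mangled form of MyClass::myMethod() without demangling.
   "_ZN7MyClass8myMethodEv" is a hypothetical Itanium-ABI mangled name. */
pid$target::*7MyClass8myMethod*:entry
{
	printf("entered %s\n", probefunc);
}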

Thanks,
-Alex


Re: [dtrace-discuss] How to dig deeper

2008-12-08 Thread Hans-Peter
The buffer cache was already huge, so I decided not to increase it.
There is a KEEP pool of 5 GB that is hardly used; if needed, I will
sacrifice this cache and add it to the DEFAULT cache.

So far it looks promising: the average log file sync wait time has
dropped from about 70 ms to 7 ms. But we will see how things develop.

Thanks so far to everybody who helped.

Regards, Hans-Peter


Re: [dtrace-discuss] Is the nfs dtrace script right (from nfsv3 provider wiki)?

2008-12-08 Thread Marcelo Leal
Hello,

> Are you referring to nfsv3rwsnoop.d?
> 
> The TIME(us) value from that script is not a latency
> measurement,
> it's just a time stamp.
> 
> If you're referring to a different script, let us
> know specifically
> which script.

 Sorry, when I wrote "latency" I assumed you would know I was talking about 
the "nfsv3rwtime.d" script. I mean, that is the script on the wiki page for 
seeing the latencies.
 The:
 "NFSv3 read/write by host (total us):"
 and
 "NFSv3 read/write top 10 files (total us):"

 sections are showing those numbers...

 Thanks a lot for your answer!

 Leal.
> 
> /jim
> 
> 
> Marcelo Leal wrote:
> > Hello there,
> >  Ten minutes of trace (latency), using the nfs dtrace script from the
> > nfsv3 provider wiki page, I got total numbers (us) like:
> >  131175486635
> >   ???
> >
> >  thanks!


Re: [dtrace-discuss] Is the nfs dtrace script right (from nfsv3 provider wiki)?

2008-12-08 Thread Jim Mauro
Got it.

OK, so you traced for 10 minutes, and DTrace reported a total value of
131175486635, which we'll round up to 132 billion microseconds, or (if
I'm doing the math right) 132,000 seconds, 2,200 minutes, roughly 36
hours. That certainly seems to be an extraordinarily large value for
10 minutes of data collection, but...

Before we venture further into this, I need to know what size machine
(how many CPUs, etc.) this script was run on, and I need to see the
other output: the read and write latency quantize graphs. I'm interested
in seeing the counts as well as the latency values, and the file latency
summary.
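
For reference, here is roughly what that script does to produce those
totals (a from-memory sketch of the nfsv3 provider pattern the wiki
script uses, not the script verbatim):

nfsv3:::op-read-start, nfsv3:::op-write-start
{
	/* remember when the request (keyed by its XID) started */
	start[args[1]->noi_xid] = timestamp;
}

nfsv3:::op-read-done, nfsv3:::op-write-done
/start[args[1]->noi_xid] != 0/
{
	this->us = (timestamp - start[args[1]->noi_xid]) / 1000;
	/* these sums are the "total us" numbers in question */
	@host[args[0]->ci_remote] = sum(this->us);
	@file[args[1]->noi_curpath] = sum(this->us);
	start[args[1]->noi_xid] = 0;
}

Note that each sum adds up the latencies of all operations, so with many
concurrent clients the total can legitimately exceed the 10-minute
wall-clock window; even so, 131175486635 stands out.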




[dtrace-discuss] Is there a way to clear Dtrace arrays?

2008-12-08 Thread Henk Vandenbergh
I am using trunc(@agg,0) to clear an aggregation every 'n' seconds. Is there a 
similar function to clear an associative array? I know I can deallocate memory 
for an individual element by setting it to zero, but I am looking for a way to 
clear ALL elements in an array.
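
For reference, this is the pattern, trimmed down (the probes here are
made up, just to illustrate):

syscall::write:entry
{
	@bytes[execname] = sum(arg2);	/* aggregation */
	seen[pid] = 1;			/* associative array element */
}

tick-10s
{
	printa(@bytes);
	trunc(@bytes, 0);	/* clears every key in the aggregation */
	/* nothing comparable clears all of seen[]; only
	   seen[pid] = 0 frees a single element */
}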


Re: [dtrace-discuss] Is the nfs dtrace script right (from nfsv3 provider wiki)?

2008-12-08 Thread Marcelo Leal
Hello Jim,
 - cut here ---
Qui Dez  4 19:08:39 BRST 2008
Qui Dez  4 19:18:02 BRST 2008

 - cut here ---
NFSv3 read/write distributions (us):

  read
           value  ------------- Distribution ------------- count
               2 |                                         0
               4 |@@@@                                     22108
               8 |@@@@@@@@@@@@@@@                          80611
              16 |@@@@@@@@@@@@                             66331
              32 |@@                                       11497
              64 |@                                        4939
             128 |                                         979
             256 |                                         727
             512 |                                         788
            1024 |                                         1663
            2048 |                                         496
            4096 |@                                        3389
            8192 |@@@                                      14518
           16384 |@                                        4856
           32768 |                                         742
           65536 |                                         119
          131072 |                                         38
          262144 |                                         9
          524288 |                                         25
         1048576 |                                         7
         2097152 |                                         0

  write
           value  ------------- Distribution ------------- count
              64 |                                         0
             128 |                                         55
             256 |@@                                       8750
             512 |@@@@@@@@@@@@@@                           52926
            1024 |@@@@@@@@@                                34370
            2048 |@@@@@@@                                  24610
            4096 |@@@                                      12136
            8192 |@@@                                      10819
           16384 |@                                        4181
           32768 |                                         1198
           65536 |                                         811
          131072 |                                         793
          262144 |                                         278
          524288 |                                         26
         1048576 |                                         2
         2097152 |                                         0
         4194304 |                                         0
         8388608 |                                         0
        16777216 |                                         0
        33554432 |                                         0
        67108864 |                                         0
       134217728 |                                         0
       268435456 |                                         0
       536870912 |                                         0
      1073741824 |                                         0
      2147483648 |                                         0
      4294967296 |                                         0
      8589934592 |                                         0
     17179869184 |                                         0
     34359738368 |                                         0
     68719476736 |                                         1
    137438953472 |                                         0
NFSv3 read/write by host (total us):

  x.16.0.x   1987595
  x.16.0.x   2588201
  x.16.0.x  20370903
  x.16.0.x  21400116
  x.16.0.x  25208119
  x.16.0.x  28874221
  x.16.0.x  32523821
  x.16.0.x  41103342
  x.16.0.x  43934153
  x.16.0.x  51819379
  x.16.0.x  57477455
  x.16.0.x  57679165
  x.16.0.x  59575938
  x.16.0.x  95072158
  x.16.0.x 305615207
  x.16.0.x 349252742
  x.16.0.x  131175486635

NFSv3 read/write top 10 files (total us):

  /teste/file1   29942610
  /teste/file2   32180289
  /teste

Re: [dtrace-discuss] Is the nfs dtrace script right (from nfsv3 provider wiki)?

2008-12-08 Thread Marcelo Leal
36 hours... ;-))

 Leal.


Re: [dtrace-discuss] Is there a way to clear Dtrace arrays?

2008-12-08 Thread Adam Leventhal

There's not. Can you explain what you're doing a bit so we can understand
the use case for such a function?

Adam

--
Adam Leventhal, Fishworks                     http://blogs.sun.com/ahl



Re: [dtrace-discuss] Is there a way to clear Dtrace arrays?

2008-12-08 Thread Henk Vandenbergh
I am gathering numerous file-level statistics that I want to report (printf) 
every 'n' seconds, and then start all over again with a new set of statistics. 
I currently accumulate each 'field' in several aggregations, one aggregation 
per field. Every n seconds I dump out the aggregations and then clear them 
using trunc(@xxx,0), roughly as sketched below.
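
Cut down to its shape, with made-up probes and fields:

syscall::read:entry
{
	@calls[fds[arg0].fi_pathname] = count();
	@bytes[fds[arg0].fi_pathname] = sum(arg2);	/* bytes requested */
}

tick-10s
{
	/* printa() can print several identically-keyed aggregations
	   on one line */
	printa("%-50s calls %@8d bytes %@10d\n", @calls, @bytes);
	trunc(@calls, 0);
	trunc(@bytes, 0);
}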
Instead of having one aggregation per field, I would think it would be more 
efficient to create a struct{} with all the fields and then store that struct 
in one associative array. However, I can't clear an array.

Henk.


Re: [dtrace-discuss] Is there a way to clear Dtrace arrays?

2008-12-08 Thread Henk Vandenbergh
Ignore my request. Even if there were a way to clear an array, there is no 
function that lets me print out that array; only an aggregation does that.


Re: [dtrace-discuss] lint and statically defined tracing for user applications

2008-12-08 Thread David Bustos
Quoth Adam Leventhal on Fri, Dec 05, 2008 at 05:32:12PM -0800:
> Nice trick. Sounds like a good RFE for dtrace -G would be to generate some
> lint-friendly output.

I filed

6782307 lint complains about user SDT probes

I came up with a better workaround, though: I added

#ifdef lint
/* LINTED */
#define DTRACE_PROBE1(provider, name, arg0)
#endif

to the top of the files defining the probe points.  I suspect this
technique could be used in sdt.h itself.


David


Re: [dtrace-discuss] Is the nfs dtrace script right (from nfsv3 provider wiki)?

2008-12-08 Thread Jim Mauro
Nasty, nasty outlier.

You have a very nasty outlier in the quantized write latency data:

     34359738368 |                                         0
     68719476736 |                                         1
    137438953472 |                                         0

See that sucker with the count of 1? That means there was 1 occurrence
where the quantized value fell in the range of 68 billion - 137 billion
microseconds, and your nasty sum number is 131 billion.

Now, how you ended up with a single write latency value that large is a
bit of a head scratcher. It seems unlikely that you had 1 disk write that
took 36 hours, and given you recorded data for 10 minutes, it seems
highly unlikely that a 36-hour disk write could be captured in a 10-minute
data collection window... :^)

So I need to noodle this a bit more, and see if the DTrace experts have an
opinion. If you can't repeat this (can you?), I would just toss it out as
noise. If you can repeat it, we need to look closer...
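
If it does repeat, something along these lines might catch the culprit in
the act (untested sketch, assuming the nfsv3 provider and an arbitrary
60-second threshold):

nfsv3:::op-write-start
{
	ts[args[1]->noi_xid] = timestamp;
}

/* flag any single write that took longer than 60 seconds */
nfsv3:::op-write-done
/ts[args[1]->noi_xid] && timestamp - ts[args[1]->noi_xid] > 60000000000/
{
	printf("%Y: write to %s from %s took %d seconds\n", walltimestamp,
	    args[1]->noi_curpath, args[0]->ci_remote,
	    (timestamp - ts[args[1]->noi_xid]) / 1000000000);
}

nfsv3:::op-write-done
{
	ts[args[1]->noi_xid] = 0;
}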

Thanks,
/jim



Re: [dtrace-discuss] Does DTrace have "-xmangled" to support C++? Apple has done that

2008-12-08 Thread James McIlree



I'd like to see if there were some reasonable way to "auto-detect" the
mangling and not have to have a flag at all. I remember giving up on that
the last time, when we added -xmangled, but I can't remember why now...

It might get a bit confusing, as __ZSomeMangledName and UnmangledName will
actually create two separate probes. Unless we force all input mappings to
one state or the other, in which case you might get the odd effect of

pid$target::__ZSomeMangledName:entry {}

resulting in the UnmangledName probe firing.
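
Spelled out, the two-probe situation would look like this (same
placeholder names as above):

pid$target::__ZSomeMangledName:entry,
pid$target::UnmangledName:entry
{
	/* two distinct probes, unless input mappings are forced
	   to one form or the other */
}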

James M
