Re: [PATCH 1/2] perf data: Show error message when ctf setup failed

2015-04-09 Thread Alexandre Montplaisir

On 2015-04-09 05:46 AM, Jiri Olsa wrote:

On Thu, Apr 09, 2015 at 04:19:20PM +0800, He Kuang wrote:

Hi, jirka
On 2015/4/9 1:45, Jiri Olsa wrote:

On Wed, Apr 08, 2015 at 12:49:19PM +0800, He Kuang wrote:

Show an error message when errors occur during CTF conversion setup.

Before this patch:
   $ ./perf data convert --to-ctf=ctf
   $ echo $?
   255

After this patch:
   $ ./perf data convert --to-ctf=ctf
   Error during CTF convert setup.

so I have like 5 more patches from the original CTF set
which I'm holding until everything works with tracecompass:
   http://marc.info/?l=linux-kernel&m=142736197610573&w=2

Is it working for you? How do you test the resulting CTF data?

anyway, the patch looks ok, just a small nit below

I tested using the babeltrace binary and it works.

After receiving your reply, I tested on the latest tracecompass. A
folder named 'ctf' is shown instead of the expected file
'ctf-data'; this folder only contains the raw metadata and
perf-stream files, but they are not analysed.

CC-ing Alexandre from tracecompass devel ^^^


Hi,

I just came back from vacation, sorry for not replying earlier!

I managed to compile perf with CTF support, but only against Babeltrace's 
commit 5584a48. It fails to compile against current master because of 
private headers getting exposed. I reported that to the BT maintainers.


Then it seems there's another bug in Trace Compass's current master: 
trace validation cannot fail, so any file gets imported with no 
errors. We will look into this.
But the root of the problem was that the converted CTF trace was not 
being recognized as valid. This is because some events define "stream_id 
= 0;", while others don't specify a stream_id at all. It seems quite 
random; see the full metadata here: http://pastebin.com/pACgV5JU


Is there a reason why some events specify a stream_id and some don't?

We could patch Trace Compass to accept it, since Babeltrace does. But 
the spec is not very clear on this; I'll check with the CTF guys 
whether it should be considered valid or not.


Cheers,
Alexandre



jirka




Re: [PATCHv3 0/8] perf tools: Add perf data CTF conversion

2015-01-16 Thread Alexandre Montplaisir

On 2015-01-15 03:57 PM, Alexandre Montplaisir wrote:

Hi,

I'm a developer for the Trace Compass tool (see links [3], [4] in 
Jiri's email). I can confirm that the generated CTF can be read 
correctly by our tool, which enables many views and analyses (Control 
Flow, CPU usage view, etc.) that were previously only available for 
LTTng traces.


Some of our users also use perf extensively, and are looking forward 
to this feature! Is there any ETA as to when this will be merged 
upstream?


Thanks,
Alexandre


That was a bit too fast; it seems there are issues with very recent 
versions of Babeltrace. You can follow the discussion at

http://lists.linuxfoundation.org/pipermail/diamon-discuss/2015-January/07.html

Cheers,
Alex




On 01/15/2015 11:15 AM, Jiri Olsa wrote:

hi,
this is a follow-up on the original RFC patchset:
   http://marc.info/?t=14073273564&r=1&w=2

Basically we are adding a 'perf data convert' command to
allow conversion of a perf data file into CTF [1] data.

v3 changes:
   - rebased to latest acme's perf/core

v2 changes:
   - addressed comments from Namhyung
   - rebased to latest acme's perf/core

Changes from RFC:
   - able to generate CTF data that can be displayed in the
 tracecompass GUI [3]; please check the screenshots here [4]
   - storing CTF data streams per cpu
   - several cleanups

Examples:
- Capture default perf data (cycles event):
   $ perf record ls
   [ perf record: Woken up 1 times to write data ]
   [ perf record: Captured and wrote 0.012 MB perf.data (~546 samples) ]

- To display converted CTF data run [2]:
   $ babeltrace ./ctf-data/
   [03:19:13.962125533] (+?.?) cycles: { }, { ip = 0x8105443A, tid = 20714, pid = 20714, period = 1 }
   [03:19:13.962130001] (+0.04468) cycles: { }, { ip = 0x8105443A, tid = 20714, pid = 20714, period = 1 }
   [03:19:13.962131936] (+0.01935) cycles: { }, { ip = 0x8105443A, tid = 20714, pid = 20714, period = 8 }
   [03:19:13.962133732] (+0.01796) cycles: { }, { ip = 0x8105443A, tid = 20714, pid = 20714, period = 114 }
   [03:19:13.962135557] (+0.01825) cycles: { }, { ip = 0x8105443A, tid = 20714, pid = 20714, period = 2087 }
   [03:19:13.962137627] (+0.02070) cycles: { }, { ip = 0x81361938, tid = 20714, pid = 20714, period = 37582 }
   [03:19:13.962161091] (+0.23464) cycles: { }, { ip = 0x8124218F, tid = 20714, pid = 20714, period = 600246 }
   [03:19:13.962517569] (+0.000356478) cycles: { }, { ip = 0x811A75DB, tid = 20714, pid = 20714, period = 1325731 }
   [03:19:13.969518008] (+0.007000439) cycles: { }, { ip = 0x34080917B2, tid = 20714, pid = 20714, period = 1144298 }


- To get some nice output in the tracecompass GUI [3], please capture sched:*
   and syscall tracepoints like:
   # perf record -e 'sched:*,raw_syscalls:*' -a
   ^C[ perf record: Woken up 0 times to write data ]
   [ perf record: Captured and wrote 412.347 MB perf.data (~18015721 samples) ]


- To convert the perf data file run:
   # perf data convert --to-ctf=./ctf
   [ perf data convert: Converted 'perf.data' into CTF data './ctf' ]
   [ perf data convert: Converted and wrote 408.421 MB (3964792 samples) ]


- To display the converted CTF data run [2]:
   # babeltrace ./ctf/
   [23:32:20.165354855] (+0.00507) sched:sched_wakeup: { cpu_id = 0 }, { perf_ip = 0x810BCA72, perf_tid = 0, perf_pid = 0, perf_id = 462554, perf_period = 1, common_type = 265, ...
   [23:32:20.165359078] (+0.01181) sched:sched_switch: { cpu_id = 0 }, { perf_ip = 0x8172A110, perf_tid = 0, perf_pid = 0, perf_id = 462562, perf_period = 1, common_type = 263, ...
   [23:32:20.165364686] (+0.00328) sched:sched_stat_runtime: { cpu_id = 0 }, { perf_ip = 0x810C8AE5, perf_tid = 5326, perf_pid = 5326, perf_id = 462610, perf_period = 11380, ...
   [23:32:20.165366067] (+0.01205) sched:sched_switch: { cpu_id = 0 }, { perf_ip = 0x8172A110, perf_tid = 5326, perf_pid = 5326, perf_id = 462562, perf_period = 1, common_type ...
   [23:32:20.165723312] (+0.01479) sched:sched_stat_runtime: { cpu_id = 2 }, { perf_ip = 0x810C8AE5, perf_tid = 11821, perf_pid = 11821, perf_id = 462612, perf_period = 1000265, ...
   [23:32:20.065282391] (+?.?) raw_syscalls:sys_enter: { cpu_id = 1 }, { perf_ip = 0x810230AF, perf_tid = 26155, perf_pid = 26155, perf_id = 462635, perf_period = 1, ...
   [23:32:20.065286422] (+0.04031) raw_syscalls:sys_exit: { cpu_id = 1 }, { perf_ip = 0x810231D8, perf_tid = 26155, perf_pid = 26155, perf_id = 462639, perf_period = 1, ...


- Or run tracecompass and open the CTF data ;-)

Changes are also reachable here:
   git://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git
   perf/core_ctf_convert

thanks,
jirka

[1] Common Trace Format - http://www.efficios.com/ctf
[2] babeltrace - http://www.efficios.com/babeltrace
[3] Trace compass - http://projects.eclipse.org/projects/tools.tracecompass
[4] screenshots - http://people.redhat.com/~jolsa/tracecompass-perf/

Re: [PATCHv3 0/8] perf tools: Add perf data CTF conversion

2015-01-15 Thread Alexandre Montplaisir

Hi,

I'm a developer for the Trace Compass tool (see links [3], [4] in Jiri's 
email). I can confirm that the generated CTF can be read correctly by 
our tool, which enables many views and analyses (Control Flow, CPU usage 
view, etc.) that were previously only available for LTTng traces.


Some of our users also use perf extensively, and are looking forward to 
this feature! Is there any ETA as to when this will be merged upstream?


Thanks,
Alexandre


On 01/15/2015 11:15 AM, Jiri Olsa wrote:

hi,
this is a follow-up on the original RFC patchset:
   http://marc.info/?t=14073273564&r=1&w=2

Basically we are adding a 'perf data convert' command to
allow conversion of a perf data file into CTF [1] data.

v3 changes:
   - rebased to latest acme's perf/core

v2 changes:
   - addressed comments from Namhyung
   - rebased to latest acme's perf/core

Changes from RFC:
   - able to generate CTF data that can be displayed in the
 tracecompass GUI [3]; please check the screenshots here [4]
   - storing CTF data streams per cpu
   - several cleanups

Examples:
- Capture default perf data (cycles event):
   $ perf record ls
   [ perf record: Woken up 1 times to write data ]
   [ perf record: Captured and wrote 0.012 MB perf.data (~546 samples) ]

- To display converted CTF data run [2]:
   $ babeltrace ./ctf-data/
   [03:19:13.962125533] (+?.?) cycles: { }, { ip = 0x8105443A, tid = 20714, pid = 20714, period = 1 }
   [03:19:13.962130001] (+0.04468) cycles: { }, { ip = 0x8105443A, tid = 20714, pid = 20714, period = 1 }
   [03:19:13.962131936] (+0.01935) cycles: { }, { ip = 0x8105443A, tid = 20714, pid = 20714, period = 8 }
   [03:19:13.962133732] (+0.01796) cycles: { }, { ip = 0x8105443A, tid = 20714, pid = 20714, period = 114 }
   [03:19:13.962135557] (+0.01825) cycles: { }, { ip = 0x8105443A, tid = 20714, pid = 20714, period = 2087 }
   [03:19:13.962137627] (+0.02070) cycles: { }, { ip = 0x81361938, tid = 20714, pid = 20714, period = 37582 }
   [03:19:13.962161091] (+0.23464) cycles: { }, { ip = 0x8124218F, tid = 20714, pid = 20714, period = 600246 }
   [03:19:13.962517569] (+0.000356478) cycles: { }, { ip = 0x811A75DB, tid = 20714, pid = 20714, period = 1325731 }
   [03:19:13.969518008] (+0.007000439) cycles: { }, { ip = 0x34080917B2, tid = 20714, pid = 20714, period = 1144298 }

- To get some nice output in the tracecompass GUI [3], please capture sched:*
   and syscall tracepoints like:
   # perf record -e 'sched:*,raw_syscalls:*' -a
   ^C[ perf record: Woken up 0 times to write data ]
   [ perf record: Captured and wrote 412.347 MB perf.data (~18015721 samples) ]

- To convert the perf data file run:
   # perf data convert --to-ctf=./ctf
   [ perf data convert: Converted 'perf.data' into CTF data './ctf' ]
   [ perf data convert: Converted and wrote 408.421 MB (3964792 samples) ]

- To display the converted CTF data run [2]:
   # babeltrace ./ctf/
   [23:32:20.165354855] (+0.00507) sched:sched_wakeup: { cpu_id = 0 }, { perf_ip = 0x810BCA72, perf_tid = 0, perf_pid = 0, perf_id = 462554, perf_period = 1, common_type = 265, ...
   [23:32:20.165359078] (+0.01181) sched:sched_switch: { cpu_id = 0 }, { perf_ip = 0x8172A110, perf_tid = 0, perf_pid = 0, perf_id = 462562, perf_period = 1, common_type = 263, ...
   [23:32:20.165364686] (+0.00328) sched:sched_stat_runtime: { cpu_id = 0 }, { perf_ip = 0x810C8AE5, perf_tid = 5326, perf_pid = 5326, perf_id = 462610, perf_period = 11380, ...
   [23:32:20.165366067] (+0.01205) sched:sched_switch: { cpu_id = 0 }, { perf_ip = 0x8172A110, perf_tid = 5326, perf_pid = 5326, perf_id = 462562, perf_period = 1, common_type ...
   [23:32:20.165723312] (+0.01479) sched:sched_stat_runtime: { cpu_id = 2 }, { perf_ip = 0x810C8AE5, perf_tid = 11821, perf_pid = 11821, perf_id = 462612, perf_period = 1000265, ...
   [23:32:20.065282391] (+?.?) raw_syscalls:sys_enter: { cpu_id = 1 }, { perf_ip = 0x810230AF, perf_tid = 26155, perf_pid = 26155, perf_id = 462635, perf_period = 1, ...
   [23:32:20.065286422] (+0.04031) raw_syscalls:sys_exit: { cpu_id = 1 }, { perf_ip = 0x810231D8, perf_tid = 26155, perf_pid = 26155, perf_id = 462639, perf_period = 1, ...

- Or run tracecompass and open the CTF data ;-)

Changes are also reachable here:
   git://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git
   perf/core_ctf_convert

thanks,
jirka

[1] Common Trace Format - http://www.efficios.com/ctf
[2] babeltrace - http://www.efficios.com/babeltrace
[3] Trace compass - http://projects.eclipse.org/projects/tools.tracecompass
[4] screenshots - http://people.redhat.com/~jolsa/tracecompass-perf/


Cc: Arnaldo Carvalho de Melo 
Cc: David Ahern 
Cc: Dominique Toupin 
Cc: Frederic Weisbecker 
Cc: Jeremie Galarneau 
Cc: Jiri Olsa 
Cc: Mathieu Desnoyers 
Cc: Namhyung Kim 
Cc: Paul Mackerras 
Cc: Pete

Re: Support for Perf CTF traces now in master (was Re: FW: [RFC 0/5] perf tools: Add perf data CTF conversion)

2014-11-27 Thread Alexandre Montplaisir


On 11/27/2014 10:43 AM, Jiri Olsa wrote:

On Wed, Nov 12, 2014 at 05:14:45PM -0500, Alexandre Montplaisir wrote:


Testing welcome!

hi,
any other way besides compiling eclipse to test this? For mere mortals
with the Fedora eclipse rpm... ;-)


If you already have an Eclipse installation, you can use the update site:
http://download.eclipse.org/tracecompass/master/nightly/
This would install the plugins into that Eclipse install.

We will also put nightly builds of the stand-alone version on the 
download page Very Soon(TM). We just have a few issues left to figure out.


Cheers,
Alexandre




Re: Support for Perf CTF traces now in master (was Re: FW: [RFC 0/5] perf tools: Add perf data CTF conversion)

2014-11-26 Thread Alexandre Montplaisir

On 2014-11-26 12:37 PM, Sebastian Andrzej Siewior wrote:

* Alexandre Montplaisir | 2014-11-12 17:14:45 [-0500]:


Just a quick note, this branch is now merged to master. So anyone who
pulls the code from the master branch at
git://git.eclipse.org/gitroot/tracecompass/org.eclipse.tracecompass.git
should be able to load perf-CTF traces in the viewer. The trace type
is now called "Common Trace Format -> Linux Kernel Trace" and should
support both LTTng kernel and perf traces in CTF format (although
auto-detection should work in most cases).

Thank you for all the work.
Let me try to reply to the emails at once here:
- I added the following to the environment metadata (compared to the
   last version):
domain => kernel
tracer_name => perf

   There is no tracer_major + minor. Instead I added
   version => perf's version
   On my system I have:
  release = "3.16.0-4-amd64";
  version = "3.18.rc3.g91405a";

   That is because I run Debian's v3.16 and recorded the trace with perf
   from kernel 3.18.rc3.
   There is no version field for the perf binary doing the perf.data => ctf
   conversion.
   Any objections, or should the version member be renamed?

- Mathieu decided that it makes no sense to add the kernel version to
   each event we trace. Instead each event should have its own version
   with a major/minor member. Once the event is changed, the "ABI"
   version should be adjusted. I second this since it makes sense.
   Therefore no changes were made to the converter.

- Alexandre (you) noticed that there are no syscall names in the events
   recorded via "sys_enter and sys_exit". This is true, but there is a
   hint. For instance, there is an event:

[03:37:07.579969498] (+?.?) raw_syscalls:sys_enter: { cpu_id = 2 }, { perf_ip = 0x81020EBC, perf_tid = 30004, perf_pid = 30004, perf_id = 382, perf_period = 1, common_type = 76, common_flags = 0, common_preempt_count = 0, common_pid = 30004, id = 16, args = [ [0] = 0xE, [1] = 0x2400, [2] = 0x0, [3] = 0x0, [4] = 0xA20F00, [5] = 0xA1FDA0 ] }


Oh ok, so this "id" field is really the system call ID then. Good to 
know, thanks!




   At the end you notice id = 16 and args. args are the arguments passed
   to the syscall and id is the syscall number. Together with machine =
   x86_64 you know for which architecture you need to look up the number 16.
   The numbers come from unistd.h (and may differ between architectures,
   even between i386 & x86_64). strace for instance has the following table [0].

{ 3,TD, sys_read,   "read"  },  /* 0 */
…
{ 3,TD, sys_ioctl,  "ioctl" },  /* 16 */
…

So 16 is ioctl. strace has those tables for a bunch of architectures,
so it might be helpful to pull them in. I know no other way to ease
things here.


Indeed. Well this information could be part of the trace metadata too, 
but I guess that wouldn't be very practical.
We'll just need to add a way for each supported tracer to advertise how 
it gets its system call names.




[0] https://github.com/bnoordhuis/strace/blob/master/linux/x86_64/syscallent.h
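
To make this concrete, here is a minimal sketch of what such a lookup
could look like on the viewer side (Java, since Trace Compass is Java).
The class name and entries are illustrative only; a real implementation
would generate the full per-architecture tables from strace's files:

   import java.util.HashMap;
   import java.util.Map;

   /* Hypothetical sketch: resolve raw_syscalls ids to names on x86_64.
    * Entries follow strace's linux/x86_64/syscallent.h ordering. */
   public final class X8664SyscallNames {

       private static final Map<Integer, String> NAMES = new HashMap<>();
       static {
           NAMES.put(0, "read");
           NAMES.put(1, "write");
           NAMES.put(2, "open");
           NAMES.put(16, "ioctl");  // matches id = 16 in the event above
           // ... remaining entries, generated per architecture
       }

       /** lookup(16) returns "ioctl"; unknown ids get a generic name. */
       public static String lookup(int id) {
           return NAMES.getOrDefault(id, "syscall_" + id);
       }
   }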

The same thing is true for softirq_entry events, for instance. This event
will give you only vec=9, and you need to look up that 9 => RCU. That
one is easy, however:

  const char * const softirq_to_name[NR_SOFTIRQS] = {
  "HI", "TIMER", "NET_TX", "NET_RX", "BLOCK", "BLOCK_IOPOLL",
  "TASKLET", "SCHED", "HRTIMER", "RCU"
  };

this has been taken from kernel/softirq.c.
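
A minimal Java sketch of the same lookup on the viewer side (the class
is made up for illustration; only the table itself comes from the kernel
source quoted above):

   /* Hypothetical helper: maps softirq_entry's vec field to a name,
    * mirroring softirq_to_name[] from kernel/softirq.c. */
   public final class SoftirqNames {

       private static final String[] NAMES = {
           "HI", "TIMER", "NET_TX", "NET_RX", "BLOCK", "BLOCK_IOPOLL",
           "TASKLET", "SCHED", "HRTIMER", "RCU"
       };

       /** lookup(9) returns "RCU". */
       public static String lookup(int vec) {
           return (vec >= 0 && vec < NAMES.length) ? NAMES[vec] : "UNKNOWN";
       }
   }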


Oh, that's right, we never got around to getting/showing the actual 
names of the soft IRQs. Thanks for reminding us. ;)



This was based on the most recent file format I was aware of; we will
update it accordingly if required.

Testing welcome!

I pushed the perf changes I mentioned to

   git://git.breakpoint.cc/bigeasy/linux.git ctf_convert_7

It is now based on Arnaldo's perf/core. If everything goes well on
the compass side and nobody complains here in any way, the next step
would be to present the patches on the mailing list, with compass as a
user.

I took your tree and added the patch below. I uploaded the following
files to https://breakpoint.cc/perf-ctf/:
- ctf-out6.tar.xz from perf6.data.xz
   shows nothing


Hmm, indeed it throws exceptions in the console when trying to validate 
the trace. It seems to read a packet size in perf_stream_0 as a negative 
value. Babeltrace handles it fine though, so we're probably reading it 
wrong. We'll investigate.


Cheers,
Alexandre



- ctf-out7.tar.xz from perf7.data.xz
   shows something

The only obvious difference is the size of the CTF data. out6 is
almost 300MiB and contains 3,259,929 events, while out7 has only
15MiB and contains 152,900 events.


Cheers,
Alexandre

✂

 From 7ffa619d918f2010046b391ae29063f

Support for Perf CTF traces now in master (was Re: FW: [RFC 0/5] perf tools: Add perf data CTF conversion)

2014-11-12 Thread Alexandre Montplaisir


On 11/09/2014 08:31 PM, Alexandre Montplaisir wrote:

On 2014-11-05 10:25 PM, Alexandre Montplaisir wrote:





But if you could for example tell me the perf equivalents of all the
strings in that file, I could hack together such a wrapper. With that,
in theory, perf traces should behave exactly the same as LTTng traces
in the viewer!

Oooh, that would be awesome. So I installed maven but didn't get much
further. Let me gather this for you.


Awesome, thanks!

I am travelling this week, so I'm a bit busy, but I will try to 
prototype a "wrapper" for the kernel analysis, and to add support for 
the perf events, whenever I have a chance. I'll keep you posted.


Ok, some good news!

I managed to get the CTF traces from perf working in Trace Compass! 
See attached screenshots. This is showing the "ctf-out2" trace from 
your previous email. The other trace seems to have fewer events 
enabled, so it would only show some WAIT_FOR_CPU states in the view.


If anybody wishes to try it, you can grab the whole branch ending at 
https://git.eclipse.org/r/#/c/36200/ . Or run:
$ git fetch git://git.eclipse.org/gitroot/tracecompass/org.eclipse.tracecompass refs/changes/00/36200/3 && git checkout FETCH_HEAD


Just a quick note, this branch is now merged to master. So anyone who 
pulls the code from the master branch at

git://git.eclipse.org/gitroot/tracecompass/org.eclipse.tracecompass.git
should be able to load perf-CTF traces in the viewer. The trace type is 
now called "Common Trace Format -> Linux Kernel Trace" and should 
support both LTTng kernel and perf traces in CTF format (although 
auto-detection should work in most cases).


This was based on the most recent file format I was aware of; we will 
update it accordingly if required.


Testing welcome!

Cheers,
Alexandre



It reuses much of the code from the LTTng analysis, which is why it 
was relatively quick to do. For now, it looks for the domain in the 
CTF environment to be "kernel-perf". But this will be easy to update, 
if needed, once the final format is decided.


Maybe I missed it, but I couldn't find the system call names in the 
trace. Using the sys_enter and sys_exit events, the viewer is able to 
determine the kernel-mode states (in blue), but we cannot show the 
exact system call names like we do with LTTng.
There is also something weird with the arrows in the Control Flow View 
(disabled in the screenshot); I don't know if it's due to a 
particularity of the trace or to a bug in the view. We'll investigate.


Feedback is very welcome.

Cheers,
Alexandre




Re: [RFC 0/5] perf tools: Add perf data CTF conversion

2014-11-05 Thread Alexandre Montplaisir

Hi Mathieu,


On 11/05/2014 06:21 PM, Mathieu Desnoyers wrote:

[...]

The cpu_id field change will be addressed soon on our side.
Now, the remaining things:
The "domain = kernel" thingy (or another identifier if desired) is
something we could add.

Unless the event data is exactly the same, it would be easier to use
a different name. Like "kernel-perf" for instance?

Some kind of namespace / identifier is probably not wrong. The lttng
tracer added a tracer version, probably in case the format changes
between versions for some reason. Perf comes with the kernel, so for this
the kernel version should be sufficient.

Yes, using the kernel version for Perf makes sense. I reached a similar
conclusion for LTTng: we should add tracepoint semantic versioning
somewhere in the CTF metadata, because the semantics of an event can
change based on the LTTng version, and based on which kernel version
LTTng is tracing.

A very good example is the semantics of the sched_wakeup event. It has
changed due to scheduler code modifications, and the tracepoint is now
called from an IPI context, which changes its semantics (it is not
called from the same PID). Unfortunately, there is little we can do
besides checking the kernel version to detect the semantic change from
the trace viewer side, because neither the event nor the field names
have changed.

The trace viewer could therefore use the following information
to identify the semantics of a trace:

- Tracer name (e.g. lttng or perf),
- Domain (e.g. kernel or userspace),
- Tracepoint versioning (e.g. kernel version for Perf).


Sounds good. So perf-CTF traces could still use the "kernel" domain, but 
the CTF environment metadata would also mention the tracer, which for now 
could be either lttng or perf. Currently we only look at the domain to 
infer the trace type, but we could also look at the tracer, and the tracer 
version, to determine which event and field naming to use for the analysis.
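
To sketch what that dispatch could look like on the viewer side (purely
illustrative Java; the class, enum and behaviour are made up, only the
environment field names follow the ones discussed in this thread):

   import java.util.Map;

   /* Hypothetical sketch: choose the event/field naming scheme from the
    * CTF environment fields (domain, tracer_name, version). */
   public final class KernelTraceIdentifier {

       public enum Layout { LTTNG_KERNEL, PERF_KERNEL, UNKNOWN }

       public static Layout detect(Map<String, String> env) {
           if (!"kernel".equals(env.get("domain"))) {
               return Layout.UNKNOWN;               // not a kernel trace
           }
           String tracer = env.get("tracer_name");  // e.g. "perf" or "lttng"
           if ("perf".equals(tracer)) {
               // perf naming: "sched:sched_switch", "perf_tid", ...
               // env.get("version") could further select per-kernel semantics
               return Layout.PERF_KERNEL;
           }
           return Layout.LTTNG_KERNEL;              // "sched_switch", "tid", ...
       }
   }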


I can also see how, in general, versioning the "instrumentation" of an 
instrumented program could be useful. For example, LTTng changed the 
name of its syscall events in 2.6. The event still represents the same 
thing from an analysis point of view; only the name changed.



Because CTF supports both kernel and userspace tracing, we also want
to solve this semantic detection problem both for the kernel and
userspace. Therefore, we should consider how the userspace
tracepoints could save version information in the user-space metadata
too.

Since we have traces shared across applications (per user-ID buffers)
in lttng-ust, the semantic info, and therefore the versioning, should
be done on a per-provider (or per-event) basis, rather than trace-wide,
because a single trace could contain events from various applications,
each with their own set of providers, therefore each with their
versioning info.


Hmm, where would this per-tracepoint version come from? From the version 
of the application? From a new "instrumentation version" defined 
somewhere? Or would the maintainers of the application have to manually 
version every single tracepoint in their program?


Per-tracepoint versioning, at first glance, seems a bit heavy. I'd have 
to understand more about it to make an informed opinion though ;) But 
this seems to be a problem for userspace traces only, right? Because 
with kernel traces

1) the tracers put the kernel version in the environment metadata and
2) you can't have more than one kernel provider in the same CTF trace 
(can you?)


But from a trace viewer's analysis point of view, I think it would make 
sense. If events in the trace supply a version (in addition to its 
name/type), then the analysis may decide to handle different versions of 
an event in different ways.





So if we apply this description scheme to the kernel tracing case,
this would mean that each event in the CTF metadata would have
version information. For Perf, this could very well be the kernel
version that we simply repeat for each event metadata entry. For
LTTng-modules, we would have our own versioning that is independent
of the kernel version, since the semantic of the events we expose
can change for a given kernel version as lttng-modules evolves.

In summary, for perf it would be really easy: just repeat the
kernel version in a new attribute attached to each event in the
metadata. For LTTng we would have the flexibility to have our own
version numbers in there. This would also cover the case of
userspace tracing, allowing each application to advertise their
tracepoint provider semantic changes through versioning.
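
On the viewer side, such a major/minor scheme could reduce to a simple
compatibility check; an illustrative Java sketch (the class and the rule
are made up, only the major/minor idea comes from the discussion above):

   /* Hypothetical per-event version carrying the proposed major/minor pair. */
   public final class EventVersion {

       public final int major;  // bumped on semantic/ABI changes
       public final int minor;  // bumped on additive changes

       public EventVersion(int major, int minor) {
           this.major = major;
           this.minor = minor;
       }

       /** One possible rule: same major is compatible, minors are additive. */
       public boolean isCompatibleWith(EventVersion known) {
           return major == known.major && minor >= known.minor;
       }
   }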


From the user's point of view, both would still be Linux Kernel
Traces, but we could use the domain internally to determine which
event/field layout to use.

Mathieu, any thoughts on how CTF domains should be namespaced?

(see above)


Now that I identified the differences between the CTF from lttng and
perf, any suggestions / ideas how this could be solved?

I suppose it would be better/cleaner if the event and field names would
remain the same, or at least be similar, in the perf.data and perf-CTF
formats.

Re: FW: [RFC 0/5] perf tools: Add perf data CTF conversion

2014-11-05 Thread Alexandre Montplaisir


On 11/05/2014 01:50 PM, Sebastian Andrzej Siewior wrote:

[...]

If the trace events from both LTTng and perf represent the same thing
(and I assume they should, since they come from the same tracepoints,
right?), then we could just add a wrapper on the viewer side to
decide which event/field names to use, depending on the trace type.

Right now, we only define LTTng event and field names:
http://git.eclipse.org/c/tracecompass/org.eclipse.tracecompass.git/tree/org.eclipse.tracecompass.lttng2.kernel.core/src/org/eclipse/tracecompass/internal/lttng2/kernel/core/LttngStrings.java

Okay. So I found this file for linuxtools; now let me try tracecompass.
The basic renaming should do the job. Then I have to figure out how to
compile this thingy…


"mvn clean install". It is the Maven equivalent of "./configure && make" ;)

Or if you want to build a standalone application (RCP):
mvn clean install -Pbuild-rcp -Dmaven.test.skip=true

see the README file in the git tree for details.



There is this one thing where you go for "tid" while perf says "pid". I
guess I can figure that out once I have the rename done.
We don't have lttng_statedump_process_state; this looks lttng-specific. I
would have to look if there is a replacement event in perf.


Yes, the state dump is something specific to LTTng. It allows us to know 
about processes that exist on the system, even if they are sleeping for 
the whole duration of the trace (and thus, would not show up in the 
trace at all).


But even if these events are not present, we can still know about active 
processes when they do sched_switches, for example.



I have no idea what we could do about the "unknown" events, say if someone
enables skb tracing. But this is probably something for once we are
done with the basic integration.


But if you could for example tell me the perf equivalents of all the
strings in that file, I could hack together such a wrapper. With that,
in theory, perf traces should behave exactly the same as LTTng traces
in the viewer!

Oooh, that would be awesome. So I installed maven but didn't get much
further. Let me gather this for you.


Awesome, thanks!

I am travelling this week, so I'm a bit busy, but I will try to 
prototype a "wrapper" for the kernel analysis, and to add support for 
the perf events, whenever I have a chance. I'll keep you posted.




So first the renaming:
diff --git a/LttngStrings.java b/LttngStrings.java
--- a/LttngStrings.java
+++ b/LttngStrings.java
@@ -27,17 +27,17 @@ public interface LttngStrings {
  
  /* Event names */

  static final String EXIT_SYSCALL = "exit_syscall";
-static final String IRQ_HANDLER_ENTRY = "irq_handler_entry";
-static final String IRQ_HANDLER_EXIT = "irq_handler_exit";
-static final String SOFTIRQ_ENTRY = "softirq_entry";
-static final String SOFTIRQ_EXIT = "softirq_exit";
-static final String SOFTIRQ_RAISE = "softirq_raise";
-static final String SCHED_SWITCH = "sched_switch";
-static final String SCHED_WAKEUP = "sched_wakeup";
-static final String SCHED_WAKEUP_NEW = "sched_wakeup_new";
-static final String SCHED_PROCESS_FORK = "sched_process_fork";
-static final String SCHED_PROCESS_EXIT = "sched_process_exit";
-static final String SCHED_PROCESS_FREE = "sched_process_free";
+static final String IRQ_HANDLER_ENTRY = "irq:irq_handler_entry";
+static final String IRQ_HANDLER_EXIT = "irq:irq_handler_exit";
+static final String SOFTIRQ_ENTRY = "irq:softirq_entry";
+static final String SOFTIRQ_EXIT = "irq:softirq_exit";
+static final String SOFTIRQ_RAISE = "irq:softirq_raise";
+static final String SCHED_SWITCH = "sched:sched_switch";
+static final String SCHED_WAKEUP = "sched:sched_wakeup";
+static final String SCHED_WAKEUP_NEW = "sched:sched_wakeup_new";
+static final String SCHED_PROCESS_FORK = "sched:sched_process_fork";
+static final String SCHED_PROCESS_EXIT = "sched:sched_process_exit";
+static final String SCHED_PROCESS_FREE = "sched:sched_process_free";
  static final String STATEDUMP_PROCESS_STATE = "lttng_statedump_process_state";
  
  /* System call names */


I have no idea how exit_syscall differs from irq_handler_exit, and I
think we have to skip STATEDUMP_PROCESS_STATE for now.

For the syscalls:

- static final String SYSCALL_PREFIX = "sys_";
   It is basically:
 syscalls:sys_enter_
 syscalls:sys_exit_
   depending on what you are looking for.

- static final String COMPAT_SYSCALL_PREFIX = "compat_sys_";
   I haven't found this. Could it be that we don't record the compat_sys
   at all?


IIRC, compat_sys is for instance for 32-bit system calls on a 64-bit 
kernel. Perhaps the "compat" system calls are recorded as standard 
system call events with perf? We could test it once we get the base 
things working.



- static final String SYS_CLONE = "sys_clone";
   here we have
 syscalls:sys_enter_clone
 syscalls:sys_exit_clone
   I guess the enter is what you are looking for.
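
Putting the mappings from this thread together, a hypothetical
"PerfStrings" counterpart to LttngStrings.java could look like the
sketch below (illustrative only, not actual Trace Compass code):

   /* Hypothetical collection of the perf event names discussed here. */
   public interface PerfStrings {

       /* Event names: same tracepoints, prefixed with the perf subsystem */
       static final String IRQ_HANDLER_ENTRY = "irq:irq_handler_entry";
       static final String IRQ_HANDLER_EXIT = "irq:irq_handler_exit";
       static final String SOFTIRQ_ENTRY = "irq:softirq_entry";
       static final String SOFTIRQ_EXIT = "irq:softirq_exit";
       static final String SOFTIRQ_RAISE = "irq:softirq_raise";
       static final String SCHED_SWITCH = "sched:sched_switch";
       static final String SCHED_WAKEUP = "sched:sched_wakeup";

       /* Syscalls: per-syscall enter/exit events instead of a generic pair */
       static final String SYSCALL_ENTER_PREFIX = "syscalls:sys_enter_";
       static final String SYSCALL_EXIT_PREFIX = "syscalls:sys_exit_";
       static final String SYS_CLONE = SYSCALL_ENTER_PREFIX + "clone";

       /* No perf equivalents found (yet) for lttng_statedump_process_state
        * and the compat_sys_* events. */
   }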

Re: FW: [RFC 0/5] perf tools: Add perf data CTF conversion

2014-11-03 Thread Alexandre Montplaisir

Hi Sebastian,

On 11/03/2014 06:58 PM, Sebastian Andrzej Siewior wrote:

[...]
I did on the linux side:

perf record \
    -e sched:sched_switch \
    -a

This gave me a perf.data trace. Now with the new extension I converted it
into a CTF data stream (perf data convert -i perf.data --to-ctf ctf) and
imported it into eclipse as a new trace. By setting 'domain = "kernel"'
I managed to get it accepted as a kernel trace.

Additionally I had to:
- rename sched:sched_switch -> sched_switch (I dropped the other events)
- rename "perf_cpu" to "cpu_id" and move it within "packet context"
   (had a patch for that but we wanted to merge this later)
- I added "prev_tid" with the content of "prev_pid" and I added
   "next_tid" with the content of "next_pid". Now exclipse does not
   "freeze" after loading the first entry
- I prefixed every entry with _ (so "prev_tid" becomes "_prev_tid") and
   now it seems to be recognized by the tracing plugin.

Phew. Now I have something in the "cpu usage" and "control flow" windows.


This is really great! Initially, I had believed that we would have 
needed to add a separate parser plugin, and to consider "perf traces" as 
a completely different beast from LTTng traces. However if you can get 
this close to the way LTTng presents its data, then we can probably 
re-use most of the existing code. In which case we could rename the 
"LTTng Kernel Trace" type in the UI to simply "Linux Kernel Trace". And 
that would cover both LTTng kernel traces and CTF-perf traces.



The cpu_id field change will be addressed soon on our side.
Now, the remaining things:
The "domain = kernel" thingy (or another identifier if desired) is
something we could add.


Unless the event data is exactly the same, it would be easier to use a 
different name. Like "kernel-perf" for instance?
From the user's point of view, both would still be Linux Kernel Traces, 
but we could use the domain internally to determine which event/field 
layout to use.


Mathieu, any thoughts on how CTF domains should be namespaced?


What do we do on the naming convention?

The converter basically takes the event names the way they come from
perf. That is why we have a "system" prefix.
For the member fields, we take all the specific ones as perf serves
them. The only exception is for the generic fields which get a "perf_"
prefix in order not to clash with the specific ones.
And there is this _ prefix in front of every data member.

Now that I identified the differences between the CTF from lttng and
perf, any suggestions / ideas how this could be solved?


I suppose it would be better/cleaner if the event and field names would 
remain the same, or at least be similar, in the perf.data and perf-CTF 
formats.


If the trace events from both LTTng and perf represent the same thing 
(and I assume they should, since they come from the same tracepoints, 
right?), then we could just add a wrapper on the viewer side to decide 
which event/field names to use, depending on the trace type.


Right now, we only define LTTng event and field names:
http://git.eclipse.org/c/tracecompass/org.eclipse.tracecompass.git/tree/org.eclipse.tracecompass.lttng2.kernel.core/src/org/eclipse/tracecompass/internal/lttng2/kernel/core/LttngStrings.java

But if you could for example tell me the perf equivalents of all the 
strings in that file, I could hack together such a wrapper. With that, in 
theory, perf traces should behave exactly the same as LTTng traces in 
the viewer!



Cheers,
Alexandre

PS: FYI, we have recently moved the trace viewer code to its own 
project, now called "Trace Compass". We will still support the Linux 
Tools plugins for the time being, but all new development is done in 
Trace Compass. If you want to check it out: http://eclipse.org/tracecompass




Re: FW: [RFC 0/5] perf tools: Add perf data CTF conversion

2014-08-21 Thread Alexandre Montplaisir


On 08/21/2014 12:58 PM, Jiri Olsa wrote:

hum, I've got nothing from babeltrace:

[jolsa@krava ~]$ su
Password:
[root@krava jolsa]# lttng create perf
Spawning a session daemon
Session perf created.
Traces will be written in /root/lttng-traces/perf-20140821-184956
[root@krava jolsa]# lttng add-context -k -t prio -t perf:cpu:cycles


Oh, I see the problem: you don't have any events enabled! In LTTng terms, 
a "context" is a piece of information that gets attached to every event. 
But if you don't have any events at all, you're not gonna see much 
context information. ;)


Try adding a
# lttng enable-event -a -k
before starting the session. This should give you some output in the 
viewers.



Cheers,
Alexandre



kernel context prio added to all channels
kernel context perf:cpu:cycles added to all channels
[root@krava jolsa]# lttng start
Tracing started for session perf
[root@krava jolsa]# lttng stop
Waiting for data availability.
Tracing stopped for session perf
[root@krava jolsa]# lttng destroy
Session perf destroyed
[root@krava jolsa]# babeltrace ~/lttng-traces/perf-20140821-184956/
[root@krava jolsa]# babeltrace ~/lttng-traces/perf-20140821-184956/kernel/
[root@krava jolsa]#

and an empty view in eclipse

thanks,
jirka




Re: FW: [RFC 0/5] perf tools: Add perf data CTF conversion

2014-08-20 Thread Alexandre Montplaisir

On 08/20/2014 05:28 AM, Jiri Olsa wrote:


ok, easy enough ;-) so I'm guessing this governs the expected
CTF layout for event/stream headers/contexts, right?


Correct: if the domain is "kernel", we then assume that the rest of the 
trace contains the expected elements of a kernel trace.


Of course, one could craft a CTF trace to advertise itself as "kernel" 
or "ust" and not actually have the layout of that trace type, in which 
case it would fail parsing later on.



Also judging from the trace display, you have hardcoded specific
displays/actions for specific events? That's all connected/specific
under trace type?


Yes the trace type is the main "provider" of functionality. I could go 
into more details, but we use Eclipse extension points to define which 
columns to put in the event table, which views are available, etc. for 
each supported trace type.



Once we have some views or analyses specific to perf CTF traces, we could
definitely add a separate trace type for those too.

I guess tracepoints and breakpoints should display something like
the standard kernel trace. As for HW events, it's usual to display
profile information as perf report does:
   https://perf.wiki.kernel.org/index.php/Tutorial#Sampling_with_perf_record


Interesting, I haven't tried the perf CTF output yet, but I could see it 
using the Statistics view (which by default just prints the % of events, 
per event type) to print the results of the different "perf reports", 
calculated from the CTF events. Eventually with pie charts!



I tried to record/display the lttng event perf:cpu:cycles, but got nothing
displayed in eclipse. Looks like this data provides only a summary count
of the event for the workload?


Just to be sure I understand: you recorded an LTTng kernel trace, in 
which you enabled the "perf:cpu:cycles" context? Did this trace display 
anything in Babeltrace?
It should display the same in the Eclipse viewer; the value of the 
context will be visible in the "Contents" column in the table (and 
in the Properties view), although for now we don't make any use of it.


From what I understand, yes, the value of the different perf:* contexts 
represents the value of the perf counter at the moment the tracepoint 
was hit. So you cannot get perf data "in between" trace events when 
using this method.



Cheers,
Alexandre