[lttng-dev] Beginner question: how to inspect scheduling of multi-threaded user application?

2016-08-24 Thread David Aldrich
Hi

I am new to tracing in Linux and to lttng. I have a multi-threaded user 
application and I want to see:


1)  When the threads are scheduled to run

2)  Which cores the threads are running on.

I have installed lttng on Ubuntu 14.04 LTS.  I am expecting to visualise the 
trace using TraceCompass.

I have read the following doc section:

http://lttng.org/docs/#doc-tracing-your-own-user-application

In order to collect my trace, must I define custom tracepoint definitions (in
a tracepoint provider header file), and insert tracepoints into my user
application, or is there a simpler way of achieving my goal?

Best regards

David

___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] [Qemu-devel] [PATCH 0/6] hypertrace: Lightweight guest-to-QEMU trace channel

2016-08-24 Thread Lluís Vilanova
Stefan Hajnoczi writes:

> On Sun, Aug 21, 2016 at 02:32:34PM +0200, Lluís Vilanova wrote:
>> Unfortunately, I've been unable to make dtrace recognise QEMU's events (I'm
>> only able to see the host kernel events). If someone with more experience
>> with it can help me use dtrace with QEMU's events, I'll also add the
>> supporting library to let dtrace do the callout to QEMU's monitor interface
>> and control the events, and add a properly useful example of that in the
>> hypertrace docs (which was my original intention).

> Which "dtrace" and host OS are you using?

> QEMU builds with static user-space probes.  You need to tell DTrace or
> SystemTap to enable those probes in order to record trace data.

I'm using Debian on a 4.6.0-1-amd64 kernel with systemtap 3.0.6.

I just gave it another try, and it works if I use probes like:

  process("").mark("")

although they don't seem to appear in "stap -l" or anything like that (I cannot
find a "qemu" provider). But I'm still unable to print the event values. This:

  probe 
process("./install/vanilla/bin/qemu-system-i386").mark("guest_mem_before_exec")
  {
  printf("%p %lx %d\n", $arg1, $arg2, $arg3)
  }

always prints "0x0 0x0 0", which is clearly wrong (other backends in the same
build print the correct values).
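For what it's worth, a hedged sketch of how one might inspect QEMU's USDT marks with SystemTap (the binary path is copied from the probe above; the `-c` form is an assumption worth trying, since SDT "is-enabled" semaphores that are never armed are one possible cause of probe arguments reading as zero):

```shell
# List the USDT marks compiled into the QEMU binary (path is an example).
stap -L 'process("./install/vanilla/bin/qemu-system-i386").mark("*")'

# Attach to one mark and print its arguments; launching the target under
# stap with -c makes sure any SDT semaphores are armed for the process.
stap -e '
probe process("./install/vanilla/bin/qemu-system-i386").mark("guest_mem_before_exec") {
  printf("%p %lx %d\n", $arg1, $arg2, $arg3)
}' -c './install/vanilla/bin/qemu-system-i386 -nographic'
```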

Also, I'm still not sure how to interact with QEMU's monitor interface from
within the probe code (probes execute in kernel mode, including "guru mode"
code).

If anybody can shed some light on any of this, I'd appreciate it.


Cheers,
  Lluis


Re: [lttng-dev] [PATCH latency-tracker] Fix: sizeof() bug, state tracking merge issue, PID 0

2016-08-24 Thread Julien Desfossez
Merged, big thanks!

Julien

On 23-Aug-2016 07:37:01 PM, Mathieu Desnoyers wrote:
> - 3 sizeof() issues (using pointer size rather than object size),
> - state tracking merge issue on switch in: the get following the
>   insertion may not get the same node when there are duplicates,
> - also features a refactoring of keys: add a "p" parent field for the
>   type.
> - PID 0 needs to be compared with its cpu ID too.
> 
> Signed-off-by: Mathieu Desnoyers 
> ---
>  examples/rt.c | 164 +++---
>  latency_tracker.h |   7 +++
>  tracker.c |  20 ++-
>  3 files changed, 120 insertions(+), 71 deletions(-)
> 
> diff --git a/examples/rt.c b/examples/rt.c
> index f9703ca..7b3b2b3 100644
> --- a/examples/rt.c
> +++ b/examples/rt.c
> @@ -139,53 +139,59 @@ enum event_out_types {
>   OUT_NO_CB = 4,
>  };
>  
> +struct generic_key_t {
> + enum rt_key_type type;
> +} __attribute__((__packed__));
> +
>  struct do_irq_key_t {
> + struct generic_key_t p;
>   unsigned int cpu;
> - enum rt_key_type type;
>  } __attribute__((__packed__));
>  
>  struct local_timer_key_t {
> + struct generic_key_t p;
>   unsigned int cpu;
> - enum rt_key_type type;
>  } __attribute__((__packed__));
>  
>  struct hrtimer_key_t {
> + struct generic_key_t p;
>   unsigned int cpu;
> - enum rt_key_type type;
>  } __attribute__((__packed__));
>  
>  struct hardirq_key_t {
> + struct generic_key_t p;
>   unsigned int cpu;
> - enum rt_key_type type;
>  } __attribute__((__packed__));
>  
>  struct raise_softirq_key_t {
> + struct generic_key_t p;
>   unsigned int cpu;
>   unsigned int vector;
> - enum rt_key_type type;
>  } __attribute__((__packed__));
>  
>  struct softirq_key_t {
> + struct generic_key_t p;
>   unsigned int cpu;
>   int pid;
> - enum rt_key_type type;
>  } __attribute__((__packed__));
>  
>  struct waking_key_t {
> + struct generic_key_t p;
> + int cpu;
>   int pid;
> - enum rt_key_type type;
>  } __attribute__((__packed__));
>  
>  struct switch_key_t {
> + struct generic_key_t p;
> + int cpu;
>   int pid;
> - enum rt_key_type type;
>  } __attribute__((__packed__));
>  
>  #define MAX_COOKIE_SIZE 32
>  struct work_begin_key_t {
> + struct generic_key_t p;
>   char cookie[MAX_COOKIE_SIZE];
>   int cookie_size;
> - enum rt_key_type type;
>  } __attribute__((__packed__));
>  
>  /* Keep up-to-date with a list of all key structs. */
> @@ -445,6 +451,7 @@ static
>  int entry_do_irq(struct kretprobe_instance *p, struct pt_regs *regs)
>  {
>   enum latency_tracker_event_in_ret ret;
> + struct latency_tracker_event *s;
>   struct do_irq_key_t key;
>   u64 now;
>  
> @@ -452,26 +459,20 @@ int entry_do_irq(struct kretprobe_instance *p, struct pt_regs *regs)
>   return 0;
>  
>   now = trace_clock_monotonic_wrapper();
> + key.p.type = KEY_DO_IRQ;
>   key.cpu = smp_processor_id();
> - key.type = KEY_DO_IRQ;
> - ret = _latency_tracker_event_in(tracker, &key, sizeof(key), 1, now,
> - NULL);
> + ret = _latency_tracker_event_in_get(tracker, &key, sizeof(key), 1, now,
> + NULL, &s);
>   if (ret != LATENCY_TRACKER_OK) {
>   failed_event_in++;
>   return 0;
>   }
> + WARN_ON_ONCE(!s);
>  
>   if (config.text_breakdown) {
> - struct latency_tracker_event *s;
> -
> - s = latency_tracker_get_event(tracker, &key, sizeof(key));
> - if (!s) {
> - BUG_ON(1);
> - return 0;
> - }
>   append_delta_ts(s, KEY_DO_IRQ, "do_IRQ", now, 0, NULL, 0);
> - latency_tracker_put_event(s);
>   }
> + latency_tracker_put_event(s);
>  
>  #ifdef DEBUG
>   printk("%llu do_IRQ (cpu %u)\n", trace_clock_monotonic_wrapper(),
> @@ -488,8 +489,8 @@ int exit_do_irq(struct kretprobe_instance *p, struct pt_regs *regs)
>  
>   if (!config.irq_tracing)
>   return 0;
> + key.p.type = KEY_DO_IRQ;
>   key.cpu = smp_processor_id();
> - key.type = KEY_DO_IRQ;
>   latency_tracker_event_out(tracker, &key,
>   sizeof(key), OUT_IRQHANDLER_NO_CB, 0);
>  
> @@ -618,15 +619,13 @@ struct latency_tracker_event *event_transition(void *key_in, int key_in_len,
>   }
>   orig_ts = latency_tracker_event_get_start_ts(event_in);
>  
> - ret = _latency_tracker_event_in(tracker, key_out,
> - key_out_len, unique, orig_ts, NULL);
> + ret = _latency_tracker_event_in_get(tracker, key_out,
> + key_out_len, unique, orig_ts, NULL, &event_out);
>   if (ret != LATENCY_TRACKER_OK) {
> - goto end_del;
>   failed_event_in++;
> - }
> - event_out = latency_tracker_get_event(tracker, key_out, key_out_len);
> - if (!event_out)
>

Re: [lttng-dev] [PATCH latency-tracker] Fix: use local ops for freelist per-cpu counter

2016-08-24 Thread Julien Desfossez
Merged, thanks!

Julien

On 23-Aug-2016 11:44:45 PM, Mathieu Desnoyers wrote:
> Signed-off-by: Mathieu Desnoyers 
> ---
>  tracker_private.h |  3 ++-
>  wrapper/freelist-ll.h | 21 +++--
>  2 files changed, 17 insertions(+), 7 deletions(-)
> 
> diff --git a/tracker_private.h b/tracker_private.h
> index 03386b8..fc27f80 100644
> --- a/tracker_private.h
> +++ b/tracker_private.h
> @@ -7,6 +7,7 @@
>  
>  #include 
>  #include 
> +#include 
>  
>  //#include "wrapper/ht.h"
>  //#include "rculfhash-internal.h"
> @@ -18,7 +19,7 @@ struct numa_pool {
>  };
>  
>  struct per_cpu_ll {
> - int current_count;
> + local_t current_count;
>   struct numa_pool *pool;
>   struct llist_head llist;
>  };
> diff --git a/wrapper/freelist-ll.h b/wrapper/freelist-ll.h
> index 5e4d18e..44db722 100644
> --- a/wrapper/freelist-ll.h
> +++ b/wrapper/freelist-ll.h
> @@ -242,8 +242,8 @@ int free_per_cpu_llist(struct latency_tracker *tracker)
>   if (!list)
>   continue;
>   cnt = free_event_list(list);
> - printk("freed %d on cpu %d (%d)\n", cnt, cpu,
> - ll->current_count);
> + printk("freed %d on cpu %d (%ld)\n", cnt, cpu,
> + local_read(&ll->current_count));
>   total_cnt += cnt;
>   }
>  
> @@ -284,10 +284,14 @@ struct llist_node *per_cpu_get(struct latency_tracker *tracker)
>   struct per_cpu_ll *ll;
>  
>   ll = lttng_this_cpu_ptr(tracker->per_cpu_ll);
> + /*
> +  * Decrement the current count after we successfully remove an
> +  * element from the list.
> +  */
>   node = llist_del_first(&ll->llist);
>   if (node) {
> - ll->current_count--;
> - WARN_ON_ONCE(ll->current_count < 0);
> + local_dec(&ll->current_count);
> + WARN_ON_ONCE(local_read(&ll->current_count) < 0);
>   return node;
>   }
>   return llist_del_first(&ll->pool->llist);
> @@ -336,12 +340,17 @@ void __wrapper_freelist_put_event(struct latency_tracker *tracker,
>   if (e->pool != ll->pool) {
>  //   printk("DEBUG cross-pool put_event\n");
>   llist_add(&e->llist, &ll->pool->llist);
> - } else if (ll->current_count < FREELIST_PERCPU_CACHE) {
> + } else if (local_read(&ll->current_count) < FREELIST_PERCPU_CACHE) {
>   /*
>* Fill our local cache if needed.
> +  * We need to increment current_count before we add the
> +  * element to the list, because when we successfully
> +  * remove an element from the list, we expect that the
> +  * counter is never negative. An interrupt can observe
> +  * the intermediate state.
>*/
> + local_inc(&ll->current_count);
>   llist_add(&e->llist, &ll->llist);
> - ll->current_count++;
>   } else {
>   /*
>* Add to our NUMA pool.
> -- 
> 2.1.4
> 


Re: [lttng-dev] Beginner question: how to inspect scheduling of multi-threaded user application?

2016-08-24 Thread Francis Deslauriers
Hi David,
If you specifically want to trace the scheduling of your application's
threads, you don't need custom tracepoints.
Enabling the sched_switch kernel event will give you both the CPU ID and the
thread ID. Look at the cpu_id and next_tid fields.

You can enable the sched_switch event with:

  lttng enable-event -k sched_switch
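A minimal end-to-end session built around that command might look like the following sketch (session name and application path are illustrative; kernel tracing requires root or membership in the tracing group):

```shell
# Create a session and record scheduler switches system-wide.
lttng create sched-demo
lttng enable-event -k sched_switch
lttng start

# Run the multi-threaded application while the session records.
./my_threaded_app

lttng stop
lttng destroy
# The resulting trace (under ~/lttng-traces/) can be opened in
# TraceCompass; cpu_id and next_tid show where and when threads run.
```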

Cheers,
Francis

2016-08-24 3:17 GMT-04:00 David Aldrich :

> In order to collect my trace, must I define custom tracepoint definitions
> (in a tracepoint provider header file), and insert tracepoints into my user
> application, or is there a simpler way of achieving my goal?


Re: [lttng-dev] Beginner question: how to inspect scheduling of multi-threaded user application?

2016-08-24 Thread Jonathan Rajotte
Hi,

On Aug 24, 2016 12:18 PM, "Francis Deslauriers"  wrote:
>
> Hi David,
> If you specifically want to trace the scheduling of your application's
> threads, you don't need custom tracepoints.
> Enabling the sched_switch kernel event will give you both the CPU ID and
> the thread ID. Look at the cpu_id and next_tid fields.
>
> You can enable the sched_switch event with:
>
>   lttng enable-event -k sched_switch

In TraceCompass you can inspect this data with the Control Flow view and the
Resources view, under the Kernel analysis node under the trace node in the
project explorer.

I'm not sure of the base requirements for those views; you can use the safe
enable-event:

lttng enable-event -k 'sched*'

You can also use "lttng track" to limit the gathering of events to a certain
PID.


Another way to reduce the scope would be to filter per procname:

lttng create
lttng add-context -k -t procname
lttng enable-event -k 'sched*' --filter '$ctx.procname == "PROCNAMEHERE"'

>
> Cheers,
> Francis
>


Re: [lttng-dev] Beginner question: how to inspect scheduling of multi-threaded user application?

2016-08-24 Thread Jonathan Rajotte
Sorry, I had a sending problem.

Here is the rest.

On Wed, Aug 24, 2016 at 12:35 PM, Jonathan Rajotte <
jonathan.r.jul...@gmail.com> wrote:

> Hi,
>
> [...]
>
> I'm not sure of the base requirements for those views; you can use the safe
> enable-event:
>

Replace "safe" with "easiest".


> lttng enable-event -k 'sched*'
>
>
> You can also use "lttng track" to limit the gathering of events to a
> certain PID.
>
>
> Another way to reduce the scope would be to filter per procname:
>
>
> lttng create
> lttng add-context -k -t procname
> lttng enable-event 'sched*' --filter '$ctx.procname == "PROCNAMEHERE"'
>
lttng start

PROCNAMEHERE can contain the '*' wildcard. See the man page for more
information.

Cheers
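Putting the commands from this thread together, a filtered session might look like the following sketch (the session name, process name, and PID are placeholders; the `lttng track --pid` form is available in recent LTTng versions, so check your version's man page):

```shell
lttng create sched-filtered
lttng add-context -k -t procname
lttng enable-event -k 'sched*' --filter '$ctx.procname == "my_app*"'
# Alternatively, restrict recording to a single PID:
# lttng track -k --pid 1234
lttng start
./my_app
lttng stop
lttng destroy
```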






-- 
Jonathan Rajotte Julien


[lttng-dev] ★ lttng-dev, Ravi left a message for you

2016-08-24 Thread Ravi