Re: [PATCH v2 2/2] perf: riscv: Add Document for Future Porting Guide

2018-04-04 Thread Alan Kao
Hi Alex,

On Tue, Apr 03, 2018 at 07:08:43PM -0700, Alex Solomatnikov wrote:
> Doc fixes:
> 
>

Thanks for these fixes.  I'll edit this patch and send a v3 once I am done
with the PMU patch.

I suppose a "Reviewed-by: Alex Solomatnikov" appending at the end of the
commit will be great, right?

Alan

> diff --git a/Documentation/riscv/pmu.txt b/Documentation/riscv/pmu.txt
> index a3e930e..ae90a5e 100644
> --- a/Documentation/riscv/pmu.txt
> +++ b/Documentation/riscv/pmu.txt
> @@ -20,7 +20,7 @@ the lack of the following general architectural
> performance monitoring features:
>  * Enabling/Disabling counters
>Counters are just free-running all the time in our case.
>  * Interrupt caused by counter overflow
> -  No such design in the spec.
> +  No such feature in the spec.
>  * Interrupt indicator
>It is not possible to have many interrupt ports for all counters, so an
>interrupt indicator is required for software to tell which counter has
> @@ -159,14 +159,14 @@ interrupt for perf, so the details are to be
> completed in the future.
> 
>  They seem symmetric but perf treats them quite differently.  For reading, 
> there
>  is a *read* interface in *struct pmu*, but it serves more than just reading.
> -According to the context, the *read* function not only read the content of 
> the
> -counter (event->count), but also update the left period to the next interrupt
> +According to the context, the *read* function not only reads the content of 
> the
> +counter (event->count), but also updates the left period for the next 
> interrupt
>  (event->hw.period_left).
> 
>  But the core of perf does not need direct write to counters.  Writing 
> counters
> -hides behind the abstraction of 1) *pmu->start*, literally start
> counting so one
> +is hidden behind the abstraction of 1) *pmu->start*, literally start
> counting so one
>  has to set the counter to a good value for the next interrupt; 2)
> inside the IRQ
> -it should set the counter with the same reason.
> +it should set the counter to the same reasonable value.
> 
>  Reading is not a problem in RISC-V but writing would need some effort, since
>  counters are not allowed to be written by S-mode.
> @@ -190,37 +190,37 @@ Three states (event->hw.state) are defined:
>  A normal flow of these state transitions are as follows:
> 
>  * A user launches a perf event, resulting in calling to *event_init*.
> -* When being context-switched in, *add* is called by the perf core, with flag
> -  PERF_EF_START, which mean that the event should be started after it is 
> added.
> -  In this stage, an general event is binded to a physical counter, if any.
> +* When being context-switched in, *add* is called by the perf core, with a 
> flag
> +  PERF_EF_START, which means that the event should be started after
> it is added.
> +  At this stage, a general event is bound to a physical counter, if any.
>The state changes to PERF_HES_STOPPED and PERF_HES_UPTODATE,
> because it is now
>stopped, and the (software) event count does not need updating.
>  ** *start* is then called, and the counter is enabled.
> -   With flag PERF_EF_RELOAD, it write the counter to an appropriate
> value (check
> -   previous section for detail).
> -   No writing is made if the flag does not contain PERF_EF_RELOAD.
> -   The state now is reset to none, because it is neither stopped nor update
> -   (the counting already starts)
> -* When being context-switched out, *del* is called.  It then checkout all the
> -  events in the PMU and call *stop* to update their counts.
> +   With flag PERF_EF_RELOAD, it writes an appropriate value to the
> counter (check
> +   the previous section for details).
> +   Nothing is written if the flag does not contain PERF_EF_RELOAD.
> +   The state now is reset to none, because it is neither stopped nor updated
> +   (the counting already started)
> +* When being context-switched out, *del* is called.  It then checks out all 
> the
> +  events in the PMU and calls *stop* to update their counts.
>  ** *stop* is called by *del*
> and the perf core with flag PERF_EF_UPDATE, and it often shares the same
> subroutine as *read* with the same logic.
> The state changes to PERF_HES_STOPPED and PERF_HES_UPTODATE, again.
> 
> -** Life cycles of these two pairs: *add* and *del* are called repeatedly as
> +** Life cycle of these two pairs: *add* and *del* are called repeatedly as
>tasks switch in-and-out; *start* and *stop* is also called when the perf 
> core
>needs a quick stop-and-start, for instance, when the interrupt
> period is being
>adjusted.
> 
> -Current implementation is sufficient for now and can be easily extend to
> +Current implementation is sufficient for now and can be easily
> extended with new
>  features in the future.
> 
>  A. Related Structures
>  -
> 
> -* struct pmu: include/linux/perf_events.h
> -* struct riscv_pmu: arch/riscv/include/asm/perf_events.h
> +* struct pmu: include/linux/perf_event.h
> +* 

Re: [PATCH V2 2/3] perf/x86/intel/bm.c: Add Intel Branch Monitoring support

2018-04-04 Thread Megha Dey
On Wed, 2017-12-20 at 13:23 -0800, Megha Dey wrote:
> On Wed, 2017-12-13 at 08:21 +0100, Peter Zijlstra wrote:
> > On Tue, Dec 12, 2017 at 03:08:00PM -0800, Megha Dey wrote:
> > > > 
> > > > There's work on the way to allow multiple HW PMUs. You'll either have to
> > > > wait for that or help in making that happen. What you do not do is
> > > > silently hack around it.
> > > 
> > > Could I get a pointer to the code implementing this?
> > > 
> > 
> > There isn't much code now; but it could be built on top of the stuff
> > here:
> > 
> >   git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git perf/core
> > 
> > It was mostly Mark, I think, who wanted this for the big.LITTLE stuff.
> 
> Could you give us an estimate on the amount of time it could take to
> implement this?
> 
> I am not sure what the current status is or if Mark has been working on
> it. 
> 

Hi Peter,

Is there anyone currently working on this?




[RFC PATCH] trace_uprobe: trace_uprobe_mmap() can be static

2018-04-04 Thread kbuild test robot

Fixes: d8d4d3603b92 ("trace_uprobe: Support SDT markers having reference count 
(semaphore)")
Signed-off-by: Fengguang Wu 
---
 trace_uprobe.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index 2502bd7..49a8673 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -998,7 +998,7 @@ static void sdt_increment_ref_ctr(struct trace_uprobe *tu)
 }
 
 /* Called with down_write(&vma->vm_mm->mmap_sem) */
-void trace_uprobe_mmap(struct vm_area_struct *vma)
+static void trace_uprobe_mmap(struct vm_area_struct *vma)
 {
struct trace_uprobe *tu;
unsigned long vaddr;


Re: [PATCH v2 6/9] trace_uprobe: Support SDT markers having reference count (semaphore)

2018-04-04 Thread kbuild test robot
Hi Ravi,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on tip/perf/core]
[also build test WARNING on v4.16 next-20180404]
[if your patch is applied to the wrong git tree, please drop us a note to help 
improve the system]

url:
https://github.com/0day-ci/linux/commits/Ravi-Bangoria/trace_uprobe-Support-SDT-markers-having-reference-count-semaphore/20180404-201900
reproduce:
# apt-get install sparse
make ARCH=x86_64 allmodconfig
make C=1 CF=-D__CHECK_ENDIAN__


sparse warnings: (new ones prefixed by >>)

   kernel/trace/trace.h:1298:38: sparse: incorrect type in argument 1 
(different address spaces) @@ expected struct event_filter *filter @@ got 
struct event_filter [noderef] <asn:4>*filter @@
   kernel/trace/trace.h:1298:38:    expected struct event_filter *filter
   kernel/trace/trace.h:1298:38:    got struct event_filter [noderef] 
<asn:4>*filter
>> kernel/trace/trace_uprobe.c:1001:6: sparse: symbol 'trace_uprobe_mmap' was 
>> not declared. Should it be static?

Please review and possibly fold the followup patch.

---
0-DAY kernel test infrastructureOpen Source Technology Center
https://lists.01.org/pipermail/kbuild-all   Intel Corporation


Re: [PATCH v2 7/9] trace_uprobe/sdt: Fix multiple update of same reference counter

2018-04-04 Thread kbuild test robot
Hi Ravi,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on tip/perf/core]
[also build test ERROR on v4.16 next-20180403]
[if your patch is applied to the wrong git tree, please drop us a note to help 
improve the system]

url:
https://github.com/0day-ci/linux/commits/Ravi-Bangoria/trace_uprobe-Support-SDT-markers-having-reference-count-semaphore/20180404-201900
config: i386-randconfig-a0-201813 (attached as .config)
compiler: gcc-4.9 (Debian 4.9.4-2) 4.9.4
reproduce:
# save the attached .config to linux build tree
make ARCH=i386 

All errors (new ones prefixed by >>):

   kernel//trace/trace_uprobe.c:947:21: error: variable 'sdt_mmu_notifier_ops' 
has initializer but incomplete type
static const struct mmu_notifier_ops sdt_mmu_notifier_ops = {
^
>> kernel//trace/trace_uprobe.c:948:2: error: unknown field 'release' specified 
>> in initializer
 .release = sdt_mm_release,
 ^
   kernel//trace/trace_uprobe.c:948:2: warning: excess elements in struct 
initializer
   kernel//trace/trace_uprobe.c:948:2: warning: (near initialization for 
'sdt_mmu_notifier_ops')
   kernel//trace/trace_uprobe.c: In function 'sdt_add_mm_list':
>> kernel//trace/trace_uprobe.c:962:22: error: dereferencing pointer to 
>> incomplete type
 mn = kzalloc(sizeof(*mn), GFP_KERNEL);
 ^
   kernel//trace/trace_uprobe.c:966:4: error: dereferencing pointer to 
incomplete type
 mn->ops = &sdt_mmu_notifier_ops;
   ^
>> kernel//trace/trace_uprobe.c:967:2: error: implicit declaration of function 
>> '__mmu_notifier_register' [-Werror=implicit-function-declaration]
 __mmu_notifier_register(mn, mm);
 ^
   cc1: some warnings being treated as errors

vim +/__mmu_notifier_register +967 kernel//trace/trace_uprobe.c

   946  
 > 947  static const struct mmu_notifier_ops sdt_mmu_notifier_ops = {
 > 948  .release = sdt_mm_release,
   949  };
   950  
   951  static void sdt_add_mm_list(struct trace_uprobe *tu, struct mm_struct 
*mm)
   952  {
   953  struct mmu_notifier *mn;
   954  struct sdt_mm_list *sml = kzalloc(sizeof(*sml), GFP_KERNEL);
   955  
   956  if (!sml)
   957  return;
   958  sml->mm = mm;
   959  list_add(&(sml->list), &(tu->sml.list));
   960  
   961  /* Register mmu_notifier for this mm. */
 > 962  mn = kzalloc(sizeof(*mn), GFP_KERNEL);
   963  if (!mn)
   964  return;
   965  
   966  mn->ops = &sdt_mmu_notifier_ops;
 > 967  __mmu_notifier_register(mn, mm);
   968  }
   969  

---
0-DAY kernel test infrastructureOpen Source Technology Center
https://lists.01.org/pipermail/kbuild-all   Intel Corporation




Re: [RFC PATCH 0/3] Documentation/features: Provide and apply "features-refresh.sh"

2018-04-04 Thread Andrea Parri
On Wed, Apr 04, 2018 at 06:56:32AM +0200, Ingo Molnar wrote:
> 
> * Andrea Parri  wrote:
> 
> > In Ingo's words [1]:
> > 
> >   "[...]  what should be done instead is to write a script that refreshes
> >all the arch-support.txt files in-place. [...]
> > 
> >It's OK for the script to have various quirks for weirdly implemented
> >features and exceptions: i.e. basically whenever it gets a feature wrong,
> >we can just tweak the script with quirks to make it all work out of box.
> > 
> >[...]  But in the end there should only be a single new script:
> > 
> >  Documentation/features/scripts/features-refresh.sh
> > 
> >... which operates on the arch-support.txt files and refreshes them in
> >place, and which, after all the refreshes have been committed, should
> >produce an empty 'git diff' result."
> > 
> >   "[...]  New features can then be added by basically just creating a
> >header-only arch-support.txt file, such as:
> > 
> >  triton:~/tip/Documentation/features> cat foo/bar/arch-support.txt
> >  #
> >  # Feature name:  shiny new fubar kernel feature
> >  # Kconfig:   ARCH_USE_FUBAR
> >  # description:   arch supports the fubar feature
> >  #
> > 
> >And running Documentation/features/scripts/features-refresh.sh would
> >auto-generate the arch support matrix. [...]
> >  
> >This way we soft- decouple the refreshing of the entries from the
> >introduction of the features, while still making it all easy to keep
> >sync and to extend."
> > 
> > This RFC presents a first attempt to implement such a feature/script, and
> > applies the script on top of Arnd's:
> > 
> >   git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic.git 
> > arch-removal
> > 
> > Patch 1/3 provides the "features-refresh.sh" script.  Patch 2/3 removes the
> > "BPF-JIT" feature file and it creates header-only files for "cBPF-JIT" and
> > "eBPF-JIT".  Patch 3/3 presents the results of running the script; this run
> > also printed to standard output the following warnings:
> > 
> >   WARNING: '__HAVE_ARCH_STRNCASECMP' is not a valid Kconfig
> >   WARNING: 'Optimized asm/rwsem.h' is not a valid Kconfig
> >   WARNING: '!ARCH_USES_GETTIMEOFFSET' is not a valid Kconfig
> >   WARNING: '__HAVE_ARCH_PTE_SPECIAL' is not a valid Kconfig
> > 
> > (I'm sending these patches with empty commit messagges, for early feedback:
> >  I'll fill in these messages in subsequent versions if this makes sense...)
> > 
> > Cheers,
> >   Andrea
> > 
> > Andrea Parri (3):
> >   Documentation/features: Add script that refreshes the arch support status 
> > files in place
> >   Documentation/features/core: Add arch support status files for 'cBPF-JIT' 
> > and 'eBPF-JIT'
> >   Documentation/features: Refresh and auto-generate the arch support status 
> > files in place
> 
> Ok, this series is really impressive at its RFC stage already!
> 
> Beyond fixing those warnings, I'd also suggest another change: please make 
> the 
> new BPF features patch the last one, so that the 'refresh' patch shows how 
> much 
> original bit-rot we gathered recently.
> 
> The 'new features' patch should then also include the result of running
> the script, i.e. a single patch adding the base fields and the generated
> parts as 
> well. That will be the usual development flow anyway - people won't do 
> two-part 
> patches just to show which bits are by hand and which are auto-generated.

Yes, I will do that.

Let me ask for some hints about the warnings, as I'm not sure how to 'fix'
them; we have:

  a) __HAVE_ARCH_STRNCASECMP
 __HAVE_ARCH_PTE_SPECIAL

  b) Optimized asm/rwsem.h

  c) !ARCH_USES_GETTIMEOFFSET  

For (c), I see two options:

  1. replace that with 'ARCH_USES_GETTIMEOFFSET' (and update the status
 matrix accordingly)

  2. add logic/code to the script to handle simple boolean expressions
     (mmh, this could get nasty really soon... let's say, limiting it to
     a leading '!' to start with ;)

For (a), I realize that 'grep-ing' for the macros in arch-specific _sources_
does serve the purpose of producing the hard-coded status matrices; but is
this a reasonable approach? (e.g., could it produce 'false positives'?)

What would be a suitable solution for (b)? Are there Kconfig options which
I could use in place of that expression? Any other suggestions?

Thanks,
  Andrea


> 
> Thanks,
> 
>   Ingo


[PATCH v2 7/9] trace_uprobe/sdt: Fix multiple update of same reference counter

2018-04-04 Thread Ravi Bangoria
When the virtual memory map for a binary/library is being prepared, there is
no direct one-to-one mapping between an mmap() call and a virtual memory
area. For example, when the loader loads a library, it first calls
mmap(size = total_size), where total_size is the sum of the sizes of all ELF
sections that are going to be mapped. Then it splits out individual vmas
with further mmap()/mprotect() calls. The loader does this to ensure it gets
a contiguous address range for the library. load_elf_binary() uses similar
tricks while preparing the mappings of a binary.

For example, for the python library:

  # strace -o out python
mmap(NULL, 2738968, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 
0x7fff9246
mmap(0x7fff926a, 327680, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x23) = 0x7fff926a
mprotect(0x7fff926a, 65536, PROT_READ) = 0

Here, the first mmap() maps the whole library into one region. The second
mmap() and the third mprotect() split that region into smaller vmas and set
the appropriate protection flags.
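
Schematically (assuming the mailer-wrapped addresses above end in 0000):

	0x7fff92460000  r-xp  whole library (first mmap())
	0x7fff926a0000  r--p  first 64kB of data (third mprotect())
	0x7fff926b0000  rw-p  remaining data (second mmap())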

Now, in this case, trace_uprobe_mmap_callback() updates the reference
counter twice -- once for the second mmap() call and once for the third
mprotect() call -- because both regions contain the reference counter.

But during de-registration, the reference counter gets decremented only
once, leaving the reference counter > 0 even when no one is tracing that
marker.

Example with the python library before the patch:

# readelf -n /lib64/libpython2.7.so.1.0 | grep -A1 function__entry
  Name: function__entry
  ... Semaphore: 0x002899d8

  Probe on a marker:
# echo "p:sdt_python/function__entry 
/usr/lib64/libpython2.7.so.1.0:0x16a4d4(0x2799d8)" > uprobe_events

  Start tracing:
# perf record -e sdt_python:function__entry -a

  Run python workload:
# python
# cat /proc/`pgrep python`/maps | grep libpython
  7fffadb0-7fffadd4 r-xp  08:05 403934  
/usr/lib64/libpython2.7.so.1.0
  7fffadd4-7fffadd5 r--p 0023 08:05 403934  
/usr/lib64/libpython2.7.so.1.0
  7fffadd5-7fffadd9 rw-p 0024 08:05 403934  
/usr/lib64/libpython2.7.so.1.0

  Reference counter value has been incremented twice:
# dd if=/proc/`pgrep python`/mem bs=1 count=1 skip=$(( 0x7fffadd899d8 )) 
2>/dev/null | xxd
  000: 02   .

  Kill perf:
#
  ^C[ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 0.322 MB perf.data (1273 samples) ]

  Reference counter is still 1 even when no one is tracing on it:
# dd if=/proc/`pgrep python`/mem bs=1 count=1 skip=$(( 0x7fffadd899d8 )) 
2>/dev/null | xxd
  000: 01   .
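
(For reference, the skip= value above follows the same arithmetic the kernel
uses in offset_to_vaddr(). Assuming the wrapped maps addresses end in 0000,
the rw-p vma starts at 0x7fffadd50000 and maps file offset 0x240000, so the
counter's file offset 0x2799d8 lands at
0x7fffadd50000 + (0x2799d8 - 0x240000) = 0x7fffadd899d8.)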

Ensure that increment and decrement happen in sync by keeping a list of mms
in trace_uprobe. Check for the presence of the mm in the list before
incrementing the reference counter, i.e. for each {trace_uprobe, mm} tuple
the reference counter must be incremented only once (see the sketch after
the case list below). Note that we don't check the presence of the mm in
the list at decrement time.

We consider only two cases while incrementing the reference counter:
  1. The target binary is already running when we start tracing. In this
     case, find all mms which map the region of the target binary
     containing the reference counter. Loop over all mms and increment the
     counter if the mm is not already present in the list.
  2. The tracer is already tracing before the target binary starts
     execution. In this case, every mmap(vma) gets notified to
     trace_uprobe, which updates the reference counter if vma->vm_mm is
     not already present in the list.

  There is also a third case which we don't consider: fork(). When a
  process with markers forks itself, we don't explicitly increment the
  reference counter in the child process because that is taken care of by
  dup_mmap(). We also don't add the child mm to the list. This is fine
  because we don't check the presence of the mm in the list at decrement
  time.
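
As promised above, a minimal sketch of the increment-once check, assuming
the field names visible in this series (tu->sml, sml->mm, sml->list); the
sml_lock locking mentioned in the v2 notes is elided, and the helper name
is hypothetical:

	/* Sketch only: has this mm already been counted for tu? */
	static bool sdt_mm_already_counted(struct trace_uprobe *tu,
					   struct mm_struct *mm)
	{
		struct sdt_mm_list *sml;

		list_for_each_entry(sml, &tu->sml.list, list)
			if (sml->mm == mm)
				return true;
		return false;
	}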

After patch:

  Start perf record and then run python...
  Reference counter value has been incremented only once:
# dd if=/proc/`pgrep python`/mem bs=1 count=1 skip=$(( 0x7fff9cbf99d8 )) 
2>/dev/null | xxd
  000: 01   .

  Kill perf:
#
  ^C[ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 0.364 MB perf.data (1427 samples) ]

  Reference counter is reset to 0:
# dd if=/proc/`pgrep python`/mem bs=1 count=1 skip=$(( 0x7fff9cbb99d8 )) 
2>/dev/null | xxd
  000: 00   .

Signed-off-by: Ravi Bangoria 
---
 kernel/trace/trace_uprobe.c | 105 ++--
 1 file changed, 102 insertions(+), 3 deletions(-)

diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index 5582c2d..c045174 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -27,6 +27,7 @@
 #include 
 #include 
 #include 
+#include <linux/mmu_notifier.h>
 
 #include 

[PATCH v2 8/9] trace_uprobe/sdt: Document about reference counter

2018-04-04 Thread Ravi Bangoria
The reference counter gates the invocation of the probe. If present, its
default value is 0. The kernel needs to increment it before tracing the
probe and decrement it when done. This is identical to the semaphore in
Userspace Statically Defined Tracepoints (USDT).

Document the usage of the reference counter.
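
For example (the offsets here are hypothetical), a probe on an SDT marker
with a reference counter could then be set up as:

  # echo "p:sdt_tick/loop2 /tmp/tick:0x6e4(0x10036)" > uprobe_events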

Signed-off-by: Ravi Bangoria 
---
 Documentation/trace/uprobetracer.txt | 16 +---
 kernel/trace/trace.c |  2 +-
 2 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/Documentation/trace/uprobetracer.txt 
b/Documentation/trace/uprobetracer.txt
index bf526a7c..cb6751d 100644
--- a/Documentation/trace/uprobetracer.txt
+++ b/Documentation/trace/uprobetracer.txt
@@ -19,15 +19,25 @@ user to calculate the offset of the probepoint in the 
object.
 
 Synopsis of uprobe_tracer
 -
-  p[:[GRP/]EVENT] PATH:OFFSET [FETCHARGS] : Set a uprobe
-  r[:[GRP/]EVENT] PATH:OFFSET [FETCHARGS] : Set a return uprobe (uretprobe)
-  -:[GRP/]EVENT   : Clear uprobe or uretprobe event
+  p[:[GRP/]EVENT] PATH:OFFSET[(REF_CTR_OFFSET)] [FETCHARGS]
+  r[:[GRP/]EVENT] PATH:OFFSET[(REF_CTR_OFFSET)] [FETCHARGS]
+  -:[GRP/]EVENT
+
+  p : Set a uprobe
+  r : Set a return uprobe (uretprobe)
+  - : Clear uprobe or uretprobe event
 
   GRP   : Group name. If omitted, "uprobes" is the default value.
   EVENT : Event name. If omitted, the event name is generated based
   on PATH+OFFSET.
   PATH  : Path to an executable or a library.
   OFFSET: Offset where the probe is inserted.
+  REF_CTR_OFFSET: Reference counter offset. Optional field. The reference
+ counter gates the invocation of the probe. If present, by
+ default the reference count is 0. The kernel needs to
+ increment it before tracing the probe and decrement it
+ when done. This is identical to the semaphore in Userspace
+ Statically Defined Tracepoints (USDT).
 
   FETCHARGS : Arguments. Each probe can have up to 128 args.
%REG : Fetch register REG
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 300f4ea..d211937 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -4604,7 +4604,7 @@ static int tracing_trace_options_open(struct inode 
*inode, struct file *file)
   "place (kretprobe): [:][+]|\n"
 #endif
 #ifdef CONFIG_UPROBE_EVENTS
-   "\tplace: :\n"
+  "   place (uprobe): :[(ref_ctr_offset)]\n"
 #endif
"\t args: =fetcharg[:type]\n"
"\t fetcharg: %, @, @[+|-],\n"
-- 
1.8.3.1



[PATCH v2 9/9] perf probe: Support SDT markers having reference counter (semaphore)

2018-04-04 Thread Ravi Bangoria
With this, perf buildid-cache will save SDT markers with a reference
counter in the probe cache, and perf probe will be able to probe markers
having a reference counter. For example:

  # readelf -n /tmp/tick | grep -A1 loop2
Name: loop2
... Semaphore: 0x10020036

  # ./perf buildid-cache --add /tmp/tick
  # ./perf probe sdt_tick:loop2
  # ./perf stat -e sdt_tick:loop2 /tmp/tick
hi: 0
hi: 1
hi: 2
^C
 Performance counter stats for '/tmp/tick':
 3  sdt_tick:loop2
   2.561851452 seconds time elapsed

Signed-off-by: Ravi Bangoria 
---
 tools/perf/util/probe-event.c | 18 ++---
 tools/perf/util/probe-event.h |  1 +
 tools/perf/util/probe-file.c  | 34 ++--
 tools/perf/util/probe-file.h  |  1 +
 tools/perf/util/symbol-elf.c  | 46 ---
 tools/perf/util/symbol.h  |  7 +++
 6 files changed, 86 insertions(+), 21 deletions(-)

diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index e1dbc98..b3a1330 100644
--- a/tools/perf/util/probe-event.c
+++ b/tools/perf/util/probe-event.c
@@ -1832,6 +1832,12 @@ int parse_probe_trace_command(const char *cmd, struct 
probe_trace_event *tev)
tp->offset = strtoul(fmt2_str, NULL, 10);
}
 
+   if (tev->uprobes) {
+   fmt2_str = strchr(p, '(');
+   if (fmt2_str)
+   tp->ref_ctr_offset = strtoul(fmt2_str + 1, NULL, 0);
+   }
+
tev->nargs = argc - 2;
tev->args = zalloc(sizeof(struct probe_trace_arg) * tev->nargs);
if (tev->args == NULL) {
@@ -2054,15 +2060,21 @@ char *synthesize_probe_trace_command(struct 
probe_trace_event *tev)
}
 
/* Use the tp->address for uprobes */
-   if (tev->uprobes)
+   if (tev->uprobes) {
err = strbuf_addf(, "%s:0x%lx", tp->module, tp->address);
-   else if (!strncmp(tp->symbol, "0x", 2))
+   if (uprobe_ref_ctr_is_supported() &&
+   tp->ref_ctr_offset &&
+   err >= 0)
+   err = strbuf_addf(, "(0x%lx)", tp->ref_ctr_offset);
+   } else if (!strncmp(tp->symbol, "0x", 2)) {
/* Absolute address. See try_to_find_absolute_address() */
err = strbuf_addf(, "%s%s0x%lx", tp->module ?: "",
  tp->module ? ":" : "", tp->address);
-   else
+   } else {
err = strbuf_addf(, "%s%s%s+%lu", tp->module ?: "",
tp->module ? ":" : "", tp->symbol, tp->offset);
+   }
+
if (err)
goto error;
 
diff --git a/tools/perf/util/probe-event.h b/tools/perf/util/probe-event.h
index 45b14f0..15a98c3 100644
--- a/tools/perf/util/probe-event.h
+++ b/tools/perf/util/probe-event.h
@@ -27,6 +27,7 @@ struct probe_trace_point {
char*symbol;/* Base symbol */
char*module;/* Module name */
unsigned long   offset; /* Offset from symbol */
+   unsigned long   ref_ctr_offset; /* SDT reference counter offset */
unsigned long   address;/* Actual address of the trace point */
boolretprobe;   /* Return probe flag */
 };
diff --git a/tools/perf/util/probe-file.c b/tools/perf/util/probe-file.c
index 4ae1123..ca0e524 100644
--- a/tools/perf/util/probe-file.c
+++ b/tools/perf/util/probe-file.c
@@ -697,8 +697,16 @@ int probe_cache__add_entry(struct probe_cache *pcache,
 #ifdef HAVE_GELF_GETNOTE_SUPPORT
 static unsigned long long sdt_note__get_addr(struct sdt_note *note)
 {
-   return note->bit32 ? (unsigned long long)note->addr.a32[0]
-: (unsigned long long)note->addr.a64[0];
+   return note->bit32 ?
+   (unsigned long long)note->addr.a32[SDT_NOTE_IDX_LOC] :
+   (unsigned long long)note->addr.a64[SDT_NOTE_IDX_LOC];
+}
+
+static unsigned long long sdt_note__get_ref_ctr_offset(struct sdt_note *note)
+{
+   return note->bit32 ?
+   (unsigned long long)note->addr.a32[SDT_NOTE_IDX_REFCTR] :
+   (unsigned long long)note->addr.a64[SDT_NOTE_IDX_REFCTR];
 }
 
 static const char * const type_to_suffix[] = {
@@ -776,14 +784,21 @@ static char *synthesize_sdt_probe_command(struct sdt_note 
*note,
 {
struct strbuf buf;
char *ret = NULL, **args;
-   int i, args_count;
+   int i, args_count, err;
+   unsigned long long ref_ctr_offset;
 
if (strbuf_init(&buf, 32) < 0)
return NULL;
 
-   if (strbuf_addf(&buf, "p:%s/%s %s:0x%llx",
-   sdtgrp, note->name, pathname,
-   sdt_note__get_addr(note)) < 0)
+   err = strbuf_addf(&buf, "p:%s/%s %s:0x%llx",
+   sdtgrp, note->name, pathname,
+   sdt_note__get_addr(note));
+
+   ref_ctr_offset = 

[PATCH v2 0/9] trace_uprobe: Support SDT markers having reference count (semaphore)

2018-04-04 Thread Ravi Bangoria
Userspace Statically Defined Tracepoints[1] are dtrace-style markers
inside userspace applications. Applications like PostgreSQL, MySQL,
Pthread, Perl, Python, Java, Ruby, Node.js, libvirt, QEMU, glib etc.
have these markers embedded in them. These markers are added by developers
at important places in the code. Each marker source expands to a single
nop instruction in the compiled code, but there may be additional overhead
for computing the marker arguments, which expands to a couple of
instructions. When that overhead matters, its execution can be skipped by
a runtime if() condition when no one is tracing the marker:

if (reference_counter > 0) {
Execute marker instructions;
}   

The default value of the reference counter is 0. A tracer has to
increment the reference counter before tracing a marker and decrement it
when done with the tracing.

Currently, the perf tool has limited support for SDT markers, i.e. it
cannot trace markers surrounded by a reference counter. Also, it's not
easy to add reference counter logic in a userspace tool like perf, so the
basic idea of this patchset is to add the reference counter logic in the
trace_uprobe infrastructure. For example[2]:

  # cat tick.c
... 
for (i = 0; i < 100; i++) {
DTRACE_PROBE1(tick, loop1, i);
if (TICK_LOOP2_ENABLED()) {
DTRACE_PROBE1(tick, loop2, i); 
}
printf("hi: %d\n", i); 
sleep(1);
}   
... 

Here tick:loop1 is a marker without a reference counter, whereas
tick:loop2 is surrounded by a reference counter condition.

  # perf buildid-cache --add /tmp/tick
  # perf probe sdt_tick:loop1
  # perf probe sdt_tick:loop2

  # perf stat -e sdt_tick:loop1,sdt_tick:loop2 -- /tmp/tick
  hi: 0
  hi: 1
  hi: 2
  ^C
  Performance counter stats for '/tmp/tick':
 3  sdt_tick:loop1
 0  sdt_tick:loop2
 2.747086086 seconds time elapsed

perf failed to record data for tick:loop2. The same experiment with this
patch series:

  # ./perf buildid-cache --add /tmp/tick
  # ./perf probe sdt_tick:loop2
  # ./perf stat -e sdt_tick:loop2 /tmp/tick
hi: 0
hi: 1
hi: 2
^C  
 Performance counter stats for '/tmp/tick':
 3  sdt_tick:loop2
   2.561851452 seconds time elapsed


Note:
 - 'reference counter' is called a 'semaphore' in the original DTrace
   (or SystemTap, bcc and even ELF) documentation and code. But the
   term 'semaphore' is misleading in this context. It is just a counter
   used to hold the number of tracers tracing a marker; it is not really
   used for any synchronization. So we refer to it as a 'reference
   counter' in kernel / perf code.


v2 changes:
 - [PATCH v2 3/9] is new. build_map_info() has a side effect: one has
   to perform mmput() when done with the mm. Let free_map_info()
   take care of mmput() so that one does not need to worry about it.
 - [PATCH v2 6/9] sdt_update_ref_ctr(). No need to use memcpy().
   Reference counter can be directly updated using normal assignment.
 - [PATCH v2 6/9] Check that a valid vma is returned by sdt_find_vma()
   before incrementing / decrementing a reference counter.
 - [PATCH v2 6/9] Introduce utility functions for taking the write lock
   on dup_mmap_sem. Use these functions in trace_uprobe to avoid a race
   with fork() / dup_mmap().
 - [PATCH v2 6/9] Don't check the presence of the mm in tu->sml at
   decrement time. The purpose of maintaining the list is to ensure the
   increment happens only once for each {trace_uprobe, mm} tuple.
 - [PATCH v2 7/9] v1 was not removing the mm from tu->sml when a process
   exits while tracing is still on. This leads to a problem if the same
   address gets used by a new mm. Use mmu_notifier to remove such an mm
   from the list. This guarantees that every mm which has been added to
   tu->sml will be removed from the list, either when tracing ends or
   when the process goes away.
 - [PATCH v2 7/9] Patch description was misleading. Change it. Add a
   more generic python example.
 - [PATCH v2 7/9] Convert sml_rw_sem into mutex sml_lock.
 - [PATCH v2 7/9] Use the builtin linked list in sdt_mm_list instead of
   defining its own pointer chain.
 - Change the order of last two patches.
 - [PATCH v2 9/9] Check the availability of ref_ctr_offset support in the
   trace_uprobe infrastructure before using it. This ensures a newer perf
   tool will still work on older kernels which do not support trace_uprobe
   with a reference counter.
 - Other changes as suggested by Masami, Oleg and Steve.

v1 can be found at:
  https://lkml.org/lkml/2018/3/13/432

[1] https://sourceware.org/systemtap/wiki/UserSpaceProbeImplementation
[2] https://github.com/iovisor/bcc/issues/327#issuecomment-200576506
[3] https://lkml.org/lkml/2017/12/6/976


Oleg Nesterov (1):
  Uprobe: Move mmput() into free_map_info()

Ravi Bangoria (8):
  Uprobe: Export vaddr <-> offset conversion functions
  mm: Prefix vma_ to vaddr_to_offset() and offset_to_vaddr()
  Uprobe: Rename map_info to uprobe_map_info
  Uprobe: Export uprobe_map_info along with

[PATCH v2 4/9] Uprobe: Rename map_info to uprobe_map_info

2018-04-04 Thread Ravi Bangoria
map_info is a very generic name; rename it to uprobe_map_info.
Renaming will help when exporting this structure outside of the
file.

Also rename free_map_info() to uprobe_free_map_info() and
build_map_info() to uprobe_build_map_info().

Signed-off-by: Ravi Bangoria 
Reviewed-by: Jérôme Glisse 
---
 kernel/events/uprobes.c | 30 --
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 1d439c7..477dc42 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -695,28 +695,30 @@ static void delete_uprobe(struct uprobe *uprobe)
put_uprobe(uprobe);
 }
 
-struct map_info {
-   struct map_info *next;
+struct uprobe_map_info {
+   struct uprobe_map_info *next;
struct mm_struct *mm;
unsigned long vaddr;
 };
 
-static inline struct map_info *free_map_info(struct map_info *info)
+static inline struct uprobe_map_info *
+uprobe_free_map_info(struct uprobe_map_info *info)
 {
-   struct map_info *next = info->next;
+   struct uprobe_map_info *next = info->next;
mmput(info->mm);
kfree(info);
return next;
 }
 
-static struct map_info *
-build_map_info(struct address_space *mapping, loff_t offset, bool is_register)
+static struct uprobe_map_info *
+uprobe_build_map_info(struct address_space *mapping, loff_t offset,
+ bool is_register)
 {
unsigned long pgoff = offset >> PAGE_SHIFT;
struct vm_area_struct *vma;
-   struct map_info *curr = NULL;
-   struct map_info *prev = NULL;
-   struct map_info *info;
+   struct uprobe_map_info *curr = NULL;
+   struct uprobe_map_info *prev = NULL;
+   struct uprobe_map_info *info;
int more = 0;
 
  again:
@@ -730,7 +732,7 @@ static inline struct map_info *free_map_info(struct 
map_info *info)
 * Needs GFP_NOWAIT to avoid i_mmap_rwsem recursion 
through
 * reclaim. This is optimistic, no harm done if it 
fails.
 */
-   prev = kmalloc(sizeof(struct map_info),
+   prev = kmalloc(sizeof(struct uprobe_map_info),
GFP_NOWAIT | __GFP_NOMEMALLOC | 
__GFP_NOWARN);
if (prev)
prev->next = NULL;
@@ -763,7 +765,7 @@ static inline struct map_info *free_map_info(struct 
map_info *info)
}
 
do {
-   info = kmalloc(sizeof(struct map_info), GFP_KERNEL);
+   info = kmalloc(sizeof(struct uprobe_map_info), GFP_KERNEL);
if (!info) {
curr = ERR_PTR(-ENOMEM);
goto out;
@@ -786,11 +788,11 @@ static inline struct map_info *free_map_info(struct 
map_info *info)
 register_for_each_vma(struct uprobe *uprobe, struct uprobe_consumer *new)
 {
bool is_register = !!new;
-   struct map_info *info;
+   struct uprobe_map_info *info;
int err = 0;
 
percpu_down_write(&dup_mmap_sem);
-   info = build_map_info(uprobe->inode->i_mapping,
+   info = uprobe_build_map_info(uprobe->inode->i_mapping,
uprobe->offset, is_register);
if (IS_ERR(info)) {
err = PTR_ERR(info);
@@ -828,7 +830,7 @@ static inline struct map_info *free_map_info(struct 
map_info *info)
  unlock:
up_write(&mm->mmap_sem);
  free:
-   info = free_map_info(info);
+   info = uprobe_free_map_info(info);
}
  out:
percpu_up_write(&dup_mmap_sem);
-- 
1.8.3.1



[PATCH v2 3/9] Uprobe: Move mmput() into free_map_info()

2018-04-04 Thread Ravi Bangoria
From: Oleg Nesterov 

build_map_info() has a side effect: one needs to perform mmput()
when done with the mm. Add mmput() to free_map_info() so that the
user does not have to call it explicitly.
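
With mmput() folded in, a caller can release each node of the chain with
the pattern already used in register_for_each_vma() (a sketch; error
handling elided):

	/* Free one node and get the next; mmput() now happens inside. */
	while (info)
		info = free_map_info(info);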

Signed-off-by: Oleg Nesterov 
Signed-off-by: Ravi Bangoria 
---
 kernel/events/uprobes.c | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 535fd39..1d439c7 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -704,6 +704,7 @@ struct map_info {
 static inline struct map_info *free_map_info(struct map_info *info)
 {
struct map_info *next = info->next;
+   mmput(info->mm);
kfree(info);
return next;
 }
@@ -773,8 +774,11 @@ static inline struct map_info *free_map_info(struct 
map_info *info)
 
goto again;
  out:
-   while (prev)
-   prev = free_map_info(prev);
+   while (prev) {
+   info = prev;
+   prev = prev->next;
+   kfree(info);
+   }
return curr;
 }
 
@@ -824,7 +828,6 @@ static inline struct map_info *free_map_info(struct 
map_info *info)
  unlock:
up_write(&mm->mmap_sem);
  free:
-   mmput(mm);
info = free_map_info(info);
}
  out:
-- 
1.8.3.1



[PATCH v2 1/9] Uprobe: Export vaddr <-> offset conversion functions

2018-04-04 Thread Ravi Bangoria
These are generic functions which operate on a file offset and a
virtual address. Make these functions available outside of the uprobe
code so that others can use them as well.
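
As a quick illustration of the two helpers being exported, here is a
standalone userspace re-implementation (a sketch, not kernel code; the
sample vma values are made up):

	#include <stdio.h>

	#define PAGE_SHIFT 12

	/* Mirrors offset_to_vaddr(): file offset -> virtual address. */
	static unsigned long off_to_vaddr(unsigned long vm_start,
					  unsigned long vm_pgoff,
					  long long offset)
	{
		return vm_start + offset - (vm_pgoff << PAGE_SHIFT);
	}

	/* Mirrors vaddr_to_offset(): virtual address -> file offset. */
	static long long vaddr_to_off(unsigned long vm_start,
				      unsigned long vm_pgoff,
				      unsigned long vaddr)
	{
		return ((long long)vm_pgoff << PAGE_SHIFT) +
		       (vaddr - vm_start);
	}

	int main(void)
	{
		/* A vma mapping file page 1 (offset 0x1000) at 0x7f0000001000. */
		unsigned long va = off_to_vaddr(0x7f0000001000UL, 1, 0x1234);

		printf("vaddr  = 0x%lx\n", va);	/* 0x7f0000001234 */
		printf("offset = 0x%llx\n",	/* 0x1234 */
		       (unsigned long long)vaddr_to_off(0x7f0000001000UL, 1, va));
		return 0;
	}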

Signed-off-by: Ravi Bangoria 
Reviewed-by: Jérôme Glisse 
---
 include/linux/mm.h  | 12 
 kernel/events/uprobes.c | 10 --
 2 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ad06d42..95909f2 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2274,6 +2274,18 @@ struct vm_unmapped_area_info {
return unmapped_area(info);
 }
 
+static inline unsigned long
+offset_to_vaddr(struct vm_area_struct *vma, loff_t offset)
+{
+   return vma->vm_start + offset - ((loff_t)vma->vm_pgoff << PAGE_SHIFT);
+}
+
+static inline loff_t
+vaddr_to_offset(struct vm_area_struct *vma, unsigned long vaddr)
+{
+   return ((loff_t)vma->vm_pgoff << PAGE_SHIFT) + (vaddr - vma->vm_start);
+}
+
 /* truncate.c */
 extern void truncate_inode_pages(struct address_space *, loff_t);
 extern void truncate_inode_pages_range(struct address_space *,
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index ce6848e..bd6f230 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -130,16 +130,6 @@ static bool valid_vma(struct vm_area_struct *vma, bool 
is_register)
return vma->vm_file && (vma->vm_flags & flags) == VM_MAYEXEC;
 }
 
-static unsigned long offset_to_vaddr(struct vm_area_struct *vma, loff_t offset)
-{
-   return vma->vm_start + offset - ((loff_t)vma->vm_pgoff << PAGE_SHIFT);
-}
-
-static loff_t vaddr_to_offset(struct vm_area_struct *vma, unsigned long vaddr)
-{
-   return ((loff_t)vma->vm_pgoff << PAGE_SHIFT) + (vaddr - vma->vm_start);
-}
-
 /**
  * __replace_page - replace page in vma by new page.
  * based on replace_page in mm/ksm.c
-- 
1.8.3.1



[PATCH v2 6/9] trace_uprobe: Support SDT markers having reference count (semaphore)

2018-04-04 Thread Ravi Bangoria
Userspace Statically Defined Tracepoints[1] are dtrace-style markers
inside userspace applications. These markers are added by developers at
important places in the code. Each marker source expands to a single
nop instruction in the compiled code, but there may be additional
overhead for computing the marker arguments, which expands to a couple of
instructions. When that overhead matters, its execution can be omitted by
a runtime if() condition when no one is tracing the marker:

if (reference_counter > 0) {
Execute marker instructions;
}

The default value of the reference counter is 0. A tracer has to
increment the reference counter before tracing a marker and decrement it
when done with the tracing.

Implement the reference counter logic in trace_uprobe, leaving the core
uprobe infrastructure as is, except for one new callback from
uprobe_mmap() to trace_uprobe.

A trace_uprobe definition with a reference counter will now be:

  <path>:<offset>[(ref_ctr_offset)]

There are two different cases to handle while enabling the marker:
 1. Tracing an existing process. In this case, find all suitable
    processes and increment the reference counter in them.
 2. Enabling the trace before running the target binary. In this case,
    every mmap gets notified to trace_uprobe, and trace_uprobe
    increments the reference counter if the corresponding uprobe is
    enabled.

At the time of disabling probes, decrement the reference counter in all
existing target processes.
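
A minimal sketch of how the trace_uprobe side might arm the new hook when
the first reference-counted probe is enabled (the uprobe_mmap_callback
pointer is from the diff below; the helper name and its call sites are
assumptions):

	/* Sketch: arm/disarm the uprobe_mmap() -> trace_uprobe hook. */
	static void sdt_set_mmap_hook(bool enable)
	{
		if (enable)
			uprobe_mmap_callback = trace_uprobe_mmap;
		else
			uprobe_mmap_callback = NULL;
	}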

[1] https://sourceware.org/systemtap/wiki/UserSpaceProbeImplementation

Note: 'reference counter' is called a 'semaphore' in the original DTrace
(or SystemTap, bcc and even ELF) documentation and code. But the
term 'semaphore' is misleading in this context. It is just a counter
used to hold the number of tracers tracing a marker; it is not really
used for any synchronization. So we refer to it as a 'reference
counter' in kernel / perf code.

Signed-off-by: Ravi Bangoria 
Signed-off-by: Fengguang Wu 
[Fengguang reported/fixed build failure in RFC patch]
---
 include/linux/uprobes.h |  10 +++
 kernel/events/uprobes.c |  16 +
 kernel/trace/trace_uprobe.c | 162 +++-
 3 files changed, 186 insertions(+), 2 deletions(-)

diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 7bd2760..2db3ed1 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -122,6 +122,8 @@ struct uprobe_map_info {
unsigned long vaddr;
 };
 
+extern void (*uprobe_mmap_callback)(struct vm_area_struct *vma);
+
 extern int set_swbp(struct arch_uprobe *aup, struct mm_struct *mm, unsigned 
long vaddr);
 extern int set_orig_insn(struct arch_uprobe *aup, struct mm_struct *mm, 
unsigned long vaddr);
 extern bool is_swbp_insn(uprobe_opcode_t *insn);
@@ -136,6 +138,8 @@ struct uprobe_map_info {
 extern void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, 
unsigned long end);
 extern void uprobe_start_dup_mmap(void);
 extern void uprobe_end_dup_mmap(void);
+extern void uprobe_down_write_dup_mmap(void);
+extern void uprobe_up_write_dup_mmap(void);
 extern void uprobe_dup_mmap(struct mm_struct *oldmm, struct mm_struct *newmm);
 extern void uprobe_free_utask(struct task_struct *t);
 extern void uprobe_copy_process(struct task_struct *t, unsigned long flags);
@@ -192,6 +196,12 @@ static inline void uprobe_start_dup_mmap(void)
 static inline void uprobe_end_dup_mmap(void)
 {
 }
+static inline void uprobe_down_write_dup_mmap(void)
+{
+}
+static inline void uprobe_up_write_dup_mmap(void)
+{
+}
 static inline void
 uprobe_dup_mmap(struct mm_struct *oldmm, struct mm_struct *newmm)
 {
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 096d1e6..c691334 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1044,6 +1044,9 @@ static void build_probe_list(struct inode *inode,
spin_unlock(&uprobes_treelock);
 }
 
+/* Right now the only user of this is trace_uprobe. */
+void (*uprobe_mmap_callback)(struct vm_area_struct *vma);
+
 /*
  * Called from mmap_region/vma_adjust with mm->mmap_sem acquired.
  *
@@ -1056,6 +1059,9 @@ int uprobe_mmap(struct vm_area_struct *vma)
struct uprobe *uprobe, *u;
struct inode *inode;
 
+   if (uprobe_mmap_callback)
+   uprobe_mmap_callback(vma);
+
if (no_uprobe_events() || !valid_vma(vma, true))
return 0;
 
@@ -1247,6 +1253,16 @@ void uprobe_end_dup_mmap(void)
percpu_up_read(&dup_mmap_sem);
 }
 
+void uprobe_down_write_dup_mmap(void)
+{
+   percpu_down_write(&dup_mmap_sem);
+}
+
+void uprobe_up_write_dup_mmap(void)
+{
+   percpu_up_write(&dup_mmap_sem);
+}
+
 void uprobe_dup_mmap(struct mm_struct *oldmm, struct mm_struct *newmm)
 {
if (test_bit(MMF_HAS_UPROBES, &oldmm->flags)) {
diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index 2014f43..5582c2d 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -25,6 +25,8 @@
 

[PATCH v2 2/9] mm: Prefix vma_ to vaddr_to_offset() and offset_to_vaddr()

2018-04-04 Thread Ravi Bangoria
Make the function names more meaningful by adding a vma_ prefix
to them.

Signed-off-by: Ravi Bangoria 
Reviewed-by: Jérôme Glisse 
---
 include/linux/mm.h  |  4 ++--
 kernel/events/uprobes.c | 14 +++---
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 95909f2..d7ee526 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2275,13 +2275,13 @@ struct vm_unmapped_area_info {
 }
 
 static inline unsigned long
-offset_to_vaddr(struct vm_area_struct *vma, loff_t offset)
+vma_offset_to_vaddr(struct vm_area_struct *vma, loff_t offset)
 {
return vma->vm_start + offset - ((loff_t)vma->vm_pgoff << PAGE_SHIFT);
 }
 
 static inline loff_t
-vaddr_to_offset(struct vm_area_struct *vma, unsigned long vaddr)
+vma_vaddr_to_offset(struct vm_area_struct *vma, unsigned long vaddr)
 {
return ((loff_t)vma->vm_pgoff << PAGE_SHIFT) + (vaddr - vma->vm_start);
 }
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index bd6f230..535fd39 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -748,7 +748,7 @@ static inline struct map_info *free_map_info(struct 
map_info *info)
curr = info;
 
info->mm = vma->vm_mm;
-   info->vaddr = offset_to_vaddr(vma, offset);
+   info->vaddr = vma_offset_to_vaddr(vma, offset);
}
i_mmap_unlock_read(mapping);
 
@@ -807,7 +807,7 @@ static inline struct map_info *free_map_info(struct 
map_info *info)
goto unlock;
 
if (vma->vm_start > info->vaddr ||
-   vaddr_to_offset(vma, info->vaddr) != uprobe->offset)
+   vma_vaddr_to_offset(vma, info->vaddr) != uprobe->offset)
goto unlock;
 
if (is_register) {
@@ -977,7 +977,7 @@ static int unapply_uprobe(struct uprobe *uprobe, struct 
mm_struct *mm)
uprobe->offset >= offset + vma->vm_end - vma->vm_start)
continue;
 
-   vaddr = offset_to_vaddr(vma, uprobe->offset);
+   vaddr = vma_offset_to_vaddr(vma, uprobe->offset);
err |= remove_breakpoint(uprobe, mm, vaddr);
}
up_read(>mmap_sem);
@@ -1023,7 +1023,7 @@ static void build_probe_list(struct inode *inode,
struct uprobe *u;
 
INIT_LIST_HEAD(head);
-   min = vaddr_to_offset(vma, start);
+   min = vma_vaddr_to_offset(vma, start);
max = min + (end - start) - 1;
 
spin_lock(&uprobes_treelock);
@@ -1076,7 +1076,7 @@ int uprobe_mmap(struct vm_area_struct *vma)
list_for_each_entry_safe(uprobe, u, &tmp_list, pending_list) {
if (!fatal_signal_pending(current) &&
filter_chain(uprobe, UPROBE_FILTER_MMAP, vma->vm_mm)) {
-   unsigned long vaddr = offset_to_vaddr(vma, 
uprobe->offset);
+   unsigned long vaddr = vma_offset_to_vaddr(vma, 
uprobe->offset);
install_breakpoint(uprobe, vma->vm_mm, vma, vaddr);
}
put_uprobe(uprobe);
@@ -1095,7 +1095,7 @@ int uprobe_mmap(struct vm_area_struct *vma)
 
inode = file_inode(vma->vm_file);
 
-   min = vaddr_to_offset(vma, start);
+   min = vma_vaddr_to_offset(vma, start);
max = min + (end - start) - 1;
 
spin_lock(&uprobes_treelock);
@@ -1730,7 +1730,7 @@ static struct uprobe *find_active_uprobe(unsigned long 
bp_vaddr, int *is_swbp)
if (vma && vma->vm_start <= bp_vaddr) {
if (valid_vma(vma, false)) {
struct inode *inode = file_inode(vma->vm_file);
-   loff_t offset = vaddr_to_offset(vma, bp_vaddr);
+   loff_t offset = vma_vaddr_to_offset(vma, bp_vaddr);
 
uprobe = find_uprobe(inode, offset);
}
-- 
1.8.3.1
