Re: [PATCH] KVM: PPC: Book3S HV: Tracepoints for KVM HV guest interactions

2014-12-17 Thread Alexander Graf


On 04.12.14 01:48, Suresh E. Warrier wrote:
> This patch adds trace points in the guest entry and exit code and also
> for exceptions handled by the host in kernel mode - hypercalls and page
> faults. The new events are added to /sys/kernel/debug/tracing/events
> under a new subsystem called kvm_hv.
> 
> Acked-by: Paul Mackerras 
> Signed-off-by: Suresh Warrier 

Thanks, applied to kvm-ppc-queue.
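For anyone who wants to try the new events once a kernel carrying this patch is running, they should be controllable through tracefs in the usual way. This is a usage sketch, not part of the patch; paths assume debugfs is mounted at /sys/kernel/debug and you are root:

```shell
# Enable all events in the new kvm_hv subsystem and watch them stream
cd /sys/kernel/debug/tracing
echo 1 > events/kvm_hv/enable
cat trace_pipe     # guest entry/exit, hcall and page-fault events appear here
echo 0 > events/kvm_hv/enable
```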


Alex
--
To unsubscribe from this list: send the line "unsubscribe kvm-ppc" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] KVM: PPC: Book3S HV: Tracepoints for KVM HV guest interactions

2014-12-01 Thread Suresh E. Warrier


On 11/20/2014 08:01 AM, Steven Rostedt wrote:
> On Thu, 20 Nov 2014 13:10:12 +0100
> Alexander Graf  wrote:
> 
>>
>>
>> On 20.11.14 11:40, Aneesh Kumar K.V wrote:
>>> "Suresh E. Warrier"  writes:
>>>
 This patch adds trace points in the guest entry and exit code and also
 for exceptions handled by the host in kernel mode - hypercalls and page
 faults. The new events are added to /sys/kernel/debug/tracing/events
 under a new subsystem called kvm_hv.
>>>
>>> 
>>>
/* Set this explicitly in case thread 0 doesn't have a vcpu */
 @@ -1687,6 +1691,9 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
  
vc->vcore_state = VCORE_RUNNING;
preempt_disable();
 +
 +  trace_kvmppc_run_core(vc, 0);
 +
spin_unlock(&vc->lock);
>>>
>>> Do we really want to call a tracepoint with a spin lock held? Is that a
>>> good thing to do?
>>
>> I thought it was safe to call tracepoints inside of spin lock regions?
>> Steve?
>>
> 
> There's tracepoints in the guts of the scheduler where rq lock is held.
> Don't worry about it. The tracing system is lockless.
> 

Thanks for confirming.

-suresh
 
> -- Steve
> 



Re: [PATCH] KVM: PPC: Book3S HV: Tracepoints for KVM HV guest interactions

2014-11-20 Thread Steven Rostedt
On Thu, 20 Nov 2014 13:10:12 +0100
Alexander Graf  wrote:

> 
> 
> On 20.11.14 11:40, Aneesh Kumar K.V wrote:
> > "Suresh E. Warrier"  writes:
> > 
> >> This patch adds trace points in the guest entry and exit code and also
> >> for exceptions handled by the host in kernel mode - hypercalls and page
> >> faults. The new events are added to /sys/kernel/debug/tracing/events
> >> under a new subsystem called kvm_hv.
> > 
> > 
> > 
> >>/* Set this explicitly in case thread 0 doesn't have a vcpu */
> >> @@ -1687,6 +1691,9 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
> >>  
> >>vc->vcore_state = VCORE_RUNNING;
> >>preempt_disable();
> >> +
> >> +  trace_kvmppc_run_core(vc, 0);
> >> +
> >>spin_unlock(&vc->lock);
> > 
> > Do we really want to call a tracepoint with a spin lock held? Is that a
> > good thing to do?
> 
> I thought it was safe to call tracepoints inside of spin lock regions?
> Steve?
> 

There's tracepoints in the guts of the scheduler where rq lock is held.
Don't worry about it. The tracing system is lockless.

-- Steve


Re: [PATCH] KVM: PPC: Book3S HV: Tracepoints for KVM HV guest interactions

2014-11-20 Thread Alexander Graf


On 20.11.14 11:40, Aneesh Kumar K.V wrote:
> "Suresh E. Warrier"  writes:
> 
>> This patch adds trace points in the guest entry and exit code and also
>> for exceptions handled by the host in kernel mode - hypercalls and page
>> faults. The new events are added to /sys/kernel/debug/tracing/events
>> under a new subsystem called kvm_hv.
> 
> 
> 
>>  /* Set this explicitly in case thread 0 doesn't have a vcpu */
>> @@ -1687,6 +1691,9 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
>>  
>>  vc->vcore_state = VCORE_RUNNING;
>>  preempt_disable();
>> +
>> +trace_kvmppc_run_core(vc, 0);
>> +
>>  spin_unlock(&vc->lock);
> 
> Do we really want to call a tracepoint with a spin lock held? Is that a
> good thing to do?

I thought it was safe to call tracepoints inside of spin lock regions?
Steve?


Alex


Re: [PATCH] KVM: PPC: Book3S HV: Tracepoints for KVM HV guest interactions

2014-11-20 Thread Alexander Graf


On 19.11.14 22:54, Suresh E. Warrier wrote:
> 
> 
> On 11/14/2014 04:56 AM, Alexander Graf wrote:
>>
>>
>>
>>> Am 14.11.2014 um 00:29 schrieb Suresh E. Warrier 
>>> :
>>>
>>> This patch adds trace points in the guest entry and exit code and also
>>> for exceptions handled by the host in kernel mode - hypercalls and page
>>> faults. The new events are added to /sys/kernel/debug/tracing/events
>>> under a new subsystem called kvm_hv.
>>>
>>> Acked-by: Paul Mackerras 
>>> Signed-off-by: Suresh Warrier 
>>> ---
>>> arch/powerpc/kvm/book3s_64_mmu_hv.c |  12 +-
>>> arch/powerpc/kvm/book3s_hv.c|  19 ++
>>> arch/powerpc/kvm/trace_hv.h | 497 
>>> 
>>> 3 files changed, 525 insertions(+), 3 deletions(-)
>>> create mode 100644 arch/powerpc/kvm/trace_hv.h
>>>
>>> diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c 
>>> b/arch/powerpc/kvm/book3s_64_mmu_hv.c
>>> index 70feb7b..20cbad1 100644
>>> --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
>>> +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
>>> @@ -38,6 +38,7 @@
>>> #include 
>>>
>>> #include "book3s_hv_cma.h"
>>> +#include "trace_hv.h"
>>>
>>> /* POWER7 has 10-bit LPIDs, PPC970 has 6-bit LPIDs */
>>> #define MAX_LPID_970	63
>>> @@ -627,6 +628,8 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, 
>>> struct kvm_vcpu *vcpu,
>>>gfn = gpa >> PAGE_SHIFT;
>>>memslot = gfn_to_memslot(kvm, gfn);
>>>
>>> +trace_kvm_page_fault_enter(vcpu, hpte, memslot, ea, dsisr);
>>> +
>>>/* No memslot means it's an emulated MMIO region */
>>>if (!memslot || (memslot->flags & KVM_MEMSLOT_INVALID))
>>>return kvmppc_hv_emulate_mmio(run, vcpu, gpa, ea,
>>> @@ -639,6 +642,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, 
>>> struct kvm_vcpu *vcpu,
>>>mmu_seq = kvm->mmu_notifier_seq;
>>>smp_rmb();
>>>
>>> +ret = -EFAULT;
>>>is_io = 0;
>>>pfn = 0;
>>>page = NULL;
>>> @@ -662,7 +666,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, 
>>> struct kvm_vcpu *vcpu,
>>>}
>>>up_read(&current->mm->mmap_sem);
>>>if (!pfn)
>>> -return -EFAULT;
>>> +goto out_put;
>>>} else {
>>>page = pages[0];
>>>if (PageHuge(page)) {
>>> @@ -690,14 +694,14 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, 
>>> struct kvm_vcpu *vcpu,
>>>pfn = page_to_pfn(page);
>>>}
>>>
>>> -ret = -EFAULT;
>>>if (psize > pte_size)
>>>goto out_put;
>>>
>>>/* Check WIMG vs. the actual page we're accessing */
>>>if (!hpte_cache_flags_ok(r, is_io)) {
>>>if (is_io)
>>> -return -EFAULT;
>>> +goto out_put;
>>> +
>>>/*
>>> * Allow guest to map emulated device memory as
>>> * uncacheable, but actually make it cacheable.
>>> @@ -753,6 +757,8 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, 
>>> struct kvm_vcpu *vcpu,
>>>SetPageDirty(page);
>>>
>>>  out_put:
>>> +trace_kvm_page_fault_exit(vcpu, hpte, ret);
>>> +
>>>if (page) {
>>>/*
>>> * We drop pages[0] here, not page because page might
>>> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
>>> index 69d4085..5143d17 100644
>>> --- a/arch/powerpc/kvm/book3s_hv.c
>>> +++ b/arch/powerpc/kvm/book3s_hv.c
>>> @@ -57,6 +57,9 @@
>>>
>>> #include "book3s.h"
>>>
>>> +#define CREATE_TRACE_POINTS
>>> +#include "trace_hv.h"
>>> +
>>> /* #define EXIT_DEBUG */
>>> /* #define EXIT_DEBUG_SIMPLE */
>>> /* #define EXIT_DEBUG_INT */
>>> @@ -1679,6 +1682,7 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
>>>list_for_each_entry(vcpu, &vc->runnable_threads, arch.run_list) {
>>>kvmppc_start_thread(vcpu);
>>>kvmppc_create_dtl_entry(vcpu, vc);
>>> +trace_kvm_guest_enter(vcpu);
>>>}
>>>
>>>/* Set this explicitly in case thread 0 doesn't have a vcpu */
>>> @@ -1687,6 +1691,9 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
>>>
>>>vc->vcore_state = VCORE_RUNNING;
>>>preempt_disable();
>>> +
>>> +trace_kvmppc_run_core(vc, 0);
>>> +
>>>spin_unlock(&vc->lock);
>>>
>>>kvm_guest_enter();
>>> @@ -1732,6 +1739,8 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
>>>kvmppc_core_pending_dec(vcpu))
>>>kvmppc_core_dequeue_dec(vcpu);
>>>
>>> +trace_kvm_guest_exit(vcpu);
>>> +
>>>ret = RESUME_GUEST;
>>>if (vcpu->arch.trap)
>>>ret = kvmppc_handle_exit_hv(vcpu->arch.kvm_run, vcpu,
>>> @@ -1757,6 +1766,8 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
>>>wake_up(&vcpu->arch.cpu_run);
>>>}
>>>}
>>> +
>>> +trace_kvmppc_run_core(vc, 1);
>>> }
>>>
>>> /*
>>> @@ -1783,11 +1794,13 @@ static void kvmppc_vcore_blocked(struct 
>>> kvmppc_vcore *vc)
>>>
>>>prepare_to_wait(&vc->wq, &wait, TASK_INTERRUPTIBLE);
>>>vc->vcore_state = VCORE_SLEEPING;
>>> +trace_kvmppc_vcore_blocked(vc, 0);
>>>spin_unlock(&vc->lock);
>>>schedule();
>>>finish_wait(&vc->wq, &wait);

Re: [PATCH] KVM: PPC: Book3S HV: Tracepoints for KVM HV guest interactions

2014-11-20 Thread Aneesh Kumar K.V
"Suresh E. Warrier"  writes:

> This patch adds trace points in the guest entry and exit code and also
> for exceptions handled by the host in kernel mode - hypercalls and page
> faults. The new events are added to /sys/kernel/debug/tracing/events
> under a new subsystem called kvm_hv.



>   /* Set this explicitly in case thread 0 doesn't have a vcpu */
> @@ -1687,6 +1691,9 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
>  
>   vc->vcore_state = VCORE_RUNNING;
>   preempt_disable();
> +
> + trace_kvmppc_run_core(vc, 0);
> +
>   spin_unlock(&vc->lock);

Do we really want to call a tracepoint with a spin lock held? Is that a
good thing to do?

-aneesh



Re: [PATCH] KVM: PPC: Book3S HV: Tracepoints for KVM HV guest interactions

2014-11-19 Thread Suresh E. Warrier


On 11/14/2014 04:56 AM, Alexander Graf wrote:
> 
> 
> 
>> Am 14.11.2014 um 00:29 schrieb Suresh E. Warrier 
>> :
>>
>> This patch adds trace points in the guest entry and exit code and also
>> for exceptions handled by the host in kernel mode - hypercalls and page
>> faults. The new events are added to /sys/kernel/debug/tracing/events
>> under a new subsystem called kvm_hv.
>>
>> Acked-by: Paul Mackerras 
>> Signed-off-by: Suresh Warrier 
>> ---
>> arch/powerpc/kvm/book3s_64_mmu_hv.c |  12 +-
>> arch/powerpc/kvm/book3s_hv.c|  19 ++
>> arch/powerpc/kvm/trace_hv.h | 497 
>> 
>> 3 files changed, 525 insertions(+), 3 deletions(-)
>> create mode 100644 arch/powerpc/kvm/trace_hv.h
>>
>> diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c 
>> b/arch/powerpc/kvm/book3s_64_mmu_hv.c
>> index 70feb7b..20cbad1 100644
>> --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
>> +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
>> @@ -38,6 +38,7 @@
>> #include 
>>
>> #include "book3s_hv_cma.h"
>> +#include "trace_hv.h"
>>
>> /* POWER7 has 10-bit LPIDs, PPC970 has 6-bit LPIDs */
>> #define MAX_LPID_970	63
>> @@ -627,6 +628,8 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, 
>> struct kvm_vcpu *vcpu,
>>gfn = gpa >> PAGE_SHIFT;
>>memslot = gfn_to_memslot(kvm, gfn);
>>
>> +trace_kvm_page_fault_enter(vcpu, hpte, memslot, ea, dsisr);
>> +
>>/* No memslot means it's an emulated MMIO region */
>>if (!memslot || (memslot->flags & KVM_MEMSLOT_INVALID))
>>return kvmppc_hv_emulate_mmio(run, vcpu, gpa, ea,
>> @@ -639,6 +642,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, 
>> struct kvm_vcpu *vcpu,
>>mmu_seq = kvm->mmu_notifier_seq;
>>smp_rmb();
>>
>> +ret = -EFAULT;
>>is_io = 0;
>>pfn = 0;
>>page = NULL;
>> @@ -662,7 +666,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, 
>> struct kvm_vcpu *vcpu,
>>}
>>up_read(&current->mm->mmap_sem);
>>if (!pfn)
>> -return -EFAULT;
>> +goto out_put;
>>} else {
>>page = pages[0];
>>if (PageHuge(page)) {
>> @@ -690,14 +694,14 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, 
>> struct kvm_vcpu *vcpu,
>>pfn = page_to_pfn(page);
>>}
>>
>> -ret = -EFAULT;
>>if (psize > pte_size)
>>goto out_put;
>>
>>/* Check WIMG vs. the actual page we're accessing */
>>if (!hpte_cache_flags_ok(r, is_io)) {
>>if (is_io)
>> -return -EFAULT;
>> +goto out_put;
>> +
>>/*
>> * Allow guest to map emulated device memory as
>> * uncacheable, but actually make it cacheable.
>> @@ -753,6 +757,8 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, 
>> struct kvm_vcpu *vcpu,
>>SetPageDirty(page);
>>
>>  out_put:
>> +trace_kvm_page_fault_exit(vcpu, hpte, ret);
>> +
>>if (page) {
>>/*
>> * We drop pages[0] here, not page because page might
>> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
>> index 69d4085..5143d17 100644
>> --- a/arch/powerpc/kvm/book3s_hv.c
>> +++ b/arch/powerpc/kvm/book3s_hv.c
>> @@ -57,6 +57,9 @@
>>
>> #include "book3s.h"
>>
>> +#define CREATE_TRACE_POINTS
>> +#include "trace_hv.h"
>> +
>> /* #define EXIT_DEBUG */
>> /* #define EXIT_DEBUG_SIMPLE */
>> /* #define EXIT_DEBUG_INT */
>> @@ -1679,6 +1682,7 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
>>list_for_each_entry(vcpu, &vc->runnable_threads, arch.run_list) {
>>kvmppc_start_thread(vcpu);
>>kvmppc_create_dtl_entry(vcpu, vc);
>> +trace_kvm_guest_enter(vcpu);
>>}
>>
>>/* Set this explicitly in case thread 0 doesn't have a vcpu */
>> @@ -1687,6 +1691,9 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
>>
>>vc->vcore_state = VCORE_RUNNING;
>>preempt_disable();
>> +
>> +trace_kvmppc_run_core(vc, 0);
>> +
>>spin_unlock(&vc->lock);
>>
>>kvm_guest_enter();
>> @@ -1732,6 +1739,8 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
>>kvmppc_core_pending_dec(vcpu))
>>kvmppc_core_dequeue_dec(vcpu);
>>
>> +trace_kvm_guest_exit(vcpu);
>> +
>>ret = RESUME_GUEST;
>>if (vcpu->arch.trap)
>>ret = kvmppc_handle_exit_hv(vcpu->arch.kvm_run, vcpu,
>> @@ -1757,6 +1766,8 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
>>wake_up(&vcpu->arch.cpu_run);
>>}
>>}
>> +
>> +trace_kvmppc_run_core(vc, 1);
>> }
>>
>> /*
>> @@ -1783,11 +1794,13 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore 
>> *vc)
>>
>>prepare_to_wait(&vc->wq, &wait, TASK_INTERRUPTIBLE);
>>vc->vcore_state = VCORE_SLEEPING;
>> +trace_kvmppc_vcore_blocked(vc, 0);
>>spin_unlock(&vc->lock);
>>schedule();
>>finish_wait(&vc->wq, &wait);
>>spin_lock(&vc->lock);
>>vc->vcore_state = VCORE_INACTIVE;
>> +trace_kvmppc_vcore_blocked(vc, 1);
>> }
>>
>> static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)

Re: [PATCH] KVM: PPC: Book3S HV: Tracepoints for KVM HV guest interactions

2014-11-14 Thread Alexander Graf



> Am 14.11.2014 um 00:29 schrieb Suresh E. Warrier :
> 
> This patch adds trace points in the guest entry and exit code and also
> for exceptions handled by the host in kernel mode - hypercalls and page
> faults. The new events are added to /sys/kernel/debug/tracing/events
> under a new subsystem called kvm_hv.
> 
> Acked-by: Paul Mackerras 
> Signed-off-by: Suresh Warrier 
> ---
> arch/powerpc/kvm/book3s_64_mmu_hv.c |  12 +-
> arch/powerpc/kvm/book3s_hv.c|  19 ++
> arch/powerpc/kvm/trace_hv.h | 497 
> 3 files changed, 525 insertions(+), 3 deletions(-)
> create mode 100644 arch/powerpc/kvm/trace_hv.h
> 
> diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c 
> b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> index 70feb7b..20cbad1 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> @@ -38,6 +38,7 @@
> #include 
> 
> #include "book3s_hv_cma.h"
> +#include "trace_hv.h"
> 
> /* POWER7 has 10-bit LPIDs, PPC970 has 6-bit LPIDs */
> #define MAX_LPID_970	63
> @@ -627,6 +628,8 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, 
> struct kvm_vcpu *vcpu,
>gfn = gpa >> PAGE_SHIFT;
>memslot = gfn_to_memslot(kvm, gfn);
> 
> +trace_kvm_page_fault_enter(vcpu, hpte, memslot, ea, dsisr);
> +
>/* No memslot means it's an emulated MMIO region */
>if (!memslot || (memslot->flags & KVM_MEMSLOT_INVALID))
>return kvmppc_hv_emulate_mmio(run, vcpu, gpa, ea,
> @@ -639,6 +642,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, 
> struct kvm_vcpu *vcpu,
>mmu_seq = kvm->mmu_notifier_seq;
>smp_rmb();
> 
> +ret = -EFAULT;
>is_io = 0;
>pfn = 0;
>page = NULL;
> @@ -662,7 +666,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, 
> struct kvm_vcpu *vcpu,
>}
>up_read(&current->mm->mmap_sem);
>if (!pfn)
> -return -EFAULT;
> +goto out_put;
>} else {
>page = pages[0];
>if (PageHuge(page)) {
> @@ -690,14 +694,14 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, 
> struct kvm_vcpu *vcpu,
>pfn = page_to_pfn(page);
>}
> 
> -ret = -EFAULT;
>if (psize > pte_size)
>goto out_put;
> 
>/* Check WIMG vs. the actual page we're accessing */
>if (!hpte_cache_flags_ok(r, is_io)) {
>if (is_io)
> -return -EFAULT;
> +goto out_put;
> +
>/*
> * Allow guest to map emulated device memory as
> * uncacheable, but actually make it cacheable.
> @@ -753,6 +757,8 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, 
> struct kvm_vcpu *vcpu,
>SetPageDirty(page);
> 
>  out_put:
> +trace_kvm_page_fault_exit(vcpu, hpte, ret);
> +
>if (page) {
>/*
> * We drop pages[0] here, not page because page might
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 69d4085..5143d17 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -57,6 +57,9 @@
> 
> #include "book3s.h"
> 
> +#define CREATE_TRACE_POINTS
> +#include "trace_hv.h"
> +
> /* #define EXIT_DEBUG */
> /* #define EXIT_DEBUG_SIMPLE */
> /* #define EXIT_DEBUG_INT */
> @@ -1679,6 +1682,7 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
>list_for_each_entry(vcpu, &vc->runnable_threads, arch.run_list) {
>kvmppc_start_thread(vcpu);
>kvmppc_create_dtl_entry(vcpu, vc);
> +trace_kvm_guest_enter(vcpu);
>}
> 
>/* Set this explicitly in case thread 0 doesn't have a vcpu */
> @@ -1687,6 +1691,9 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
> 
>vc->vcore_state = VCORE_RUNNING;
>preempt_disable();
> +
> +trace_kvmppc_run_core(vc, 0);
> +
>spin_unlock(&vc->lock);
> 
>kvm_guest_enter();
> @@ -1732,6 +1739,8 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
>kvmppc_core_pending_dec(vcpu))
>kvmppc_core_dequeue_dec(vcpu);
> 
> +trace_kvm_guest_exit(vcpu);
> +
>ret = RESUME_GUEST;
>if (vcpu->arch.trap)
>ret = kvmppc_handle_exit_hv(vcpu->arch.kvm_run, vcpu,
> @@ -1757,6 +1766,8 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
>wake_up(&vcpu->arch.cpu_run);
>}
>}
> +
> +trace_kvmppc_run_core(vc, 1);
> }
> 
> /*
> @@ -1783,11 +1794,13 @@ static void kvmppc_vcore_blocked(struct kvmppc_vcore 
> *vc)
> 
>prepare_to_wait(&vc->wq, &wait, TASK_INTERRUPTIBLE);
>vc->vcore_state = VCORE_SLEEPING;
> +trace_kvmppc_vcore_blocked(vc, 0);
>spin_unlock(&vc->lock);
>schedule();
>finish_wait(&vc->wq, &wait);
>spin_lock(&vc->lock);
>vc->vcore_state = VCORE_INACTIVE;
> +trace_kvmppc_vcore_blocked(vc, 1);
> }
> 
> static int kvmppc_run_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
> @@ -1796,6 +1809,8 @@ static int kvmppc_run_vcpu(struct kvm_run *kvm_run, 
> struct kvm_vcpu *vcpu)
>struct kvmppc_vcore *vc;
>struct kvm_vcpu *v, *vn;
> 
> +