> On Jun 26, 2020, at 8:40 AM, Yonghong Song <y...@fb.com> wrote:
> 
> 
> 
> On 6/25/20 5:13 PM, Song Liu wrote:
>> Introduce helper bpf_get_task_stack(), which dumps the stack trace of a
>> given task. This is different from bpf_get_stack(), which gets the stack
>> trace of the current task. One potential use case of bpf_get_task_stack()
>> is to call it from bpf_iter__task and dump all /proc/<pid>/stack to a
>> seq_file.
>> bpf_get_task_stack() uses stack_trace_save_tsk() instead of
>> get_perf_callchain() for the kernel stack. The benefit of this choice is
>> that stack_trace_save_tsk() doesn't require changes in arch/. The downside
>> is that stack_trace_save_tsk() dumps the stack trace to an unsigned long
>> array. For 32-bit systems, we need to translate it to a u64 array.
>> Signed-off-by: Song Liu <songliubrav...@fb.com>
>> 
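To make the 32-bit translation mentioned above concrete: it is essentially a
copy loop from the unsigned long array filled by stack_trace_save_tsk() into
the u64 buffer supplied by the caller. A minimal sketch, not the actual patch
(MAX_DEPTH and the buffer names are placeholders):

	unsigned long ips[MAX_DEPTH];	/* MAX_DEPTH is a placeholder limit */
	u64 *out = buf;			/* caller-supplied u64 buffer */
	u32 nr, i;

	/* fill ips[] with the kernel stack of @task, skipping 0 frames */
	nr = stack_trace_save_tsk(task, ips, MAX_DEPTH, 0);
	for (i = 0; i < nr; i++)
		out[i] = ips[i];	/* zero-extend each entry to 64 bits */

On 64-bit systems the copy is not needed since unsigned long and u64 have the
same layout.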
[...]
>> +++ b/include/uapi/linux/bpf.h
>> @@ -3252,6 +3252,38 @@ union bpf_attr {
>>   *          case of **BPF_CSUM_LEVEL_QUERY**, the current skb->csum_level
>>   *          is returned or the error code -EACCES in case the skb is not
>>   *          subject to CHECKSUM_UNNECESSARY.
>> + *
>> + * int bpf_get_task_stack(struct task_struct *task, void *buf, u32 size, u64 flags)
> 
> Andrii's recent patch changed the return type to 'long' to align with the
> kernel-side u64 return type, for better llvm code generation.
> 
> Please rebase and you will see the new convention.

Will fix. 
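For reference, after the rebase the description line will follow the new
convention, i.e. something like:

	* long bpf_get_task_stack(struct task_struct *task, void *buf, u32 size, u64 flags)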

> 
>> + *  Description
>> 

[...]

>> +static struct perf_callchain_entry *
>> +get_callchain_entry_for_task(struct task_struct *task, u32 init_nr)
>> +{
>> +    struct perf_callchain_entry *entry;
>> +    int rctx;
>> +
>> +    entry = get_callchain_entry(&rctx);
>> +
>> +    if (rctx == -1)
>> +            return NULL;
> 
> Is this needed? Shouldn't the !entry check below be enough?

It was needed before Peter's suggestion. After applying Peter's patch,
it is no longer needed.
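
With Peter's change, get_callchain_entry() returns NULL on failure, so the
function only needs the entry check. Roughly (a sketch of the updated code,
not the exact diff):

	entry = get_callchain_entry(&rctx);
	if (!entry)
		return NULL;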

Thanks,
Song

