On 13.03.2019 17:36, Jiri Olsa wrote:
> On Tue, Mar 12, 2019 at 08:30:13AM +0300, Alexey Budankov wrote:
> 
> SNIP
> 
>> -
>> -    md->prev = head;
>> -    perf_mmap__consume(md);
>> -
>> -    rc = push(to, &md->aio.cblocks[idx], md->aio.data[idx], size0 + size, *off);
>> -    if (!rc) {
>> -            *off += size0 + size;
>> -    } else {
>> -            /*
>> -             * Decrement md->refcount back if aio write
>> -             * operation failed to start.
>> -             */
>> -            perf_mmap__put(md);
>> -    }
>> -
>> -    return rc;
>> -}
>>  #else /* !HAVE_AIO_SUPPORT */
>>  static int perf_mmap__aio_enabled(struct perf_mmap *map __maybe_unused)
>>  {
>> @@ -566,7 +492,7 @@ int perf_mmap__push(struct perf_mmap *md, void *to,
>>  
>>      rc = perf_mmap__read_init(md);
>>      if (rc < 0)
>> -            return (rc == -EAGAIN) ? 0 : -1;
>> +            return (rc == -EAGAIN) ? 1 : -1;
> 
> hum, should this be part of this one?
> 
>   perf record: implement -f,--mmap-flush=<threshold> option

It affects both the serial and AIO code paths, but is currently only
used in the AIO flow. That is why it is placed here.
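
For illustration, a minimal, self-contained sketch of the 0/1/-1
convention the hunk above introduces: 0 means data was pushed, 1 means
there was nothing to read (-EAGAIN internally), -1 means a hard error.
The fake_* helpers below are hypothetical stand-ins for
perf_mmap__read_init()/perf_mmap__push(), not the actual perf code;
they only mirror the (rc == -EAGAIN) ? 1 : -1 mapping shown above.

#include <errno.h>
#include <stdio.h>

/* Hypothetical stand-in for perf_mmap__read_init(). */
static int fake_read_init(int have_data)
{
	return have_data ? 0 : -EAGAIN;
}

/* Hypothetical stand-in for perf_mmap__push() with the new semantics. */
static int fake_push(int have_data)
{
	int rc = fake_read_init(have_data);

	if (rc < 0)
		return (rc == -EAGAIN) ? 1 : -1;	/* 1: nothing to push */

	/* ... copy the ring buffer data out here ... */
	return 0;					/* 0: data pushed */
}

int main(void)
{
	/* A caller (e.g. the AIO flow) can now tell "empty" from "pushed". */
	int rc = fake_push(0);

	if (rc < 0)
		fprintf(stderr, "push failed\n");
	else if (rc > 0)
		printf("no data available, nothing to flush\n");
	else
		printf("data pushed\n");

	return rc < 0 ? 1 : 0;
}

The point of the tri-state value is that a caller can tell an empty
ring buffer apart from an actual push, which also matters once the
serial flow starts looking at the return code.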

~Alexey

> 
> 
> jirka
> 
