Currently, in record mode the tool writes trace data serially: it loops over the mapped per-cpu data buffers and stores ready data chunks into a trace file using the write() syscall.
Under some circumstances the kernel may run out of free space in a buffer: one half of the buffer has not yet been written to disk because the tool is busy writing another buffer's data at that moment. A serial trace writing implementation can therefore cause the kernel to lose profiling data, and that is exactly what is observed when profiling highly parallel CPU-bound workloads on machines with a large number of cores. An experiment profiling matrix multiplication code running 128 threads on an Intel Xeon Phi (KNM) with 272 cores, like the one below, demonstrates a data loss metric value of 98%:

/usr/bin/time perf record -o /tmp/perf-ser.data -a -N -B -T -R -g \
	--call-graph dwarf,1024 --user-regs=IP,SP,BP \
	--switch-events -e cycles,instructions,ref-cycles,software/period=1,name=cs,config=0x3/Duk -- \
	matrix.gcc

The data loss metric is the ratio lost_time/elapsed_time, where lost_time is the sum of time intervals containing PERF_RECORD_LOST records and elapsed_time is the elapsed application run time under profiling.

Applying asynchronous trace streaming through the POSIX AIO API (http://man7.org/linux/man-pages/man7/aio.7.html) lowers the data loss metric value: in the experiment above it brings the 98% loss down to almost 0%.
---
Alexey Budankov (3):
  perf util: map data buffer for preserving collected data
  perf record: enable asynchronous trace writing
  perf record: extend trace writing to multi AIO

 tools/perf/builtin-record.c | 166 ++++++++++++++++++++++++++++++++++++++++++--
 tools/perf/perf.h           |   1 +
 tools/perf/util/evlist.c    |   7 +-
 tools/perf/util/evlist.h    |   3 +-
 tools/perf/util/mmap.c      | 114 ++++++++++++++++++++++++++----
 tools/perf/util/mmap.h      |  11 ++-
 6 files changed, 277 insertions(+), 25 deletions(-)

---
Changes in v8:
- ran the whole thing through checkpatch.pl and corrected the issues found, except lines longer than 80 characters
- corrected comment alignment and formatting
- moved the multi AIO implementation into the 3rd patch in the series
- implemented explicit cblocks array allocation
- split the AIO completion check into a separate record__aio_complete()
- set the nr_cblocks default to 1 and the maximum allowed value to 4

Changes in v7:
- implemented handling of the record.aio setting from the perfconfig file

Changes in v6:
- adjusted setting of priorities for cblocks
- handled the errno == EAGAIN case from the aio_write() return

Changes in v5:
- resolved a livelock on perf record -e intel_pt// -- dd if=/dev/zero of=/dev/null count=100000
- data loss metrics decreased from 25% to 2x in the trialed configuration
- reshaped the layout of data structures
- implemented the --aio option
- avoided nanosleep() prior to calling aio_suspend()
- switched to per-cpu aio multi buffer record__aio_sync()
- record_mmap_read_sync() now does a global sync just before switching the trace file or stopping collection

Changes in v4:
- converted mmap()/munmap() to malloc()/free() for mmap->data buffer management
- converted void *bf to struct perf_mmap *md in signatures
- wrote a comment in perf_mmap__push() just before perf_mmap__get()
- wrote a comment in record__mmap_read_sync() on the possible restarting of the aio_write() operation and releasing the perf_mmap object after all
- added perf_mmap__put() for the cases of failed aio_write()

Changes in v3:
- wrote comments about the nanosleep(0.5ms) call prior to aio_suspend() to cope with the intrusiveness of its implementation in glibc
- wrote comments about the rationale behind copying profiling data into the mmap->data buffer

Changes in v2:
- converted zalloc() to calloc() for allocation of the mmap_aio array
- fixed a typo and adjusted the fallback branch code