We ran into an issue recently where a parser we have for perf.data could not handle TRACING_DATA events. Its default behaviour is to skip event types it does not understand by advancing header.size bytes, since that seems to be how perf.data files are formatted.
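For context, here is a minimal sketch of that skip-by-header.size behaviour. The struct mirrors the layout of perf's struct perf_event_header (32-bit type, 16-bit misc, 16-bit size); the function name and the bail-on-nonsense check are my own, not perf code:

```c
#include <stdint.h>
#include <string.h>

/* Mirrors the layout of perf's struct perf_event_header:
 * a 32-bit type, 16-bit misc flags, and the 16-bit size
 * that the skipping logic relies on. */
struct perf_event_header {
    uint32_t type;
    uint16_t misc;
    uint16_t size; /* total event size, including this header */
};

/* Advance past an event we do not understand by trusting
 * header.size; returns the new offset, or -1 on nonsense input.
 * (Hypothetical helper, not from perf itself.) */
static long skip_unknown_event(const unsigned char *buf, long off, long len)
{
    struct perf_event_header hdr;

    if (off + (long)sizeof(hdr) > len)
        return -1;
    memcpy(&hdr, buf + off, sizeof(hdr));
    if (hdr.size < sizeof(hdr)) /* zero or truncated size: bail */
        return -1;
    return off + hdr.size;
}
```

A parser without the zero-size check is exactly the kind that loops forever, as described below.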
However, for TRACING_DATA events, header.size is set to sizeof(ev.tracing_data), even though all of the tracing data immediately follows the event. The simple fix would seem to be to set header.size to the size of the tracing data (plus sizeof(ev.tracing_data)). However, header.size is only 16 bits, and I imagine tracing data often exceeds UINT16_MAX, which is probably why the (u32) size was added outside the header. So I'm not sure we can fix this for real.

But I do have an idea to mitigate the damage: set header.size to zero. For a parser that doesn't understand TRACING_DATA events, this will cause one of a few things to happen:

- Ideally, it will realize that a zero size is nonsense and bail.
- It will try to advance by zero bytes and enter an infinite loop (this happened to our parser after it skipped around some bogus data and happened upon data that had zero where the size would be). I'd argue this is still better than reading garbage.
- I can also imagine an implementation that read()s only sizeof(header), then extends the read() by (header.size - sizeof(header)) additional bytes; with a zero size, that subtraction wraps around in unsigned arithmetic, which could also cause bugs, but hopefully just a read to EOF.

Any better ideas?

A more difficult solution would be to split TRACING_DATA into UINT16_MAX-sized chunks, but would this needlessly break older readers of data produced by newer perf, even if they already know how to read TRACING_DATA correctly?

Thanks,
d#
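To make the two arithmetic hazards above concrete, here is a small sketch. The 100000-byte payload is a made-up example, and HEADER_SIZE is just the 8-byte size of perf_event_header; the helper names are hypothetical:

```c
#include <stdint.h>
#include <stddef.h>

/* sizeof(struct perf_event_header) in perf.data is 8 bytes. */
#define HEADER_SIZE 8u

/* Why header.size can't simply hold the tracing-data length:
 * storing a (hypothetical) 100000-byte total in a 16-bit field
 * silently truncates it. */
static uint16_t store_in_header_size(uint32_t total)
{
    return (uint16_t)total; /* truncates anything above UINT16_MAX */
}

/* What a naive reader computes as the remaining payload when
 * header.size has been set to zero: the unsigned subtraction
 * wraps to a huge value, so the follow-up read() runs to EOF. */
static size_t remaining_payload(uint16_t header_size)
{
    return (size_t)header_size - HEADER_SIZE;
}
```

So the zero-size mitigation trades "read garbage" for either an early bail, a detectable hang, or an oversized read that hits EOF.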