Hello all,
I was looking at the CTF parser in the TMF project of Linux Tools. I
have come up with three points where scalability will be an issue. I am
sending 3 emails, each one describing one of the issues so we can
aggregate them more cohesively. First, a quick primer: CTF is a file
format in which traces are written as packets, packets make up the files
of streams, and the streams are files in a directory.

Issue 2: Packet size
CTF allows a packet within a stream to be of unlimited size. We use a
memory map to access the data, and as far as I know that limits us to
2 GB for an individual packet. If a packet is larger, the trace becomes
unreadable. I can imagine hardware tracers that have ~3 GB buffers and
dump them to a file as a single packet being affected by this.
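For reference, here is roughly where the limit comes from on the Java
side (a minimal sketch; the file name and packet size are made up).
FileChannel.map() returns a MappedByteBuffer, which is indexed with an
int, so any mapping request above Integer.MAX_VALUE is rejected:

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MapLimitDemo {
    public static void main(String[] args) throws IOException {
        // Hypothetical ~3 GB packet, like the hardware tracer case above.
        long packetSize = 3L * 1024 * 1024 * 1024;
        try (FileChannel ch = FileChannel.open(Paths.get("stream_0"),
                StandardOpenOption.READ)) {
            // map() takes a long size, but the resulting MappedByteBuffer
            // is indexed with an int, so anything above Integer.MAX_VALUE
            // (~2 GB) throws IllegalArgumentException before mapping.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY,
                    0, packetSize);
        }
    }
}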

Proposed solution:
I envision fixing this with a sliding window of the maximum memory map
size. I still see a problem if a single event is, say, 3 GB in size, but
I can't see a short-term solution for that; would you have any
suggestions on this front?
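
Here is a rough sketch of what I mean, assuming a fixed window size and
single-byte reads (the class, field names, and window size are made up
for illustration, not existing TMF code):

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

/**
 * Sketch of a sliding memory-map window over one stream file.
 * Reads are served from a fixed-size mapping that is re-created
 * whenever the requested offset falls outside the current window.
 */
public class SlidingMappedReader {
    private static final long WINDOW_SIZE = 256 * 1024 * 1024; // tunable

    private final FileChannel channel;
    private MappedByteBuffer window;
    private long windowStart = -1;

    public SlidingMappedReader(FileChannel channel) {
        this.channel = channel;
    }

    /** Reads one byte at an absolute file offset, remapping if needed. */
    public byte get(long offset) throws IOException {
        if (window == null || offset < windowStart
                || offset >= windowStart + window.capacity()) {
            slideTo(offset);
        }
        return window.get((int) (offset - windowStart));
    }

    private void slideTo(long offset) throws IOException {
        // Could also align the window to a page or packet boundary.
        long length = Math.min(WINDOW_SIZE, channel.size() - offset);
        window = channel.map(FileChannel.MapMode.READ_ONLY, offset, length);
        windowStart = offset;
    }
}

Multi-byte reads that straddle a window boundary would need extra
handling, and a single event larger than the window is still the open
problem mentioned above.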