I ran my program with trace-cmd as you suggested. In this run I did not
apply stress to the system.
Output from "trace-cmd extract" is:
CPU 0: 7790693 events lost
CPU 2: 4433 events lost
CPU 3: 4900601 events lost
CPU0 data recorded at offset=0x295000
1449984 bytes in size
CPU1 data recorded at offset=0x3f7000
1331200 bytes in size
CPU2 data recorded at offset=0x53c000
1449984 bytes in size
CPU3 data recorded at offset=0x69e000
1449984 bytes in size
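Since millions of events were lost on CPUs 0 and 3, maybe the per-CPU
ring buffer was too small. If useful, I can repeat the capture with a
larger buffer, e.g. (assuming -b sets the buffer size in KB here, as it
does for trace-cmd record):

```shell
# Hypothetical rerun with a ~20 MB per-CPU buffer to reduce lost events
trace-cmd start -b 20000 -e all
# ... run the periodic task ...
trace-cmd stop
trace-cmd extract
```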
The output from "trace-cmd report -l" is very long, so I uploaded it here:
<https://drive.google.com/file/d/1AgRPuBXIB1dFFSkQGqV9-yiTo07r6kN3/view?usp=sharing>
I'm not sure how to interpret the logs.
Thank you,
Gabriel Dinse
On Mon, Mar 1, 2021 at 10:13 am, song <[email protected]> wrote:
trace-cmd start -e all; your periodic task
trace-cmd stop
trace-cmd extract
trace-cmd report -l
Capture the ftrace log to see what is going on in the system.
Song
On 2021/3/1 at 6:26 AM, Gabriel Dinse via Xenomai wrote:
Hello,
I'm comparing interrupt handling on Xenomai using a Raspberry Pi 3B
in two situations:
1- No stress
2- Running stress-ng in a terminal: $ nice -19 stress-ng -c 4
--metrics --timeout 120s &
I create a periodic task (priority 50) that runs every 500us, drives
gpio22 high for 250us, and saves the activation time. Another task,
with priority 99, catches the interrupts in a while loop and saves the
reception time as well, so the time differences can be computed later.
This is done 100000 times, and at the end I run some verifications on
the collected data.
On situation 1 (no stress) I get:
Min: 11823ns
Max: 17761ns
Standard deviation: 258.96ns
Mean: 13103.17ns
On situation 2 (with cpu stress) I get:
Min: 12239ns
Max: 721302ns
Standard deviation: 32976.01ns
Mean: 29112.24ns
The higher mean may or may not be a problem, but the standard
deviation is much larger, and I don't think this is the expected
behavior for Xenomai.
Is the first result fine?
What could be causing this?
If you need any additional information, just ask.
Thank you,
Gabriel Dinse