Good evening,

I've written a small bash script that performs a single bit-flip on a target through KGDB. It tells the target to stop, makes the bit-flip, and resumes the target's execution. I need to estimate, as accurately as possible, how intrusive this operation is on the target. In other words, I need to estimate the total time during which the target's normal execution is suspended, whether that time is spent executing KGDB code or simply sitting "frozen". So the figure I am after would be the sum of those two components (time in KGDB code + time frozen), in ms. Am I thinking about this the right way?
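For reference, the sequence my script drives boils down to something like the sketch below. It is not the real script: it assumes the target has already been dropped into KGDB with "echo g > /proc/sysrq-trigger" on its console, and the serial device, vmlinux path, address and bit are placeholders.

#!/bin/bash
# Sketch only: assumes the target is already sitting in KGDB after the
# sysrq trigger; everything below is a placeholder for my real setup.

TTY=/dev/ttyUSB0          # serial line the debugger talks over (placeholder)
VMLINUX=./vmlinux         # kernel image with symbols (placeholder)
ADDR=0xc0123456           # address whose bit gets flipped (placeholder)
BIT=3                     # bit position to flip (placeholder)

# (a "set serial baud <rate>" -ex may be needed before "target remote",
#  depending on the setup)
gdb -batch "$VMLINUX" \
    -ex "target remote $TTY" \
    -ex "set variable *(unsigned char *)$ADDR = *(unsigned char *)$ADDR ^ (1 << $BIT)" \
    -ex "detach"          # detach resumes the target and lets the batch gdb
                          # exit; interactively I just type "cont"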
That said, I have three possible approaches in my head:

1 - The first one has to do with the time counter. When I send the "echo g > /proc/sysrq-trigger" command through minicom, the target freezes completely and enters KGDB, as is well known. After making the bit-flip on the target and resuming execution with the "cont" command in GDB, the system clock is correctly updated, so there must be a time counter somewhere that starts when the system freezes and enters KGDB. I've been looking at the KGDB source code, but I still haven't found where this counter actually is. Does anyone know more about this?

2 - Another possible solution would be to use debugging messages with timestamps attached. This is still only a hypothesis, but I think that after using the "set debug timestamp" option to turn on timestamps on GDB debugging output, in conjunction with some additional commands that enable optional debugging messages ("set debug remote 1" for instance, since it shows the packets going back and forth across the serial line to the target), it might be possible to know exactly when the system stopped (and resumed). If I can identify which debugging messages mark the precise moment when the target stops (and resumes) execution, then I only have to take the seconds and microseconds attached to those messages and do a simple subtraction. If this is feasible, it would be my preferred approach, since it would be the quickest and would not require changing any code. (There is a rough sketch of how I picture this after my signature.)

3 - Finally, it is always possible to change code and read the target's system time right before its execution is stopped, and right after it is resumed. After looking at the source code, I understand that the function kgdb_breakpoint() is always called by the KGDB core and acts as some kind of "reference" each time the debugger is launched. However, I don't know whether this is the first code run when entering KGDB, i.e. whether reading the system time inside this function would be accurate enough for my needs. Do you know at which points in the code the target can be considered to have entered its "frozen state" (and, on the other hand, where in the code we can consider that execution is resumed)? That way my measurements would be as accurate as possible.

4 - If you can think of other solutions, or if any of the three approaches above does not make sense to you, I would like to know your opinion.

Best regards and thanks for reading,
João
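P.S. Regarding approach 2, this is roughly how I picture wiring it up on the host side. Again it is only a sketch: the device, paths, address and bit are placeholders, it assumes the target is already in KGDB when gdb connects, and the exact debug-line format differs between GDB versions, so the grep pattern and awk field at the end will probably need adjusting.

#!/bin/bash
# Sketch of the host-side measurement for approach 2 (placeholders throughout).

TTY=/dev/ttyUSB0          # serial line to the target (placeholder)
VMLINUX=./vmlinux         # kernel image with symbols (placeholder)
ADDR=0xc0123456           # address whose bit gets flipped (placeholder)
BIT=3                     # bit position (placeholder)
LOG=kgdb-session.log

# Capture everything gdb prints (including the timestamped remote-packet
# debug lines) into $LOG.
gdb -batch "$VMLINUX" \
    -ex "set debug timestamp on" \
    -ex "set debug remote 1" \
    -ex "target remote $TTY" \
    -ex "set variable *(unsigned char *)$ADDR = *(unsigned char *)$ADDR ^ (1 << $BIT)" \
    -ex "detach" > "$LOG" 2>&1

# Take the timestamps of the first and last remote-packet lines and subtract.
# "Sending packet" is what "set debug remote 1" prints; with
# "set debug timestamp" each debug line is prefixed with sec.usec, but the
# exact format (and therefore the field number) may vary with the GDB version.
FIRST=$(grep -m1 'Sending packet' "$LOG" | awk '{print $1}')
LAST=$(grep 'Sending packet' "$LOG" | tail -n 1 | awk '{print $1}')
echo "host-visible stop window: $(echo "$LAST - $FIRST" | bc) s"

Of course, this only brackets what is visible on the host side of the serial link, which is exactly why I am asking whether the right messages can be identified (point 2 above).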