On Thu, Aug 2, 2018 at 8:24 AM, Michael Schwingen <mich...@schwingen.org> wrote:
> On Wed, 01 Aug 2018 22:25:18 +0200
> Matthias Welwarsky <matth...@welwarsky.de> wrote:
>
>> Hi,
>>
>> I'm very much wondering if this feature is worth maintaining. Is it
>> so useful that it's worth the considerable additional complexity for
>> keeping the caches coherent? Flushing the caches is a very invasive
>> action by a debugger and will considerably change the context under
>> debug. If there isn't a compelling argument to keep it I'm much more
>> in favor of removing it completely instead of trying fixing it.
>
> I do think it is useful and should be kept / fixed. There are
> situations where the available hardware breakpoints are not sufficient
> and a fallback to software breakpoints is required - even if it is an
> invasive operation, it is better than not being able to debug in that
> situation.

Hi Matthias,
I was asking myself the same question.
Let me split the topic into:
- CPU debugging (I mean focusing on the code running on one CPU core),
- system debugging.

From a CPU debugging point of view, we probably do not need it; I don't see
a use case that requires changing the memory through AHB while debugging
armv7a.
What Michael reports is correct, but AHB access is not required in that
case either; the current code in OpenOCD uses the CPU to write the SW
breakpoints, keeps the caches in sync and bypasses the MMU access protection
through the dacrfixup command (maybe the command name is questionable; it is
hard to guess if you are not aware of it).
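
For reference, a minimal sketch of that CPU-side path in the OpenOCD
configuration (it acts on the currently selected cortex_a target and can
also be issued later from GDB through "monitor"):

  # allow SW breakpoints to be written through the CPU even when the MMU
  # marks the page read-only, by temporarily patching DACR during the write
  cortex_a dacrfixup on

With this enabled, a plain "break" in GDB can fall back to a software
breakpoint written through the core, with OpenOCD taking care of the cache
maintenance.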

Nevertheless, the direct AHB access is extremely useful for system
debugging, at least for:
- inspection of memory/peripherals without halting the cores
- dump of logs and/or traces from memory while the system is running
- in a multi-processor environment without HW cache coherency, to debug
  communication and synchronization issues: take a snapshot of the "real"
  content of the shared memory and compare it with the view seen by each
  of the processors
- to debug power management issues, when a core is powered off or its
  clock is frozen
- at platform start/boot, to load the memory faster, knowing that the MMU
  and caches are not enabled yet
Please note that in all these cases cache coherence between the AHB access
and the cores is not required! Forcing it would even change the system
state, preventing some of the activities above.

Probably all the direct accesses through the bus should be moved away from
the specific CPU target; the bus is a "system" resource, not core
specific.
In this context your pending gerrit change [1] seems to be going in the
right direction and could be a step on the path towards removing AHB
access from target/cortex_a.c.
On this change [1] I was questioning whether a virtual/fake target is the
right approach for bus access (but let's go ahead).
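
To make this concrete, here is a hedged sketch of how the mem-ap target
from [1] could serve the system-debugging cases above, assuming the syntax
of the current patch set (the target name and AP number are only examples,
they are SoC specific):

  # create a bare memory target on top of the AHB access port
  target create $_CHIPNAME.ahb mem_ap -dap $_CHIPNAME.dap -ap-num 0

  # later, while the cores keep running:
  targets $_CHIPNAME.ahb
  mdw 0x80000000 8                        ;# peek at shared memory
  dump_image trace.bin 0x80001000 0x4000  ;# dump a log/trace buffer

Since the mem-ap target has no notion of halt, none of this disturbs the
cores.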

What I find difficult, at the moment, is getting GDB to participate in the
use cases above.
GDB does not have a way to specify whether a memory access should go
through the CPU or through the AHB.

Another of your pending gerrit changes [2] aims at presenting OpenOCD
targets as GDB threads. Today it seems to target SMP cores only.
Do you think it could evolve to support generic multi-processing (non-SMP)
in GDB and, eventually, include the memory target from [1]?
Such an approach would clearly open the door to asymmetric multi-processing
with GDB. It would also provide easy access to the AHB bus: select the
thread corresponding to the memory target, then inspect or load/dump the
memory.
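
If that works out, a session could look something like the sketch below
(purely hypothetical: it assumes [2] exposes every target, including the
mem-ap one from [1], as a thread, and that thread 3 happens to map to the
memory target):

  (gdb) info threads
  (gdb) thread 3
  (gdb) x/8wx 0x80000000
  (gdb) dump binary memory log.bin 0x80001000 0x80002000

All of these are standard GDB commands; the only new ingredient would be
the thread that routes plain memory accesses through the AHB instead of a
core.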
I'm looking for weaknesses in this approach; comments are welcome!
Of course, to use the AHB memory target, all the side operations such as
virt2phys translation or cache flush/invalidate should be controlled by
the user through OpenOCD commands ("monitor ..." in GDB), only if they
really need them, and under their own responsibility.
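
For instance, the existing virt2phys target command could already cover
the address translation part (the target name "cpu0" is just a
placeholder):

  (gdb) monitor targets cpu0
  (gdb) monitor virt2phys 0xc0100000

Cache flush/invalidate would have to be requested the same way, with
whatever maintenance commands the specific target provides.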

Antonio

[1] http://openocd.zylin.com/4002 ("target/mem_ap: generic mem-ap target")
[2] http://openocd.zylin.com/3999 ("rtos/hwthread: add hardware-thread
pseudo rtos")
