On Thu, Aug 2, 2018 at 10:18 PM, Matthias Welwarsky <matth...@welwarsky.de> wrote:
> On Donnerstag, 2. August 2018 15:31:56 CEST Antonio Borneo wrote:
>> On Thu, Aug 2, 2018 at 8:24 AM, Michael Schwingen <mich...@schwingen.org> wrote:
>> > On Wed, 01 Aug 2018 22:25:18 +0200
>> >
>> > Matthias Welwarsky <matth...@welwarsky.de> wrote:
>> >> Hi,
>> >>
>> >> I'm very much wondering if this feature is worth maintaining. Is it
>> >> so useful that it's worth the considerable additional complexity of
>> >> keeping the caches coherent? Flushing the caches is a very invasive
>> >> action by a debugger and will considerably change the context under
>> >> debug. If there isn't a compelling argument to keep it, I'm much more
>> >> in favor of removing it completely instead of trying to fix it.
>
>> Hi Matthias,
>> I was questioning myself about this point too.
>> Let me split the topic into:
>> - CPU debugging (I mean focusing on the code running on one CPU core),
>> - system debugging.
>>
>> From the CPU debugging point of view, we probably do not need it; I
>> don't see a use case that requires changing the memory through AHB
>> while debugging armv7a.
>
> Performance for image uploading could be one possible use case, but I
> have not been able to demonstrate a significant gain using AHB access
> vs CPU access.

I remember a case in which bus access was faster than CPU access
immediately after reset-halt. But I did not investigate further whether
it was a matter of clock setup after reset. I would not count it as the
key feature.

> I vaguely remember one particular, IMHO obscure platform where you
> actually have to bang some registers through AHB in order to be able
> to reset it (or actually debug it?).

Yes, indeed, direct bus access could be mandatory for some initial setup.

>> What Michael reports is correct, but AHB access is not required in that
>> case either; the current code in OpenOCD uses the CPU to write the SW
>> breakpoints, keeps the cache in sync and bypasses the MMU access
>> protection through the dacrfixup command (maybe the command name is
>> questionable, hard to guess if you are not aware of it).
>
> Well, the command is documented in openocd.pdf... But you're right, this
> _might_ be a valid use case, for bypassing access protection when
> setting software breakpoints.
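For reference, enabling that workaround is a one-liner, either in the
target config file or at runtime through "monitor". A minimal sketch,
assuming $_TARGETNAME is the cortex_a target created by the config:

  # let the debugger temporarily patch the DACR so that SW breakpoints
  # can be written through the CPU even into pages the MMU marks
  # read-only; no AHB access involved
  $_TARGETNAME cortex_a dacrfixup on

The same can be toggled from a gdb session with
"monitor cortex_a dacrfixup on".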
>> Nevertheless, the direct AHB access is extremely useful for system
>> debugging, at least for:
>> - inspection of memory/peripherals without halting the cores,
>> - dumping logs and/or traces from memory while the system is running,
>
> "mem_ap" pseudo target.
>
>> - in a multi-processor environment without HW cache coherency, to debug
>> communication and synchronization issues; take a view of the "real"
>> content of the shared memory and compare it with the same view as seen
>> from each of the processors,
>
> Here however, fiddling with the caches is counter-productive. Agree!
>
>> - to debug power management issues, when a core is powered off or its
>> clock is frozen,
>
> "mem_ap"
>
>> - at platform start/boot, to load the memory faster, knowing that MMU
>> and cache are not started yet.
>
> Maybe, maybe not; I've tested this on one or two platforms and wasn't
> able to demonstrate a noticeable performance gain when using AHB
> instead of the CPU.
>
> But anyway: "mem_ap"
>
>> Please notice that in all these cases the cache coherence between AHB
>> access and the cores is not required! It would even change the system
>> status, preventing some of the activities above.
>
> Yep.
>
>> Probably all the direct accesses through the bus should be moved away
>> from the specific CPU target; the bus is a "system" resource, not core
>> specific.
>> In this context your pending gerrit change [1] seems to go in the right
>> direction and could be a step in the path to remove AHB from
>> target/cortex_a.c.
>> On this change [1] I was arguing whether the approach of a virtual/fake
>> target was the right choice for a bus access (but let's go ahead).
>
> There is currently nothing in the openocd framework that would allow
> memory access through something that is not a "target". You would
> anyway want to attach gdb to have a high-level view of e.g. system-wide
> data structures, which is only possible with a "target".

I was working on adding memory commands to the dap. I stopped when you
published [1]. Agree that a "target" is what we need for gdb access.
Today the target created by [1] does not accept gdb connections. It
crashes because it misses the field get_gdb_reg_list in struct
target_type mem_ap_target. I thought you were not planning a scenario
with gdb but only telnet to OpenOCD.
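For the telnet-only flow, what I had in mind with [1] is roughly the
following; the syntax is from my reading of the patch, so take it as a
sketch, and the AP number is SoC specific (0 is just a placeholder):

  # create a bare mem_ap target next to the CPU targets; it provides
  # raw bus access through the given AP, no cache or MMU involved
  target create $_CHIPNAME.mem mem_ap -dap $_CHIPNAME.dap -ap-num 0

Then, from a telnet connection, plain memory commands like

  $_CHIPNAME.mem mdw 0x10000000 4

go straight to the bus, even while the cores keep running.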
>> What I see difficult, at the moment, is to get GDB participating in
>> the use cases above.
>> GDB does not have a way to specify if the memory access should go
>> through the CPU or through the AHB.
>
> Also, it doesn't particularly play well with any form of "background"
> debugging. Maybe, if openocd supported "non-stop" mode, but I'm not
> sure.

In the documentation of [2] you write:
# The @command{step} and @command{stepi} commands can be used to step a
# specific core while other cores are free-running or remain halted,
# depending on the scheduler-locking mode configured in GDB.
I did not verify it, but from the sentence above I thought it could be
possible to keep the real targets running and halt only the mem_ap for
background memory operations.
But the background job is for sure possible using a telnet or tcl
side-connection to OpenOCD.
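To make it concrete, the side-connection job could be as simple as the
session below; the addresses are invented for the example, and I assume
the mem_ap target from [1] is named $_CHIPNAME.mem:

  $ telnet localhost 4444
  > targets $_CHIPNAME.mem
  > dump_image trace.bin 0x80100000 0x10000

while gdb stays attached to the cores. For the per-core stepping quoted
above from [2], the gdb side would be the usual:

  (gdb) set scheduler-locking step
  (gdb) thread 2
  (gdb) stepi

i.e. only the selected core advances and the other cores keep their
current state.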
>> Another of your pending gerrit changes [2] aims at presenting OpenOCD
>> targets as GDB threads. Today it seems to target SMP cores only.
>> Do you think this can evolve to support generic multi-processing (non
>> SMP) in GDB and, eventually, to include the memory target in [1]?
>> Such an approach would clearly open the door for asymmetric
>> multi-processing with GDB. It would also provide an easy access to the
>> AHB bus; select the thread corresponding to the memory target, then
>> inspect or load/dump the memory.
>
> If gdb properly supported multiple remote inferiors, it should already
> be possible. But I have no access to such a system to actually try
> this. What do you have in mind anyway when you say "generic"
> multi-processing?

Current gdb does not support multiple remote gdbservers, one for each
inferior. But it supports multi-thread and multi-process inferiors
through the same gdbserver.
For multiple processes it needs the gdbserver (e.g. OpenOCD) to support
the multiprocess extensions:
https://sourceware.org/gdb/current/onlinedocs/gdb/General-Query-Packets.html#multiprocess-extensions
It could be possible to extend OpenOCD to provide, through a single gdb
connection:
- one multi-threaded process for one SMP cluster of cortex-A (by
  hwthread),
- one process for mem_ap,
- one process for another core (e.g. cortex-M) of the same SoC.
I'm still not sure about the pros & cons of having cortex-A and cortex-M
in the same gdb, and if it is better than having two independent gdb
instances connected to the same OpenOCD.

> The "hwthread" rtos anyway only makes sense for debugging OS kernels or
> single-system images.

Agree, and this is why I now think it is better having both processes
and threads. The memory map and the SW running on a cortex-M are not the
same as what is running on the cortex-A. Also the symbols are
independent.

>> I'm looking for weaknesses in this approach; comments are welcome!
>> Of course, to use the AHB memory target, all the side operations of
>> virt2phys translation or cache flush and invalidate should be
>> controlled by the user through OpenOCD commands ("monitor ..." in GDB)
>> only if he/she really needs it, and under his/her responsibility.
>
> Last time I looked gdb didn't care a lot about virtual vs physical
> addresses. It knows and understands only the virtual address space its
> inferior lives in.

Yes, agree.

Antonio

>
>> Antonio
>>
>> [1] http://openocd.zylin.com/4002 ("target/mem_ap: generic mem-ap target")
>> [2] http://openocd.zylin.com/3999 ("rtos/hwthread: add hardware-thread
>> pseudo rtos")