Aaron Lindsay <aa...@os.amperecomputing.com> writes:
> On Dec 08 12:17, Alex Bennée wrote:
>> Aaron Lindsay <aa...@os.amperecomputing.com> writes:
>>
>> > I'm trying to migrate to using the new plugin interface. I see the
>> > following in include/qemu/qemu-plugin.h:
>> >
>> >> enum qemu_plugin_cb_flags {
>> >>     QEMU_PLUGIN_CB_NO_REGS, /* callback does not access the CPU's regs */
>> >>     QEMU_PLUGIN_CB_R_REGS,  /* callback reads the CPU's regs */
>> >>     QEMU_PLUGIN_CB_RW_REGS, /* callback reads and writes the CPU's regs */
>> >> };
>> >
>> > But I don't see a way to access registers in callbacks. Am I missing
>> > something?
>>
>> No - while those symbols do inform TCG not to try to optimise the
>> register file, we don't yet have an API for plugins to read (or
>> write) the CPU registers.
>>
>> There has been discussion about this before; I'll quote what I said
>> off-list to someone else who asked:
>>
>> > Has there been any clarification or softening of the position that
>> > exposing register and memory contents to the QEMU plugin would provide
>> > a way to circumvent the GPL of QEMU?
>>
>> I don't think implementing read-only access would be a problem, and it
>> should probably be the first step anyway.
>
> That seems reasonable to me. For the time being, at least, I am most
> interested in read-only access.
>
>> For registers I think there needs to be some re-factoring of QEMU's
>> internals to do it cleanly. Currently we have each front end providing
>> hooks to the gdbstub as well as building up their own regid and XML to
>> be consumed by it. We probably want an architecturally neutral central
>> repository that the front ends can register their registers (sic) and
>> helpers with. This would then provide hooks for the gdbstub to cleanly
>> generate XML, as well as an interface point for the plugin
>> infrastructure (and probably whatever the HMP uses as well).
>
> In a previous incarnation, I was proxying calls to the plugin API
> directly through to gdb_read_register() in gdbstub.c and therefore using
> gdb as the point of commonality. I'm not saying it's ideal but... it
> works? One downside is that you have to know 'out-of-band' which integer
> value corresponds to the register you want to query for your
> architecture, though it hasn't been a significant issue for me.

Certainly workable for a private branch, but I don't want to merge
anything like that upstream. As far as I can see there are a number of
consumers of register information:

 - plugins
 - gdbstub
 - monitor (info registers)
 - -d LOG_CPU logging

So rather than have each of them hook into every front end, I can see a
case for consolidation. For the plugin case, providing an introspection
helper to get a handle on a register makes sense, and would be less
painful than teaching plugins about gdb regids, which can and do move
around as new system registers appear:

  qemu_plugin_reg_t *handle = qemu_plugin_find_register("x2");

If we document the handle as usable across calls, this lookup can be
done once at start-up. Reading a value would then be:

  uint64_t val = qemu_plugin_read_register(cpu_index, handle);

(see the sketch after the quoted text below for how a plugin might put
the two together)

>> Memory is a little trickier because you can't know at any point
>> whether a given virtual address is actually mapped to real memory. The
>> safest way would be to extend the existing memory tracking code to
>> save the values stored to/loaded from a given address. However, if you
>> had register access you could probably achieve the same thing after
>> the fact by examining the opcode and pulling the values from the
>> registers.
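Coming back to the register API: to be concrete, I'd imagine a plugin
using it along these lines. This is purely a sketch - qemu_plugin_reg_t,
qemu_plugin_find_register() and qemu_plugin_read_register() are the
proposed additions from above and don't exist yet, so the final names
and signatures may well differ; everything else is the existing plugin
API:

  #include <stdio.h>
  #include <inttypes.h>
  #include <qemu-plugin.h>

  QEMU_PLUGIN_EXPORT int qemu_plugin_version = QEMU_PLUGIN_VERSION;

  /* resolved once at install time, assuming handles stay valid */
  static qemu_plugin_reg_t *x2_handle;
  static uint64_t last_x2; /* sketch only: not tracked per-vcpu */

  static void insn_exec(unsigned int cpu_index, void *udata)
  {
      /* the proposed read call; safe as we asked for R_REGS below */
      uint64_t x2 = qemu_plugin_read_register(cpu_index, x2_handle);
      if (x2 != last_x2) {
          fprintf(stderr, "cpu %u: x2 now 0x%" PRIx64 "\n", cpu_index, x2);
          last_x2 = x2;
      }
  }

  static void tb_trans(qemu_plugin_id_t id, struct qemu_plugin_tb *tb)
  {
      for (size_t i = 0; i < qemu_plugin_tb_n_insns(tb); i++) {
          struct qemu_plugin_insn *insn = qemu_plugin_tb_get_insn(tb, i);
          /* R_REGS tells TCG the callback will read the register file */
          qemu_plugin_register_vcpu_insn_exec_cb(insn, insn_exec,
                                                 QEMU_PLUGIN_CB_R_REGS,
                                                 NULL);
      }
  }

  QEMU_PLUGIN_EXPORT
  int qemu_plugin_install(qemu_plugin_id_t id, const qemu_info_t *info,
                          int argc, char **argv)
  {
      x2_handle = qemu_plugin_find_register("x2");
      qemu_plugin_register_vcpu_tb_trans_cb(id, tb_trans);
      return 0;
  }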
> What if memory reads were requested by `qemu_plugin_hwaddr` instead of
> by virtual address? `qemu_plugin_get_hwaddr()` is already exposed, and I
> would expect being able to successfully get a `qemu_plugin_hwaddr` in a
> callback would mean it is currently mapped. Am I overlooking
> something?

We can't re-run the transaction - there may have been a change to the
memory layout caused by that very instruction (see tlb_plugin_lookup and
its interaction with io_writex). However, I think we can expand the
options for memory instrumentation to cache the read or written value.

> I think I might actually prefer that a plugin memory access interface
> be in the physical address space - it seems like it might allow you to
> get more mileage out of one interface without having to support
> accesses by virtual and physical address separately.
>
> Or, even if that won't work for whatever reason, it seems reasonable
> for a plugin call accessing memory by virtual address to fail in the
> case where it's not mapped. As long as that failure case is
> well-documented and easy to distinguish from others within a plugin,
> why not?

Hmmm, I'm not sure - I don't want to expose internal implementation
details to the plugins, because we don't want plugins to rely on them.

> -Aaron
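For reference, resolving the hwaddr from inside a memory callback does
already work today - roughly like this (a minimal, untested sketch
against the current qemu-plugin.h; system emulation only, as
qemu_plugin_get_hwaddr() returns NULL under user-mode):

  #include <stdio.h>
  #include <inttypes.h>
  #include <qemu-plugin.h>

  QEMU_PLUGIN_EXPORT int qemu_plugin_version = QEMU_PLUGIN_VERSION;

  static void mem_cb(unsigned int cpu_index, qemu_plugin_meminfo_t info,
                     uint64_t vaddr, void *udata)
  {
      /*
       * Only valid while we are still in the context of the access -
       * as above, we can't re-run the lookup after the fact.
       */
      struct qemu_plugin_hwaddr *hw = qemu_plugin_get_hwaddr(info, vaddr);
      if (hw && !qemu_plugin_hwaddr_is_io(hw)) {
          fprintf(stderr, "%s, %u bytes, device offset 0x%" PRIx64 "\n",
                  qemu_plugin_mem_is_store(info) ? "store" : "load",
                  1u << qemu_plugin_mem_size_shift(info),
                  qemu_plugin_hwaddr_device_offset(hw));
      }
  }

  static void tb_trans(qemu_plugin_id_t id, struct qemu_plugin_tb *tb)
  {
      for (size_t i = 0; i < qemu_plugin_tb_n_insns(tb); i++) {
          qemu_plugin_register_vcpu_mem_cb(qemu_plugin_tb_get_insn(tb, i),
                                           mem_cb, QEMU_PLUGIN_CB_NO_REGS,
                                           QEMU_PLUGIN_MEM_RW, NULL);
      }
  }

  QEMU_PLUGIN_EXPORT
  int qemu_plugin_install(qemu_plugin_id_t id, const qemu_info_t *info,
                          int argc, char **argv)
  {
      qemu_plugin_register_vcpu_tb_trans_cb(id, tb_trans);
      return 0;
  }

What it can't give you yet is the value that was read or written - that
would be the "cache the read or written value" extension I mentioned
above.

-- 
Alex Bennée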