On 05/05/16 18:25, Sergey Fedorov wrote:
> On 05/05/16 18:03, Alex Bennée wrote:
>> Sergey Fedorov <serge.f...@gmail.com> writes:
>>
>>> On 05/04/16 18:32, Alex Bennée wrote:
>>>> diff --git a/exec.c b/exec.c
>>>> index f46e596..17f390e 100644
>>>> --- a/exec.c
>>>> +++ b/exec.c
>>>> @@ -826,6 +826,7 @@ int cpu_breakpoint_insert(CPUState *cpu, vaddr pc, int flags,
>>>>  {
>>>>      CPUBreakpoint *bp;
>>>>
>>>> +    /* TODO: locking (RCU?) */
>>>>      bp = g_malloc(sizeof(*bp));
>>>>
>>>>      bp->pc = pc;
>>> This comment is a little inconsistent. We should make access to the
>>> breakpoint and watchpoint lists thread-safe in all the functions that
>>> use them. So if we note this, it should be noted in all such places.
>>> Also, it's probably not a good idea to put such a comment just above
>>> the g_malloc() invocation; it could be a bit confusing. A bit more
>>> detail would also be nice.
>> Good point.
>>
>> I could really do with some tests to exercise the debugging interface. I
>> did some when I wrote the arm kvm GDB stuff (see
>> 261f4d6d3e5445f887e070f047968e756c30cf06), but they are a) not plumbed in
>> and b) not really a stress test, which is what you want to be sure the
>> handling is thread-safe.
> It seems the only possible race is between a TCG helper inserting/removing
> a break-/watchpoint and the gdbstub. So maybe we could just use separate
> lists for CPU and GDB breakpoints?

However, we would still need to synchronize reads in the vCPU threads with
insertions/removals in the gdbstub... Not much point in splitting the lists,
then; we could just use a spinlock (or mutex) to serialize insertions/removals
and RCU to make the reads safe.

Kind regards,
Sergey
