On 27 February 2015 at 03:09, Frederic Konrad <fred.kon...@greensocs.com> wrote:
> On 29/01/2015 16:17, Peter Maydell wrote:
>>
>> On 16 January 2015 at 17:19, <fred.kon...@greensocs.com> wrote:
>>>
>>> From: KONRAD Frederic <fred.kon...@greensocs.com>
>>>
>>> This adds a lock to avoid multiple exclusive access at the same time
>>> in case of TCG multithread.
>> All the same comments I had on this patch earlier still apply:
>>
>>  * I think adding mutex handling code to all the target-*
>>    frontends rather than providing facilities in common
>>    code for them to use is the wrong approach
>>  * You will fail to unlock the mutex if the ldrex or strex
>>    takes a data abort
>>  * This is making no attempt to learn from or unify with
>>    the existing attempts at handling exclusives in linux-user.
>>    When we've done this work we should have a single
>>    mechanism for handling exclusives in a multithreaded
>>    host environment which is used by both softmmu and useronly
>>    configs

> We decided to implement the whole atomic instruction inside an helper

...which is a different approach, and one that still isn't really
addressing any of my remarks in the list above...

> but is that possible to get the data with eg: cpu_physical_memory_rw
> instead of the normal generated code?

cpu_physical_memory_rw would bypass the TLB and so be much slower.
Make sure you use the functions which go via the TLB if you do this
in a helper (and remember that they will longjmp out on a TLB miss!).
(A rough sketch of the kind of helper I mean is appended below.)

> One other thing which looks suspicious it seems there is one pair of
> exclusive_addr/exclusive_val per CPU is that normal?

Pretty sure we've already discussed how the current ldrex/strex
implementation is not architecturally correct. I think this is
another of those areas.

In general I'd be much happier seeing a proper sketch of your design:
what data structures etc you intend to share between CPUs and which
are per-CPU, what generic mechanisms you plan to provide to allow
targets to implement atomic instructions, and so on. It's quite hard
to see the whole picture at the moment. (A rough sketch of one
possible per-CPU/shared split is also appended below.)

-- PMM
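
Appended for illustration: a minimal sketch of the helper-based
approach under discussion, assuming the existing cpu_ldl_data()/
cpu_stl_data() accessors from exec/cpu_ldst.h (which go via the TLB)
and the per-CPU exclusive_addr/exclusive_val fields in CPUARMState.
The helper name, the DEF_HELPER declaration and the tcg_exclusive_lock
mutex are invented for this sketch, and it deliberately shows where
the unlock-on-data-abort problem still bites:

  /* Sketch only: "strex_w" and tcg_exclusive_lock are invented names;
   * a matching DEF_HELPER_3(strex_w, i32, env, i32, i32) declaration
   * would be needed in target-arm/helper.h.  exclusive_addr and
   * exclusive_val are the existing per-CPU fields in CPUARMState.
   */
  #include "cpu.h"
  #include "exec/helper-proto.h"
  #include "exec/cpu_ldst.h"
  #include "qemu/thread.h"

  static QemuMutex tcg_exclusive_lock;  /* qemu_mutex_init()ed at startup */

  uint32_t HELPER(strex_w)(CPUARMState *env, uint32_t addr, uint32_t val)
  {
      uint32_t current;

      /* cpu_ldl_data() goes via the TLB; on a TLB miss or data abort
       * it will longjmp out of this helper, so do it before taking
       * the lock to avoid leaving the mutex held.
       */
      current = cpu_ldl_data(env, addr);

      qemu_mutex_lock(&tcg_exclusive_lock);
      if (env->exclusive_addr == addr && env->exclusive_val == current) {
          /* NB: this store can also fault and longjmp with the lock
           * still held -- exactly the "fail to unlock on a data abort"
           * problem noted above; a real design has to handle that.
           */
          cpu_stl_data(env, addr, val);
          env->exclusive_addr = -1;
          qemu_mutex_unlock(&tcg_exclusive_lock);
          return 0;   /* store succeeded */
      }
      qemu_mutex_unlock(&tcg_exclusive_lock);
      return 1;       /* reservation lost, store failed */
  }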
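
And a purely illustrative sketch of the per-CPU versus shared split
the last paragraph asks to see written down. GlobalMonitor and its
fields are invented names, not an existing QEMU structure; the per-CPU
local monitor state (exclusive_addr/exclusive_val) really does live in
today's CPUARMState:

  /* Invented structure for illustration: one instance shared by all
   * vCPUs, modelling the architectural global monitor.  The per-CPU
   * local monitor state would stay in CPUARMState but only be
   * read or written while holding 'lock'.
   */
  #include <stdint.h>
  #include "qemu/thread.h"

  typedef struct GlobalMonitor {
      QemuMutex lock;        /* serialises every ldrex/strex sequence */
      uint64_t addr;         /* address currently reserved, or -1     */
      int owner_cpu_index;   /* which vCPU holds the reservation      */
  } GlobalMonitor;

  static GlobalMonitor global_monitor;  /* qemu_mutex_init() at machine init */

Whether one coarse lock like this is enough, or whether something
finer-grained is wanted, is precisely the sort of design decision that
needs to be spelled out before the per-target code is written.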