On 2012-11-02 01:52, liu ping fan wrote:
> On Fri, Nov 2, 2012 at 2:44 AM, Jan Kiszka <jan.kis...@web.de> wrote:
>> On 2012-11-01 16:45, Avi Kivity wrote:
>>> On 10/29/2012 11:46 AM, liu ping fan wrote:
>>>> On Mon, Oct 29, 2012 at 5:32 PM, Avi Kivity <a...@redhat.com> wrote:
>>>>> On 10/29/2012 01:48 AM, Liu Ping Fan wrote:
>>>>>> For those address spaces which want to run outside the big lock, they
>>>>>> will be protected by their own local lock.
>>>>>>
>>>>>> Signed-off-by: Liu Ping Fan <pingf...@linux.vnet.ibm.com>
>>>>>> ---
>>>>>>  memory.c |   11 ++++++++++-
>>>>>>  memory.h |    5 ++++-
>>>>>>  2 files changed, 14 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/memory.c b/memory.c
>>>>>> index 2f68d67..ff34aed 100644
>>>>>> --- a/memory.c
>>>>>> +++ b/memory.c
>>>>>> @@ -1532,9 +1532,15 @@ void memory_listener_unregister(MemoryListener *listener)
>>>>>>      QTAILQ_REMOVE(&memory_listeners, listener, link);
>>>>>>  }
>>>>>>
>>>>>> -void address_space_init(AddressSpace *as, MemoryRegion *root)
>>>>>> +void address_space_init(AddressSpace *as, MemoryRegion *root, bool lock)
>>>>>
>>>>>
>>>>> Why not always use the lock?  Even if the big lock is taken, it doesn't
>>>>> hurt.  And eventually all address spaces will be fine-grained.
>>>>>
>>>> I had thought that only MMIO would be moved out of the big lock's
>>>> protection, while the other address spaces would pay extra cost. So
>>>> leave them as they are until they are ready to run outside the big
>>>> lock.
>>>
>>> The other address spaces are pio (which also needs fine-grained locking)
>>> and the dma address spaces (which are like address_space_memory, except
>>> they are accessed via DMA instead of from the vcpu).
>>
>> The problem is with memory regions that don't do fine-grained locking
>> yet, thus don't provide ref/unref. Then we fall back to taking BQL
>> across dispatch. If the dispatch caller already holds the BQL, we will
>> bail out.
>>
> Yes, these asymmetric callers are troublesome. Currently, I just make
> exceptions for them, and would like to make the big lock recursive.
> But that approach may make bugs harder to find.
> 
>> As I understand the series, as->lock == NULL means that we will never
>> take any lock during dispatch as the caller is not yet ready for
>> fine-grained locking. This prevents the problem - for PIO at least. But
>> this series should break TCG as it calls into MMIO dispatch from the
>> VCPU while holding the BQL.
>>
> What about adding another condition, "dispatch_type == DISPATCH_MMIO",
> to distinguish this situation?

An alternative pattern that we will also use for core services is to
provide an additional entry point, one that indicates that the caller
doesn't hold the BQL. Then we will gradually move things over until the
existing entry point is obsolete.

Jan
