On 27.09.21 16:34, Jan Beulich wrote:
> On 27.09.2021 14:08, Oleksandr Andrushchenko wrote:
>> On 27.09.21 14:31, Jan Beulich wrote:
>>> On 23.09.2021 14:55, Oleksandr Andrushchenko wrote:
>>>> --- a/xen/drivers/passthrough/pci.c
>>>> +++ b/xen/drivers/passthrough/pci.c
>>>> @@ -890,6 +890,31 @@ int pci_remove_virtual_device(struct domain *d, const struct pci_dev *pdev)
>>>>        return 0;
>>>>    }
>>>>    
>>>> +/*
>>>> + * Find the physical device which is mapped to the virtual device
>>>> + * and translate virtual SBDF to the physical one.
>>>> + */
>>>> +bool pci_translate_virtual_device(struct vcpu *v, pci_sbdf_t *sbdf)
>>> Why struct vcpu, when you only need ...
>>>
>>>> +{
>>>> +    struct domain *d = v->domain;
>>> ... this? It's also not really logical for this function to take a
>>> struct vcpu, as the translation should be uniform within a domain.
>> Agree, struct domain is just enough
>>> Also - const please (as said elsewhere before, ideally wherever possible
>>> and sensible).
>> Ok
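To be concrete, the prototype would then become something along these
lines (a sketch only; naming as in the current patch):

bool pci_translate_virtual_device(const struct domain *d, pci_sbdf_t *sbdf);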
>>>> +    struct vpci_dev *vdev;
>>>> +    bool found = false;
>>>> +
>>>> +    pcidevs_lock();
>>>> +    list_for_each_entry ( vdev, &d->vdev_list, list )
>>>> +    {
>>>> +        if ( vdev->sbdf.sbdf == sbdf->sbdf )
>>>> +        {
>>>> +            /* Replace virtual SBDF with the physical one. */
>>>> +            *sbdf = vdev->pdev->sbdf;
>>>> +            found = true;
>>>> +            break;
>>>> +        }
>>>> +    }
>>> For a DomU with just one or at most a couple of devices, such a brute
>>> force lookup may be fine. What about Dom0 though? The physical topology
>>> gets split at the segment level, so maybe this would be a reasonable
>>> granularity here as well?
>> Not sure I am following why topology matters here. We are just trying to
>> match one SBDF (as seen by the guest) to another SBDF (the physical one,
>> as seen by Dom0), so we can proxy DomU's configuration space accesses
>> to the proper device in Dom0.
> Topology here matters only insofar as I've suggested having separate
> lists per segment, to reduce lookup times. Other methods of avoiding a
> fully linear search are of course possible as well.

Ah, in that respect then, of course. But let's be realistic: how many PCI
devices are normally passed through to a guest? I would assume fewer than
10 most of the time. With so small a number of devices I see no benefit,
only unneeded complexity, in accounting virtual devices per segment and
performing the corresponding lookup. So I would go with a single list and
the "brute force" lookup unless it is clearly shown that this needs to be
optimized.
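
For reference, this is how I would keep the function with the agreed
changes applied (an untested sketch: const struct domain instead of a
struct vcpu, the single per-domain list retained, and the blank line
added before the final return):

bool pci_translate_virtual_device(const struct domain *d, pci_sbdf_t *sbdf)
{
    const struct vpci_dev *vdev;
    bool found = false;

    pcidevs_lock();
    list_for_each_entry ( vdev, &d->vdev_list, list )
    {
        if ( vdev->sbdf.sbdf == sbdf->sbdf )
        {
            /* Replace virtual SBDF with the physical one. */
            *sbdf = vdev->pdev->sbdf;
            found = true;
            break;
        }
    }
    pcidevs_unlock();

    return found;
}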

>
>>>> +    pcidevs_unlock();
>>>> +    return found;
>>> Nit: Blank line please ahead of the main "return" of a function.
>> Sure
>>>> +}
>>>> +
>>>>    /* Caller should hold the pcidevs_lock */
>>>>    static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
>>>>                               uint8_t devfn)
>>> Seeing this function in context (which patch 2 adds without any #ifdef
>>> around it afaics),
>> I believe you are talking about vpci_deassign_device here.
>> vpci_{assign|deassign}_device indeed do not seem to be called on x86 PVH
>> as of now.
>>
>>>    will this new function needlessly be built on x86 as
>>> well?
>> It will at the moment. But in order to avoid ifdefery I would still like
>> to implement it as an empty function for x86.
>>
>>>    (I didn't look at other intermediate patches yet, so please
>>> forgive if I've missed the addition of an #ifdef.)
>> So, I can gate this with HAS_VPCI_GUEST_SUPPORT in patch 2
>> (HAS_VPCI_GUEST_SUPPORT is introduced in patch 4, so I'll move it to
>> patch 2). Does this sound good?
> Yes.
I'll see how I can get rid of the code that x86 doesn't use.
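
Something like this, perhaps (a sketch only; I'm assuming the Kconfig
option from patch 4 surfaces as CONFIG_HAS_VPCI_GUEST_SUPPORT):

#ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
bool pci_translate_virtual_device(const struct domain *d, pci_sbdf_t *sbdf);
#else
static inline bool pci_translate_virtual_device(const struct domain *d,
                                                pci_sbdf_t *sbdf)
{
    /* Without guest vPCI support there is nothing to translate. */
    return false;
}
#endif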
> Jan
>
Thank you,

Oleksandr
