On Monday 02 April 2018 03:32 PM, Burakov, Anatoly wrote:
> On 02-Apr-18 6:35 AM, santosh wrote:
>>
>> On Sunday 01 April 2018 05:56 PM, Anatoly Burakov wrote:
>>> We already use VA addresses for IOVA purposes everywhere if we're in
>>> RTE_IOVA_VA mode:
>>>   1) rte_malloc_virt2phy()/rte_malloc_virt2iova() always return VA addresses
>>>   2) Because of 1), memzone's IOVA is set to VA address on reserve
>>>   3) Because of 2), mempool's IOVA addresses are set to VA addresses
>>>
>>> The only place where actual physical addresses are stored is in memsegs at
>>> init time, but we're not using them anywhere, and there is no external API
>>> to get those addresses (aside from manually iterating through memsegs), nor
>>> should anyone care about them in RTE_IOVA_VA mode.
>>>
>>> So, fix EAL initialization to allocate VA-contiguous segments at the start
>>> without regard for physical addresses (as if they weren't available), and
>>> use VA to set final IOVA addresses for all pages.
>>>
>>> Fixes: 62196f4e0941 ("mem: rename address mapping function to IOVA")
>>> Cc: tho...@monjalon.net
>>> Cc: sta...@dpdk.org
>>>
>>> Signed-off-by: Anatoly Burakov <anatoly.bura...@intel.com>
>>> ---
>>>   lib/librte_eal/linuxapp/eal/eal_memory.c | 6 +++++-
>>>   1 file changed, 5 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
>>> index 38853b7..ecf375b 100644
>>> --- a/lib/librte_eal/linuxapp/eal/eal_memory.c
>>> +++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
>>> @@ -473,6 +473,9 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi,
>>>              hugepg_tbl[i].orig_va = virtaddr;
>>>          }
>>>          else {
>>> +            /* rewrite physical addresses in IOVA as VA mode */
>>> +            if (rte_eal_iova_mode() == RTE_IOVA_VA)
>>> +                hugepg_tbl[i].physaddr = (uintptr_t)virtaddr;
>>>              hugepg_tbl[i].final_va = virtaddr;
>>>          }
>>>
>>> @@ -1091,7 +1094,8 @@ rte_eal_hugepage_init(void)
>>>                  continue;
>>>          }
>>>
>>> -        if (phys_addrs_available) {
>>> +        if (phys_addrs_available &&
>>> +                rte_eal_iova_mode() != RTE_IOVA_VA) {
>>
>> This could also be done like below:
>>
>> if (phys_addrs_available)
>>     /* find IOVA addresses for each hugepage */
>>     if (find_iovaaddrs(&tmp_hp[hp_offset], hpi) < 0) {
>>
>> such that find_iovaaddrs() internally calls rte_mem_virt2iova().
>>
>> That way we avoid the IOVA mode check in the above if condition.
>> Does that make sense?
>> Thanks.
>> [...]
>>
>
> Hi,
>
> That was the initial implementation; however, it doesn't work because we do
> two mappings, original and final, and physical addresses are found during
> the original mappings (meaning their VAs will be all over the place). We are
> interested in the final VA as the IOVA (after all of the sorting and figuring
> out which segments are contiguous), hence the current implementation.
>
Ok.

Whole series,
Acked-by: Santosh Shukla <santosh.shu...@caviumnetworks.com>
