Will post an updated patch. My responses are inline below.

Regards,
Ramesh

-----Original Message-----
From: Kuehling, Felix <felix.kuehl...@amd.com> 
Sent: Wednesday, June 29, 2022 8:58 AM
To: Errabolu, Ramesh <ramesh.errab...@amd.com>; amd-gfx@lists.freedesktop.org; 
dan.carpen...@oracle.com
Subject: Re: [PATCH] drm/amdkfd: Fix warnings from static analyzer Smatch

Am 2022-06-28 um 20:03 schrieb Errabolu, Ramesh:
>
> My responses are inline
>
> -----Original Message-----
> From: Kuehling, Felix <felix.kuehl...@amd.com>
> Sent: Tuesday, June 28, 2022 6:41 PM
> To: Errabolu, Ramesh <ramesh.errab...@amd.com>; 
> amd-gfx@lists.freedesktop.org; dan.carpen...@oracle.com
> Subject: Re: [PATCH] drm/amdkfd: Fix warnings from static analyzer 
> Smatch
>
>
> Am 2022-06-28 um 19:25 schrieb Ramesh Errabolu:
>> The patch fixes a couple of warnings reported by Smatch, a static analyzer.
>>
>> Signed-off-by: Ramesh Errabolu <ramesh.errab...@amd.com>
>> Reported-by: Dan Carpenter <dan.carpen...@oracle.com>
>> ---
>>    drivers/gpu/drm/amd/amdkfd/kfd_topology.c | 36 ++++++++++++-----------
>>    1 file changed, 19 insertions(+), 17 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
>> index 25990bec600d..9d7b9ad70bc8 100644
>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
>> @@ -1417,15 +1417,17 @@ static int kfd_create_indirect_link_prop(struct kfd_topology_device *kdev, int g
>>    
>>              /* find CPU <-->  CPU links */
>>              cpu_dev = kfd_topology_device_by_proximity_domain(i);
>> -            if (cpu_dev) {
>> -                    list_for_each_entry(cpu_link,
>> -                                    &cpu_dev->io_link_props, list) {
>> -                            if (cpu_link->node_to == gpu_link->node_to)
>> -                                    break;
>> -                    }
>> -            }
>> +            if (!cpu_dev)
>> +                    continue;
>> +
>> +            cpu_link = NULL;
> This initialization is unnecessary. list_for_each_entry will always 
> initialize it.
>
>
>> +            list_for_each_entry(cpu_link, &cpu_dev->io_link_props, list)
>> +                    if (cpu_link->node_to == gpu_link->node_to)
>> +                            break;
>>    
>> -            if (cpu_link->node_to != gpu_link->node_to)
>> +            /* Ensures we didn't exit from list search with no hits */
>> +            if (list_entry_is_head(cpu_link, &cpu_dev->io_link_props, list) ||
>> +                    (cpu_link->node_to != gpu_link->node_to))
> The second condition is redundant. If the list entry is not the head, 
> the node_to must have already matched in the loop.
>
> Ramesh: Syntactically, it is possible to walk down the whole list without
> a hit. The list_entry_is_head() check is for that scenario.

If you traverse the whole list without a hit, list_entry_is_head will be true. 
That is also the only case where cpu_link->node_to != gpu_link->node_to is 
possible. Therefore that condition is redundant. 
Just checking list_entry_is_head is sufficient.

Ramesh: You are right, the list head check is sufficient, as it implies that
no cpu_link object passed the test (cpu_link->node_to == gpu_link->node_to).
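
For reference, list_for_each_entry is roughly the following (paraphrasing
include/linux/list.h, not quoted verbatim):

    #define list_for_each_entry(pos, head, member)                         \
            for (pos = list_first_entry(head, typeof(*pos), member);       \
                 !list_entry_is_head(pos, head, member);                   \
                 pos = list_next_entry(pos, member))

So the iterator is always assigned before the loop body runs, and the only way
to fall out of the loop without hitting the break is for the iterator to reach
the list head, which is exactly what list_entry_is_head() tests.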

That said, as I pointed out below, you're still using cpu_link outside the 
loop. Therefore it's likely the static checker is still going to complain.

Ramesh: I suspect the static analyzer was complaining about cpu_link because
it was only being used in the loop conditionally. The new code uses cpu_link
in the loop unconditionally, so it must be valid once we get past the
list_entry_is_head() check.
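
If Smatch still objects to the iterator being used after the loop, the
separate-iterator variant Felix suggests below would look roughly like this
(untested sketch; the local name "link" is only for illustration, and I am
assuming the entry type is struct kfd_iolink_properties as used elsewhere in
kfd_topology.c):

    struct kfd_iolink_properties *link, *cpu_link = NULL;

    list_for_each_entry(link, &cpu_dev->io_link_props, list) {
            if (link->node_to == gpu_link->node_to) {
                    cpu_link = link;
                    break;
            }
    }

    /* No matching CPU <--> CPU link was found */
    if (!cpu_link)
            return -ENOMEM;

That keeps the list iterator strictly inside the loop, so nothing is assumed
about its value after traversal.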

Regards,
   Felix


>
> But I'm not sure this solution is going to satisfy the static checker. 
> It objects to using the iterator (cpu_link) outside the loop. I think 
> a proper solution, that doesn't make any assumptions about how 
> list_for_each_entry is implemented, would be to declare a separate 
> variable as the iterator, and assign cpu_link in the loop only if 
> there is a match.
>
> Ramesh: Will wait for a response from Dan.
>
> Regards,
>     Felix
>
>
>>                      return -ENOMEM;
>>    
>>              /* CPU <--> CPU <--> GPU, GPU node*/
>> @@ -1510,16 +1512,16 @@ static int kfd_add_peer_prop(struct kfd_topology_device *kdev,
>>              cpu_dev = kfd_topology_device_by_proximity_domain(iolink1->node_to);
>>              if (cpu_dev) {
>>                      list_for_each_entry(iolink3, &cpu_dev->io_link_props, list)
>> -                            if (iolink3->node_to == iolink2->node_to)
>> +                            if (iolink3->node_to == iolink2->node_to) {
>> +                                    props->weight += iolink3->weight;
>> +                                    props->min_latency += iolink3->min_latency;
>> +                                    props->max_latency += iolink3->max_latency;
>> +                                    props->min_bandwidth = min(props->min_bandwidth,
>> +                                                                    iolink3->min_bandwidth);
>> +                                    props->max_bandwidth = min(props->max_bandwidth,
>> +                                                                    iolink3->max_bandwidth);
>>                                      break;
>> -
>> -                    props->weight += iolink3->weight;
>> -                    props->min_latency += iolink3->min_latency;
>> -                    props->max_latency += iolink3->max_latency;
>> -                    props->min_bandwidth = min(props->min_bandwidth,
>> -                                                    iolink3->min_bandwidth);
>> -                    props->max_bandwidth = min(props->max_bandwidth,
>> -                                                    iolink3->max_bandwidth);
>> +                            }
>>              } else {
>>                      WARN(1, "CPU node not found");
>>              }
