Re: [PATCH v4] IB/sa: Resolving use-after-free in ib_nl_send_msg

2020-07-07 Thread Jason Gunthorpe
On Tue, Jul 07, 2020 at 06:05:02PM -0700, Divya Indi wrote:
> Thanks Jason.
> 
> Appreciate your help and feedback for fixing this issue.
> 
> Would it be possible to access the edited version of the patch?
> If yes, please share a pointer to the same.

https://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma.git/commit/?h=for-rc&id=f427f4d6214c183c474eeb46212d38e6c7223d6a

Jason


Re: [PATCH v4] IB/sa: Resolving use-after-free in ib_nl_send_msg

2020-07-07 Thread Divya Indi
Thanks Jason.

Appreciate your help and feedback for fixing this issue.

Would it be possible to access the edited version of the patch?
If yes, please share a pointer to the same.

Thanks,
Divya


On 7/2/20 12:07 PM, Jason Gunthorpe wrote:
> On Tue, Jun 23, 2020 at 07:13:09PM -0700, Divya Indi wrote:
>> [...]
> I made a few edits, and applied to for-rc
>
> Thanks,
> Jason


Re: [PATCH v4] IB/sa: Resolving use-after-free in ib_nl_send_msg

2020-07-02 Thread Jason Gunthorpe
On Tue, Jun 23, 2020 at 07:13:09PM -0700, Divya Indi wrote:
> [...]

I made a few edits, and applied to for-rc

Thanks,
Jason


Re: [PATCH v4] IB/sa: Resolving use-after-free in ib_nl_send_msg

2020-06-25 Thread Leon Romanovsky
On Thu, Jun 25, 2020 at 10:11:07AM -0700, Divya Indi wrote:
> Hi Leon,
>
> Please find my comments inline -
>
> On 6/25/20 3:09 AM, Leon Romanovsky wrote:
> > On Tue, Jun 23, 2020 at 07:13:09PM -0700, Divya Indi wrote:
> >> [...]
> >> @@ -860,36 +864,39 @@ static int ib_nl_send_msg(struct ib_sa_query *query, gfp_t gfp_mask)
> >>	/* Repair the nlmsg header length */
> >>	nlmsg_end(skb, nlh);
> >>
> >> -	return rdma_nl_multicast(&init_net, skb, RDMA_NL_GROUP_LS, gfp_mask);
> >> -}
> >> +	gfp_flag = ((gfp_mask & GFP_ATOMIC) == GFP_ATOMIC) ? GFP_ATOMIC :
> >> +		GFP_NOWAIT;
> > I would say that the better way will be to write something like this:
> > gfp_flag |= GFP_NOWAIT;
>
> You mean gfp_flag = gfp_mask|GFP_NOWAIT? [We don't want to modify the
> gfp_mask sent by the caller.]
>
> #define GFP_ATOMIC  (__GFP_HIGH|__GFP_ATOMIC|__GFP_KSWAPD_RECLAIM)
> #define GFP_KERNEL  (__GFP_RECLAIM | __GFP_IO | __GFP_FS)
> #define GFP_NOWAIT  (__GFP_KSWAPD_RECLAIM)
>
> If a caller passes GFP_KERNEL, "gfp_mask|GFP_NOWAIT" will still have
> __GFP_RECLAIM, __GFP_IO and __GFP_FS set, which is not suitable for use
> under a spinlock.

Ahh, sorry, I completely forgot about the spinlock part.

Thanks

>
> Thanks,
> Divya
>
> >
> > Thanks


Re: [PATCH v4] IB/sa: Resolving use-after-free in ib_nl_send_msg

2020-06-25 Thread Divya Indi
Hi Leon,

Please find my comments inline -

On 6/25/20 3:09 AM, Leon Romanovsky wrote:
> On Tue, Jun 23, 2020 at 07:13:09PM -0700, Divya Indi wrote:
>> [...]
>> @@ -860,36 +864,39 @@ static int ib_nl_send_msg(struct ib_sa_query *query, gfp_t gfp_mask)
>>	/* Repair the nlmsg header length */
>>	nlmsg_end(skb, nlh);
>>
>> -	return rdma_nl_multicast(&init_net, skb, RDMA_NL_GROUP_LS, gfp_mask);
>> -}
>> +	gfp_flag = ((gfp_mask & GFP_ATOMIC) == GFP_ATOMIC) ? GFP_ATOMIC :
>> +		GFP_NOWAIT;
> I would say that the better way will be to write something like this:
> gfp_flag |= GFP_NOWAIT;

You mean gfp_flag = gfp_mask|GFP_NOWAIT? [We don't want to modify the
gfp_mask sent by the caller.]

#define GFP_ATOMIC  (__GFP_HIGH|__GFP_ATOMIC|__GFP_KSWAPD_RECLAIM)
#define GFP_KERNEL  (__GFP_RECLAIM | __GFP_IO | __GFP_FS)
#define GFP_NOWAIT  (__GFP_KSWAPD_RECLAIM)

If a caller passes GFP_KERNEL, "gfp_mask|GFP_NOWAIT" will still have
__GFP_RECLAIM, __GFP_IO and __GFP_FS set, which is not suitable for use
under a spinlock.
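
For illustration, a minimal user-space C sketch of why OR-ing the flags is
insufficient while the conditional downgrade in the patch works. The __GFP_*
values below are simplified stand-ins, not the kernel's real bit positions:

#include <stdio.h>

/* Simplified stand-ins for the kernel gfp bits quoted above. */
#define __GFP_HIGH            0x01u
#define __GFP_ATOMIC          0x02u
#define __GFP_KSWAPD_RECLAIM  0x04u
#define __GFP_DIRECT_RECLAIM  0x08u	/* the "may sleep" bit */
#define __GFP_IO              0x10u
#define __GFP_FS              0x20u

#define GFP_ATOMIC (__GFP_HIGH | __GFP_ATOMIC | __GFP_KSWAPD_RECLAIM)
#define GFP_NOWAIT (__GFP_KSWAPD_RECLAIM)
#define GFP_KERNEL (__GFP_KSWAPD_RECLAIM | __GFP_DIRECT_RECLAIM | \
		    __GFP_IO | __GFP_FS)

int main(void)
{
	unsigned int gfp_mask = GFP_KERNEL;	/* a sleeping caller's mask */

	/* OR-ing does not clear the blocking bits. */
	unsigned int ored = gfp_mask | GFP_NOWAIT;

	/* The patch's approach: pass GFP_ATOMIC through, downgrade
	 * anything else (e.g. GFP_KERNEL) to GFP_NOWAIT. */
	unsigned int chosen = ((gfp_mask & GFP_ATOMIC) == GFP_ATOMIC) ?
				GFP_ATOMIC : GFP_NOWAIT;

	printf("ored   can sleep: %s\n",
	       (ored & __GFP_DIRECT_RECLAIM) ? "yes" : "no");	/* yes */
	printf("chosen can sleep: %s\n",
	       (chosen & __GFP_DIRECT_RECLAIM) ? "yes" : "no");	/* no */
	return 0;
}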

Thanks,
Divya

>
> Thanks


Re: [PATCH v4] IB/sa: Resolving use-after-free in ib_nl_send_msg

2020-06-25 Thread Leon Romanovsky
On Tue, Jun 23, 2020 at 07:13:09PM -0700, Divya Indi wrote:
> [...]
> @@ -860,36 +864,39 @@ static int ib_nl_send_msg(struct ib_sa_query *query, gfp_t gfp_mask)
>	/* Repair the nlmsg header length */
>	nlmsg_end(skb, nlh);
>
> -	return rdma_nl_multicast(&init_net, skb, RDMA_NL_GROUP_LS, gfp_mask);
> -}
> +	gfp_flag = ((gfp_mask & GFP_ATOMIC) == GFP_ATOMIC) ? GFP_ATOMIC :
> +		GFP_NOWAIT;

I would say that the better way will be to write something like this:
gfp_flag |= GFP_NOWAIT;

Thanks


[PATCH v4] IB/sa: Resolving use-after-free in ib_nl_send_msg

2020-06-23 Thread Divya Indi
Commit 3ebd2fd0d011 ("IB/sa: Put netlink request into the request list
before sending") -
1. Adds the query to the request list before ib_nl_send_msg.
2. Moves ib_nl_send_msg out of the spinlock, hence it is safe to use gfp_mask
as is.

However, if there is a delay in sending out the request (e.g. a delay due to
a low-memory situation), the timer to handle the request timeout might kick in
before the request is sent out to ibacm via netlink. ib_nl_request_timeout may
then release the query, causing a use-after-free when ib_nl_send_msg
subsequently accesses the query.
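
Schematically, the racing interleaving looks like this (an illustrative
sketch based on the description above, not literal driver code):

  CPU0: ib_nl_make_request()           CPU1: ib_nl_request_timeout()
  add query to ib_nl_request_list
  (send delayed, e.g. low memory)
                                       timeout fires; query is removed
                                       from the list and released
  ib_nl_send_msg(query)
      -> dereferences the freed query (use-after-free)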

Call Trace for the above race:

[] ? ib_pack+0x17b/0x240 [ib_core]
[] ib_sa_path_rec_get+0x181/0x200 [ib_sa]
[] rdma_resolve_route+0x3c0/0x8d0 [rdma_cm]
[] ? cma_bind_port+0xa0/0xa0 [rdma_cm]
[] ? rds_rdma_cm_event_handler_cmn+0x850/0x850 [rds_rdma]
[] rds_rdma_cm_event_handler_cmn+0x22c/0x850 [rds_rdma]
[] rds_rdma_cm_event_handler+0x10/0x20 [rds_rdma]
[] addr_handler+0x9e/0x140 [rdma_cm]
[] process_req+0x134/0x190 [ib_addr]
[] process_one_work+0x169/0x4a0
[] worker_thread+0x5b/0x560
[] ? flush_delayed_work+0x50/0x50
[] kthread+0xcb/0xf0
[] ? __schedule+0x24a/0x810
[] ? __schedule+0x24a/0x810
[] ? kthread_create_on_node+0x180/0x180
[] ret_from_fork+0x47/0x90
[] ? kthread_create_on_node+0x180/0x180

RIP  [] send_mad+0x33d/0x5d0 [ib_sa]

To resolve the above issue -
1. Add the request to the request list only after the request has been sent
out.
2. To handle the race where a response comes in before the request is added to
the request list, send (rdma_nl_multicast) and add to the list while holding
the spinlock - request_lock.
3. Use non-blocking memory allocation flags for rdma_nl_multicast, since it is
now called while holding a spinlock.

Fixes: 3ebd2fd0d011 ("IB/sa: Put netlink request into the request list before sending")

Signed-off-by: Divya Indi 
---
v1:
- Use flag IB_SA_NL_QUERY_SENT to prevent the use-after-free.

v2:
- Use atomic bit ops for setting and testing IB_SA_NL_QUERY_SENT.
- Rewording and adding comments.

v3:
- Change approach and remove usage of IB_SA_NL_QUERY_SENT.
- Add req to request list only after the request has been sent out.
- Send and add to list while holding the spinlock (request_lock).
- Override gfp_mask and use GFP_NOWAIT for rdma_nl_multicast since we
  need non-blocking memory allocation while holding a spinlock.

v4:
- Formatting changes.
- Use GFP_NOWAIT conditionally - only when GFP_ATOMIC is not provided by the caller.
---
 drivers/infiniband/core/sa_query.c | 41 ++
 1 file changed, 24 insertions(+), 17 deletions(-)

diff --git a/drivers/infiniband/core/sa_query.c b/drivers/infiniband/core/sa_query.c
index 74e0058..9066d48 100644
--- a/drivers/infiniband/core/sa_query.c
+++ b/drivers/infiniband/core/sa_query.c
@@ -836,6 +836,10 @@ static int ib_nl_send_msg(struct ib_sa_query *query, gfp_t gfp_mask)
 	void *data;
 	struct ib_sa_mad *mad;
 	int len;
+	unsigned long flags;
+	unsigned long delay;
+	gfp_t gfp_flag;
+	int ret;
 
 	mad = query->mad_buf->mad;
 	len = ib_nl_get_path_rec_attrs_len(mad->sa_hdr.comp_mask);
@@ -860,36 +864,39 @@ static int ib_nl_send_msg(struct ib_sa_query *query, gfp_t gfp_mask)
 	/* Repair the nlmsg header length */
 	nlmsg_end(skb, nlh);
 
-	return rdma_nl_multicast(&init_net, skb, RDMA_NL_GROUP_LS, gfp_mask);
-}
+	gfp_flag = ((gfp_mask & GFP_ATOMIC) == GFP_ATOMIC) ? GFP_ATOMIC :
+		GFP_NOWAIT;
 
-static int ib_nl_make_request(struct ib_sa_query *query, gfp_t gfp_mask)
-{
-	unsigned long flags;
-	unsigned long delay;
-	int ret;
+	spin_lock_irqsave(&ib_nl_request_lock, flags);
+	ret = rdma_nl_multicast(&init_net, skb, RDMA_NL_GROUP_LS, gfp_flag);
 
-	INIT_LIST_HEAD(&query->list);
-	query->seq = (u32)atomic_inc_return(&ib_nl_sa_request_seq);
+	if (ret)
+		goto out;
 
-	/* Put the request on the list first.*/
-	spin_lock_irqsave(&ib_nl_request_lock, flags);
+	/* Put the request on the list.*/
 	delay = msecs_to_jiffies(sa_local_svc_timeout_ms);
 	query->timeout = delay + jiffies;
 	list_add_tail(&query->list, &ib_nl_request_list);
 	/* Start the timeout if this is the only request */
 	if (ib_nl_request_list.next == &query->list)
 		queue_delayed_work(ib_nl_wq, &ib_nl_timed_work, delay);
+
+out:
 	spin_unlock_irqrestore(&ib_nl_request_lock, flags);
 
+	return ret;
+}
+
+static int ib_nl_make_request(struct ib_sa_query *query, gfp_t gfp_mask)
+{
+	int ret;
+
+	INIT_LIST_HEAD(&query->list);
+	query->seq = (u32)atomic_inc_return(&ib_nl_sa_request_seq);
+
 	ret = ib_nl_send_msg(query, gfp_mask);
-	if (ret) {
+	if (ret)
 		ret = -EIO;
-		/* Remove the request */
-		spin_lock_irqsave(&ib_nl_request_lock, flags);
-		list_del(&query->list);
-		spin_unlock_irqrestore(&ib_nl_request_lock, flags);
-	}
 
 	return ret;
 }
-- 
1.8.3.1