
Re: [PATCH] csiostor: Fix backwards locking in the function __csio_unreg_rnode

2016-04-06 Thread Bastien Philbert


On 2016-04-06 01:28 PM, James Bottomley wrote:
> On Wed, 2016-04-06 at 13:23 -0400, Bastien Philbert wrote:
>>
>> On 2016-04-06 01:14 PM, James Bottomley wrote:
>>> On Wed, 2016-04-06 at 10:36 -0400, Bastien Philbert wrote:
>>>>
>>>> On 2016-04-06 10:24 AM, James Bottomley wrote:
>>>>> On Wed, 2016-04-06 at 10:11 -0400, Bastien Philbert wrote:
>>>>>>
>>>>>> On 2016-04-06 09:38 AM, James Bottomley wrote:
>>>>>>> On Wed, 2016-04-06 at 09:21 -0400, Bastien Philbert wrote:
>>>>>>>>
>>>>>>>> On 2016-04-06 03:48 AM, Julian Calaby wrote:
>>>>>>>>> Hi Bastien,
>>>>>>>>>
>>>>>>>>> On Wed, Apr 6, 2016 at 7:19 AM, Bastien Philbert
>>>>>>>>>  wrote:
>>>>>>>>>> This fixes backwards locking in the function
>>>>>>>>>> __csio_unreg_rnode
>>>>>>>>>> to
>>>>>>>>>> properly lock before the call to the function
>>>>>>>>>> csio_unreg_rnode
>>>>>>>>>> and
>>>>>>>>>> not unlock with spin_unlock_irq as this would not
>>>>>>>>>> allow
>>>>>>>>>> the
>>>>>>>>>> proper
>>>>>>>>>> protection for concurrent access on the shared
>>>>>>>>>> csio_hw
>>>>>>>>>> structure
>>>>>>>>>> pointer hw. In addition switch the locking after the
>>>>>>>>>> critical
>>>>>>>>>> region
>>>>>>>>>> function call to properly unlock instead with
>>>>>>>>>> spin_unlock_irq
>>>>>>>>>> on
>>>>>>>>>>
>>>>>>>>>> Signed-off-by: Bastien Philbert <
>>>>>>>>>> bastienphilb...@gmail.com>
>>>>>>>>>> ---
>>>>>>>>>>  drivers/scsi/csiostor/csio_rnode.c | 4 ++--
>>>>>>>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>>>>>>
>>>>>>>>>> diff --git a/drivers/scsi/csiostor/csio_rnode.c
>>>>>>>>>> b/drivers/scsi/csiostor/csio_rnode.c
>>>>>>>>>> index e9c3b04..029a09e 100644
>>>>>>>>>> --- a/drivers/scsi/csiostor/csio_rnode.c
>>>>>>>>>> +++ b/drivers/scsi/csiostor/csio_rnode.c
>>>>>>>>>> @@ -580,9 +580,9 @@ __csio_unreg_rnode(struct
>>>>>>>>>> csio_rnode
>>>>>>>>>> *rn)
>>>>>>>>>> ln->last_scan_ntgts--;
>>>>>>>>>> }
>>>>>>>>>>
>>>>>>>>>> -   spin_unlock_irq(&hw->lock);
>>>>>>>>>> -   csio_unreg_rnode(rn);
>>>>>>>>>> spin_lock_irq(&hw->lock);
>>>>>>>>>> +   csio_unreg_rnode(rn);
>>>>>>>>>> +   spin_unlock_irq(&hw->lock);
>>>>>>>>>
>>>>>>>>> Are you _certain_ this is correct? This construct
>>>>>>>>> usually
>>>>>>>>> appears
>>>>>>>>> when
>>>>>>>>> a function has a particular lock held, then needs to
>>>>>>>>> unlock
>>>>>>>>> it
>>>>>>>>> to
>>>>>>>>> call
>>>>>>>>> some other function. Are you _certain_ that this isn't
>>>>>>>>> the
>>>>>>>>> case?
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>>
>>>>>>>> Yes I am pretty certain this is correct. I checked the
>>>>>>>> paths
>>>>>>>> that
>>>>>>>> called this function
>>>>>>>> and it was weird that none of them grabbed the spinlock
>>>>>>>> beforehand.
>>>>>>>
>>>>>>> That's not good enough.  

Re: [PATCH] csiostor: Fix backwards locking in the function __csio_unreg_rnode

2016-04-06 Thread Bastien Philbert


On 2016-04-06 01:14 PM, James Bottomley wrote:
> On Wed, 2016-04-06 at 10:36 -0400, Bastien Philbert wrote:
>>
>> On 2016-04-06 10:24 AM, James Bottomley wrote:
>>> On Wed, 2016-04-06 at 10:11 -0400, Bastien Philbert wrote:
>>>>
>>>> On 2016-04-06 09:38 AM, James Bottomley wrote:
>>>>> On Wed, 2016-04-06 at 09:21 -0400, Bastien Philbert wrote:
>>>>>>
>>>>>> On 2016-04-06 03:48 AM, Julian Calaby wrote:
>>>>>>> Hi Bastien,
>>>>>>>
>>>>>>> On Wed, Apr 6, 2016 at 7:19 AM, Bastien Philbert
>>>>>>>  wrote:
>>>>>>>> This fixes backwards locking in the function
>>>>>>>> __csio_unreg_rnode
>>>>>>>> to
>>>>>>>> properly lock before the call to the function
>>>>>>>> csio_unreg_rnode
>>>>>>>> and
>>>>>>>> not unlock with spin_unlock_irq as this would not allow
>>>>>>>> the
>>>>>>>> proper
>>>>>>>> protection for concurrent access on the shared csio_hw
>>>>>>>> structure
>>>>>>>> pointer hw. In addition switch the locking after the
>>>>>>>> critical
>>>>>>>> region
>>>>>>>> function call to properly unlock instead with
>>>>>>>> spin_unlock_irq
>>>>>>>> on
>>>>>>>>
>>>>>>>> Signed-off-by: Bastien Philbert <
>>>>>>>> bastienphilb...@gmail.com>
>>>>>>>> ---
>>>>>>>>  drivers/scsi/csiostor/csio_rnode.c | 4 ++--
>>>>>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/scsi/csiostor/csio_rnode.c
>>>>>>>> b/drivers/scsi/csiostor/csio_rnode.c
>>>>>>>> index e9c3b04..029a09e 100644
>>>>>>>> --- a/drivers/scsi/csiostor/csio_rnode.c
>>>>>>>> +++ b/drivers/scsi/csiostor/csio_rnode.c
>>>>>>>> @@ -580,9 +580,9 @@ __csio_unreg_rnode(struct csio_rnode
>>>>>>>> *rn)
>>>>>>>> ln->last_scan_ntgts--;
>>>>>>>> }
>>>>>>>>
>>>>>>>> -   spin_unlock_irq(&hw->lock);
>>>>>>>> -   csio_unreg_rnode(rn);
>>>>>>>> spin_lock_irq(&hw->lock);
>>>>>>>> +   csio_unreg_rnode(rn);
>>>>>>>> +   spin_unlock_irq(&hw->lock);
>>>>>>>
>>>>>>> Are you _certain_ this is correct? This construct usually
>>>>>>> appears
>>>>>>> when
>>>>>>> a function has a particular lock held, then needs to unlock
>>>>>>> it
>>>>>>> to
>>>>>>> call
>>>>>>> some other function. Are you _certain_ that this isn't the
>>>>>>> case?
>>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>> Yes I am pretty certain this is correct. I checked the paths
>>>>>> that
>>>>>> called this function
>>>>>> and it was weird that none of them grabbed the spinlock
>>>>>> beforehand.
>>>>>
>>>>> That's not good enough.  If your theory is correct, lockdep
>>>>> should
>>>>> be
>>>>> dropping an already unlocked assertion in this codepath ... do
>>>>> you
>>>>> see
>>>>> this?
>>>>>
>>>>> James
>>>>>
>>>>>
>>>> Yes I do.
>>>
>>> You mean you don't see the lockdep assert, since you're asking to 
>>> drop the patch?
>>>
>>>>  For now just drop the patch but I am still concerned that we are
>>>> double unlocking here.
>>>
>>> Really, no.  The pattern in the code indicates the lock is expected 
>>> to be held.  This can be wrong (sometimes code moves or people
>>> forget), but if it is wrong we'll get an assert about unlock of an 
>>> already unlocked lock.  If there's no assert, the lock is held on 
>>> entry and the code is correct.
>>>
>>> You're proposing patches based on misunderstandings of the code 
>>> which aren't backed up by actual issues and wasting everyone's time 
>>> to look at them.  Please begin with the hard evidence of a problem 
>>> first, so post the lockdep assert in the changelog so we know 
>>> there's a real problem.
>>>
>>> James
>>>
>> Certainly James. I think I just got carried away with the last few 
>> patches :(.
> 
> Is this Nick Krause?  An email reply that Martin forwarded but the list
> didn't pick up (because it had a html part) suggests this.  What you're
> doing is what got you banned from LKML the last time: sending patches
> without evidence there's a problem or understanding the code you're
> patching.  Repeating the behaviour under a new identity isn't going to
> help improve your standing.
> 
> James
> 
No, I am not Nick Krause. I am just aware of how he got banned a few years ago.
That email was sent by mistake due to a typo, and I was hoping nobody picked it
up, as they would then believe I was Nick Krause.
Bastien


Re: [PATCH] btrfs:Change BUG_ON to new error path in __clear_extent_bit

2016-04-06 Thread Bastien Philbert


On 2016-04-06 12:10 PM, Filipe Manana wrote:
> On Wed, Apr 6, 2016 at 4:56 PM, Bastien Philbert
>  wrote:
>> This removes the unnecessary BUG_ON when the allocation with
>> alloc_extent_state_atomic fails, since a failure of this function
>> is not unrecoverable. Instead we now change this BUG_ON into a new
>> error path that jumps to the goto label out, freeing previously
>> allocated resources before returning the error code -ENOMEM to
>> signal callers that the call to __clear_extent_bit failed due to a
>> memory allocation failure.
>>
>> Signed-off-by: Bastien Philbert 
>> ---
>>  fs/btrfs/extent_io.c | 5 ++++-
>>  1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
>> index d247fc0..4c87b77 100644
>> --- a/fs/btrfs/extent_io.c
>> +++ b/fs/btrfs/extent_io.c
>> @@ -682,7 +682,10 @@ hit_next:
>>
>> if (state->start < start) {
>> prealloc = alloc_extent_state_atomic(prealloc);
>> -   BUG_ON(!prealloc);
>> +   if (!prealloc) {
>> +   err = -ENOMEM;
>> +   goto out;
>> +   }
> 
> Why only in this particular place? We do this in other places in this 
> function.
> 
> Second, setting err to -ENOMEM is not enough. Under the out label we
> always return 0, so you're not propagating the error to callers.
> 
> Now, most importantly, you didn't check if callers handle errors from
> this function (__clear_extent_bit()) at all. A failure in this
> function is critical.
> For example, it can cause a range in an inode's io tree to become
> locked forever, blocking any other tasks that want to operate on the
> range, and we won't ever know what happened.
> So it's far from trivial to handle errors from this function and
> that's why the BUG_ON is there.
> 
> If you really want to get rid of the BUG_ON() calls you need to make
> sure all callers don't ignore the errors and that they deal with them
> properly.
Sorry, I feel really stupid about missing those other callers. :(
Bastien
> 
> 
>> err = split_state(tree, state, prealloc, start);
>> if (err)
>> extent_io_tree_panic(tree, err);
>> --
>> 2.5.0
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
>> the body of a message to majord...@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 
> 
> 


[PATCH] btrfs:Change BUG_ON to new error path in __clear_extent_bit

2016-04-06 Thread Bastien Philbert
This removes the unnecessary BUG_ON when the allocation with
alloc_extent_state_atomic fails, since a failure of this function
is not unrecoverable. Instead we now change this BUG_ON into a new
error path that jumps to the goto label out, freeing previously
allocated resources before returning the error code -ENOMEM to
signal callers that the call to __clear_extent_bit failed due to a
memory allocation failure.

Signed-off-by: Bastien Philbert 
---
 fs/btrfs/extent_io.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index d247fc0..4c87b77 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -682,7 +682,10 @@ hit_next:
 
if (state->start < start) {
prealloc = alloc_extent_state_atomic(prealloc);
-   BUG_ON(!prealloc);
+   if (!prealloc) {
+   err = -ENOMEM;
+   goto out;
+   }
err = split_state(tree, state, prealloc, start);
if (err)
extent_io_tree_panic(tree, err);
-- 
2.5.0



Re: [PATCH] csiostor: Fix backwards locking in the function __csio_unreg_rnode

2016-04-06 Thread Bastien Philbert


On 2016-04-06 10:24 AM, James Bottomley wrote:
> On Wed, 2016-04-06 at 10:11 -0400, Bastien Philbert wrote:
>>
>> On 2016-04-06 09:38 AM, James Bottomley wrote:
>>> On Wed, 2016-04-06 at 09:21 -0400, Bastien Philbert wrote:
>>>>
>>>> On 2016-04-06 03:48 AM, Julian Calaby wrote:
>>>>> Hi Bastien,
>>>>>
>>>>> On Wed, Apr 6, 2016 at 7:19 AM, Bastien Philbert
>>>>>  wrote:
>>>>>> This fixes backwards locking in the function
>>>>>> __csio_unreg_rnode
>>>>>> to
>>>>>> properly lock before the call to the function
>>>>>> csio_unreg_rnode
>>>>>> and
>>>>>> not unlock with spin_unlock_irq as this would not allow the
>>>>>> proper
>>>>>> protection for concurrent access on the shared csio_hw
>>>>>> structure
>>>>>> pointer hw. In addition switch the locking after the critical
>>>>>> region
>>>>>> function call to properly unlock instead with spin_unlock_irq
>>>>>> on
>>>>>>
>>>>>> Signed-off-by: Bastien Philbert 
>>>>>> ---
>>>>>>  drivers/scsi/csiostor/csio_rnode.c | 4 ++--
>>>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/scsi/csiostor/csio_rnode.c
>>>>>> b/drivers/scsi/csiostor/csio_rnode.c
>>>>>> index e9c3b04..029a09e 100644
>>>>>> --- a/drivers/scsi/csiostor/csio_rnode.c
>>>>>> +++ b/drivers/scsi/csiostor/csio_rnode.c
>>>>>> @@ -580,9 +580,9 @@ __csio_unreg_rnode(struct csio_rnode *rn)
>>>>>> ln->last_scan_ntgts--;
>>>>>> }
>>>>>>
>>>>>> -   spin_unlock_irq(&hw->lock);
>>>>>> -   csio_unreg_rnode(rn);
>>>>>> spin_lock_irq(&hw->lock);
>>>>>> +   csio_unreg_rnode(rn);
>>>>>> +   spin_unlock_irq(&hw->lock);
>>>>>
>>>>> Are you _certain_ this is correct? This construct usually
>>>>> appears
>>>>> when
>>>>> a function has a particular lock held, then needs to unlock it
>>>>> to
>>>>> call
>>>>> some other function. Are you _certain_ that this isn't the
>>>>> case?
>>>>>
>>>>> Thanks,
>>>>>
>>>> Yes I am pretty certain this is correct. I checked the paths that
>>>> called this function
>>>> and it was weird that none of them grabbed the spinlock
>>>> beforehand.
>>>
>>> That's not good enough.  If your theory is correct, lockdep should
>>> be
>>> dropping an already unlocked assertion in this codepath ... do you
>>> see
>>> this?
>>>
>>> James
>>>
>>>
>> Yes I do.
> 
> You mean you don't see the lockdep assert, since you're asking to drop
> the patch?
> 
>>  For now just drop the patch but I am still concerned that we are
>> double unlocking here.
> 
> Really, no.  The pattern in the code indicates the lock is expected to
> be held.  This can be wrong (sometimes code moves or people forget),
> but if it is wrong we'll get an assert about unlock of an already
> unlocked lock.  If there's no assert, the lock is held on entry and the
> code is correct.
> 
> You're proposing patches based on misunderstandings of the code which
> aren't backed up by actual issues and wasting everyone's time to look
> at them.  Please begin with the hard evidence of a problem first, so
> post the lockdep assert in the changelog so we know there's a real
> problem.
> 
> James
> 
Certainly James. I think I just got carried away with the last few patches :(.
Bastien


Re: [PATCH] csiostor: Fix backwards locking in the function __csio_unreg_rnode

2016-04-06 Thread Bastien Philbert


On 2016-04-06 09:38 AM, James Bottomley wrote:
> On Wed, 2016-04-06 at 09:21 -0400, Bastien Philbert wrote:
>>
>> On 2016-04-06 03:48 AM, Julian Calaby wrote:
>>> Hi Bastien,
>>>
>>> On Wed, Apr 6, 2016 at 7:19 AM, Bastien Philbert
>>>  wrote:
>>>> This fixes backwards locking in the function __csio_unreg_rnode
>>>> to
>>>> properly lock before the call to the function csio_unreg_rnode
>>>> and
>>>> not unlock with spin_unlock_irq as this would not allow the
>>>> proper
>>>> protection for concurrent access on the shared csio_hw structure
>>>> pointer hw. In addition switch the locking after the critical
>>>> region
>>>> function call to properly unlock instead with spin_unlock_irq on
>>>>
>>>> Signed-off-by: Bastien Philbert 
>>>> ---
>>>>  drivers/scsi/csiostor/csio_rnode.c | 4 ++--
>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/drivers/scsi/csiostor/csio_rnode.c
>>>> b/drivers/scsi/csiostor/csio_rnode.c
>>>> index e9c3b04..029a09e 100644
>>>> --- a/drivers/scsi/csiostor/csio_rnode.c
>>>> +++ b/drivers/scsi/csiostor/csio_rnode.c
>>>> @@ -580,9 +580,9 @@ __csio_unreg_rnode(struct csio_rnode *rn)
>>>> ln->last_scan_ntgts--;
>>>> }
>>>>
>>>> -   spin_unlock_irq(&hw->lock);
>>>> -   csio_unreg_rnode(rn);
>>>> spin_lock_irq(&hw->lock);
>>>> +   csio_unreg_rnode(rn);
>>>> +   spin_unlock_irq(&hw->lock);
>>>
>>> Are you _certain_ this is correct? This construct usually appears
>>> when
>>> a function has a particular lock held, then needs to unlock it to
>>> call
>>> some other function. Are you _certain_ that this isn't the case?
>>>
>>> Thanks,
>>>
>> Yes I am pretty certain this is correct. I checked the paths that
>> called this function
>> and it was weird that none of them grabbed the spinlock beforehand.
> 
> That's not good enough.  If your theory is correct, lockdep should be
> dropping an already unlocked assertion in this codepath ... do you see
> this?
> 
> James
> 
> 
Yes I do. For now just drop the patch but I am still concerned that we are 
double unlocking here.
Bastien
> 
> 


Re: [PATCH] csiostor: Fix backwards locking in the function __csio_unreg_rnode

2016-04-06 Thread Bastien Philbert


On 2016-04-06 03:48 AM, Julian Calaby wrote:
> Hi Bastien,
> 
> On Wed, Apr 6, 2016 at 7:19 AM, Bastien Philbert
>  wrote:
>> This fixes backwards locking in the function __csio_unreg_rnode to
>> properly lock before the call to the function csio_unreg_rnode and
>> not unlock with spin_unlock_irq as this would not allow the proper
>> protection for concurrent access on the shared csio_hw structure
>> pointer hw. In addition switch the locking after the critical region
>> function call to properly unlock instead with spin_unlock_irq on
>>
>> Signed-off-by: Bastien Philbert 
>> ---
>>  drivers/scsi/csiostor/csio_rnode.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/scsi/csiostor/csio_rnode.c 
>> b/drivers/scsi/csiostor/csio_rnode.c
>> index e9c3b04..029a09e 100644
>> --- a/drivers/scsi/csiostor/csio_rnode.c
>> +++ b/drivers/scsi/csiostor/csio_rnode.c
>> @@ -580,9 +580,9 @@ __csio_unreg_rnode(struct csio_rnode *rn)
>> ln->last_scan_ntgts--;
>> }
>>
>> -   spin_unlock_irq(&hw->lock);
>> -   csio_unreg_rnode(rn);
>> spin_lock_irq(&hw->lock);
>> +   csio_unreg_rnode(rn);
>> +   spin_unlock_irq(&hw->lock);
> 
> Are you _certain_ this is correct? This construct usually appears when
> a function has a particular lock held, then needs to unlock it to call
> some other function. Are you _certain_ that this isn't the case?
> 
> Thanks,
> 
Yes I am pretty certain this is correct. I checked the paths that called this
function and it was weird that none of them grabbed the spinlock beforehand.
Cheers,
Bastien


[PATCH] mpt3sas:Make sure mpt3sas adapter has been reset in scsih_resume

2016-04-05 Thread Bastien Philbert
This fixes the issue in the function scsih_resume where we assume
that the mpt3sas adapter has been reset properly by the call to
mpt3sas_base_hard_reset_handler, by checking whether this function
call returns an error code and exiting adapter resume prematurely.

Signed-off-by: Bastien Philbert 
---
 drivers/scsi/mpt3sas/mpt3sas_scsih.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c 
b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
index e0e4920..be2a7c3 100644
--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
@@ -8892,7 +8892,11 @@ scsih_resume(struct pci_dev *pdev)
if (r)
return r;
 
-   mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, SOFT_RESET);
+   r = mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, SOFT_RESET);
+   if (r) {
+   mpt3sas_base_free_resources(ioc);
+   return r;
+   }
scsi_unblock_requests(shost);
mpt3sas_base_start_watchdog(ioc);
return 0;
-- 
2.5.0



Re: [PATCH] sctp: Fix error handling for switch statement case in the function sctp_cmd_interprete

2016-04-05 Thread Bastien Philbert


On 2016-04-05 07:29 PM, David Miller wrote:
> From: Daniel Borkmann 
> Date: Tue, 05 Apr 2016 23:53:52 +0200
> 
>> On 04/05/2016 11:36 PM, Bastien Philbert wrote:
>>> This fixes error handling for the switch statement case
>>> SCTP_CMD_SEND_PKT by making the error value of the call
>>> to sctp_packet_transmit equal the variable error due to
>>> this function being able to fail with an error code. In
>>
>> What actual issue have you observed that you fix?
>>
>>> addition allow the call to sctp_ootb_pkt_free afterwards
>>> to free up the no longer in use sctp packet even if the
>>> call to the function sctp_packet_transmit fails in order
>>> to avoid a memory leak here for not freeing the sctp
>>
>> Not sure how this relates to your code?
> 
> Bastien, I'm seeing a clear negative pattern with the bug fixes
> you are submitting.
> 
> Just now you submitted the ICMP change which obviously was never
> tested because it tried to take the RTNL mutex in atomic context,
> and now this sctp thing.
> 
> If you don't start actually testing your changes and expalining
> clearly what the problem actually is, how you discovered it,
> and how you actually tested your patch, I will start completely
> ignoring your patch submissions.
> 
Ok sure I will be more careful with my future patches. Sorry about those 
two patches :(.
Bastien


Re: [PATCH] sctp: Fix error handling for switch statement case in the function sctp_cmd_interprete

2016-04-05 Thread Bastien Philbert


On 2016-04-05 06:12 PM, Marcelo Ricardo Leitner wrote:
> On Tue, Apr 05, 2016 at 05:36:41PM -0400, Bastien Philbert wrote:
>> This fixes error handling for the switch statement case
>> SCTP_CMD_SEND_PKT by making the error value of the call
>> to sctp_packet_transmit equal the variable error due to
>> this function being able to fail with an error code. In
>> addition allow the call to sctp_ootb_pkt_free afterwards
>> to free up the no longer in use sctp packet even if the
>> call to the function sctp_packet_transmit fails in order
>> to avoid a memory leak here for not freeing the sctp
> 
> This leak shouldn't exist as sctp_packet_transmit() will free the packet
> if it returns ENOMEM, through the nomem: handling.
> 
> But about making it visible to the user, that looks interesting to me
> although I cannot foresee yet its effects, like the comment at the end
> of sctp_packet_transmit() on not returning EHOSTUNREACH. Did you check
> it?
> 
I was aware of the -EHOSTUNREACH issue but assumed that this only needs to be
known to functions internal to the kernel. To rephrase: does it matter whether
the callers of this function know or care if sctp_packet_transmit fails, or is
this just unnecessary because we do cleanup elsewhere, which is enough so the
new error check is not needed? Again, if there is a certain test you would like
me to run on this patch to make sure it's OK, I don't mind, just let me know :).
Cheers,
Bastien
>>
>> Signed-off-by: Bastien Philbert 
>> ---
>>  net/sctp/sm_sideeffect.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/net/sctp/sm_sideeffect.c b/net/sctp/sm_sideeffect.c
>> index 7fe56d0..f3a8b58 100644
>> --- a/net/sctp/sm_sideeffect.c
>> +++ b/net/sctp/sm_sideeffect.c
>> @@ -1434,7 +1434,7 @@ static int sctp_cmd_interpreter(sctp_event_t 
>> event_type,
>>  case SCTP_CMD_SEND_PKT:
>>  /* Send a full packet to our peer.  */
>>  packet = cmd->obj.packet;
>> -sctp_packet_transmit(packet, gfp);
>> +error = sctp_packet_transmit(packet, gfp);
>>  sctp_ootb_pkt_free(packet);
>>  break;
>>  
>> -- 
>> 2.5.0
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-sctp" in
>> the body of a message to majord...@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>


Re: [PATCH] sctp: Fix error handling for switch statement case in the function sctp_cmd_interprete

2016-04-05 Thread Bastien Philbert


On 2016-04-05 05:53 PM, Daniel Borkmann wrote:
> On 04/05/2016 11:36 PM, Bastien Philbert wrote:
>> This fixes error handling for the switch statement case
>> SCTP_CMD_SEND_PKT by making the error value of the call
>> to sctp_packet_transmit equal the variable error due to
>> this function being able to fail with an error code. In
> 
> What actual issue have you observed that you fix?
> 
The issue here is basically that sctp_packet_transmit
can return an error if it fails to transmit the sk_buff
passed as a parameter. It seems that we should signal the
user/caller(s) when an sctp packet transmission fails here.
If you would like, I can resend with a better commit message
in a v2 if this explains the issue better.
Bastien
>> addition allow the call to sctp_ootb_pkt_free afterwards
>> to free up the no longer in use sctp packet even if the
>> call to the function sctp_packet_transmit fails in order
>> to avoid a memory leak here for not freeing the sctp
> 
> Not sure how this relates to your code?
> 
>> Signed-off-by: Bastien Philbert 
>> ---
>>   net/sctp/sm_sideeffect.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/net/sctp/sm_sideeffect.c b/net/sctp/sm_sideeffect.c
>> index 7fe56d0..f3a8b58 100644
>> --- a/net/sctp/sm_sideeffect.c
>> +++ b/net/sctp/sm_sideeffect.c
>> @@ -1434,7 +1434,7 @@ static int sctp_cmd_interpreter(sctp_event_t 
>> event_type,
>>   case SCTP_CMD_SEND_PKT:
>>   /* Send a full packet to our peer.  */
>>   packet = cmd->obj.packet;
>> -sctp_packet_transmit(packet, gfp);
>> +error = sctp_packet_transmit(packet, gfp);
>>   sctp_ootb_pkt_free(packet);
>>   break;
>>
>>
> 


[PATCH] sctp: Fix error handling for switch statement case in the function sctp_cmd_interprete

2016-04-05 Thread Bastien Philbert
This fixes error handling for the switch statement case
SCTP_CMD_SEND_PKT by assigning the return value of the call to
sctp_packet_transmit to the variable error, since this function
can fail with an error code. In addition, allow the call to
sctp_ootb_pkt_free afterwards to free up the no-longer-in-use
sctp packet even if the call to sctp_packet_transmit fails, in
order to avoid a memory leak here for not freeing the sctp

Signed-off-by: Bastien Philbert 
---
 net/sctp/sm_sideeffect.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/sctp/sm_sideeffect.c b/net/sctp/sm_sideeffect.c
index 7fe56d0..f3a8b58 100644
--- a/net/sctp/sm_sideeffect.c
+++ b/net/sctp/sm_sideeffect.c
@@ -1434,7 +1434,7 @@ static int sctp_cmd_interpreter(sctp_event_t event_type,
case SCTP_CMD_SEND_PKT:
/* Send a full packet to our peer.  */
packet = cmd->obj.packet;
-   sctp_packet_transmit(packet, gfp);
+   error = sctp_packet_transmit(packet, gfp);
sctp_ootb_pkt_free(packet);
break;
 
-- 
2.5.0



[PATCH] ipv6: icmp: Add protection from concurrent users in the function icmpv6_echo_reply

2016-04-05 Thread Bastien Philbert
This adds protection from concurrent users in the function
icmpv6_echo_reply around the call to the function __in6_dev_get,
by wrapping this call with calls to rtnl_lock and rtnl_unlock to
protect against concurrent users, as stated in the comments on
the locking requirements for __in6_dev_get.

Signed-off-by: Bastien Philbert 
---
 net/ipv6/icmp.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
index 0a37ddc..798434f 100644
--- a/net/ipv6/icmp.c
+++ b/net/ipv6/icmp.c
@@ -607,7 +607,9 @@ static void icmpv6_echo_reply(struct sk_buff *skb)
 
hlimit = ip6_sk_dst_hoplimit(np, &fl6, dst);
 
+   rtnl_lock();
idev = __in6_dev_get(skb->dev);
+   rtnl_unlock();
 
msg.skb = skb;
msg.offset = 0;
-- 
2.5.0



[PATCH] csiostor: Fix backwards locking in the function __csio_unreg_rnode

2016-04-05 Thread Bastien Philbert
This fixes backwards locking in the function __csio_unreg_rnode to
properly lock before the call to the function csio_unreg_rnode and
not unlock with spin_unlock_irq as this would not allow the proper
protection for concurrent access on the shared csio_hw structure
pointer hw. In addition switch the locking after the critical region
function call to properly unlock instead with spin_unlock_irq on

Signed-off-by: Bastien Philbert 
---
 drivers/scsi/csiostor/csio_rnode.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/csiostor/csio_rnode.c 
b/drivers/scsi/csiostor/csio_rnode.c
index e9c3b04..029a09e 100644
--- a/drivers/scsi/csiostor/csio_rnode.c
+++ b/drivers/scsi/csiostor/csio_rnode.c
@@ -580,9 +580,9 @@ __csio_unreg_rnode(struct csio_rnode *rn)
ln->last_scan_ntgts--;
}
 
-   spin_unlock_irq(&hw->lock);
-   csio_unreg_rnode(rn);
spin_lock_irq(&hw->lock);
+   csio_unreg_rnode(rn);
+   spin_unlock_irq(&hw->lock);
 
/* Cleanup I/Os that were waiting for rnode to unregister */
if (cmpl)
-- 
2.5.0



Re: [PATCH RFC] select_idle_sibling experiments

2016-04-05 Thread Bastien Philbert


On 2016-04-05 04:03 PM, Matt Fleming wrote:
> On Tue, 05 Apr, at 02:08:22PM, Chris Mason wrote:
>>
>> I started with a small-ish program to benchmark wakeup latencies.  The
>> basic idea is a bunch of worker threads who sit around and burn CPU.
>> Every once and a while they send a message to a message thread.
>  
> This reminds me of something I've been looking at recently; a similar
> workload in Mel's mmtests based on pgbench with 1-client that also has
> this problem of cpu_idle() being false at an inconvenient time in
> select_idle_sibling(), so we move the task off the cpu and the cpu
> then immediately goes idle.
> 
> This leads to tasks bouncing around the socket as we search for idle
> cpus.
> 
>> It has knobs for cpu think time, and for how long the messenger thread
>> waits before replying.  Here's how I'm running it with my patch:
>  
> [...]
> 
> Cool, I'll go have a play with this.
> 
>> Now, on to the patch.  I pushed some code around and narrowed the
>> problem down to select_idle_sibling()   We have cores going into and out
>> of idle fast enough that even this cut our latencies in half:
>>
>> static int select_idle_sibling(struct task_struct *p, int target)
>> goto next;
>>  
>> for_each_cpu(i, sched_group_cpus(sg)) {
>> -   if (i == target || !idle_cpu(i))
>> +   if (!idle_cpu(i))
>> goto next;
>> }
>>  
>> IOW, by the time we get down to for_each_cpu(), the idle_cpu() check
>> done at the top of the function is no longer valid.
>  
> Yeah. The problem is that because we're racing with the cpu going in
> and out of idle, and since you're exploiting that race condition, this
> is highly tuned to your specific workload.
> 
> Which is a roundabout way of saying, this is probably going to
> negatively impact other workloads.
> 
>> I tried a few variations on select_idle_sibling() that preserved the
>> underlying goal of returning idle cores before idle SMT threads.  They
>> were all horrible in different ways, and none of them were fast.
>  
> I toyed with ignoring cpu_idle() in select_idle_sibling() for my
> workload. That actually was faster ;)
> 
>> The patch below just makes select_idle_sibling pick the first idle
>> thread it can find.  When I ran it through production workloads here, it
>> was faster than the patch we've been carrying around for the last few
>> years.
> 
> It would be really nice if we had a lightweight way to gauge the
> "idleness" of a cpu, and whether we expect it to be idle again soon.
> 
The best way to do this is to embed it in an already-used structure so we can
check it quickly; otherwise, I am curious whether writing a macro may prove
useful for this. It seems that idleness checking needs to be accounted for
when scheduling. To make this lightweight enough to avoid doing it during a
context switch, the challenge is to make the reference counting cheap enough
that it can be done in the current scheduling functions.
> Failing that, could we just force the task onto 'target' when it makes
> sense and skip the idle search (and the race) altogether?
> 
Doesn't this possibly cause a context switch, or even an expensive
move-to-another-CPU operation on certain architectures? It seems we need to
add reference counting or tracking of idle CPUs somewhere.
Bastien 


Re: [PATCH RFC] select_idle_sibling experiments

2016-04-05 Thread Bastien Bastien Philbert
On Tue, Apr 5, 2016 at 2:08 PM, Chris Mason  wrote:
> Hi everyone,
>
> We're porting the fb kernel up to 4.5, and one of our last few out-of-tree
> patches is a hack to try harder to find idle cpus when waking up tasks.
> This helps in pretty much every workload we run, mostly because they all
> get tuned with a similar setup:
>
> 1) find the load where latencies stop being acceptable
> 2) Run the server at just a little less than that
>
> Usually this means our CPUs are just a little bit idle, and a poor
> scheduler decision to place a task on a busy CPU instead of an idle CPU
> ends up impacting our p99 latencies.
>
> Mike helped us with this last year, fixing up wake_wide() to improve
> things.  But we still ended up having to go back to the old hack.
>
> I started with a small-ish program to benchmark wakeup latencies.  The
> basic idea is a bunch of worker threads who sit around and burn CPU.
> Every once and a while they send a message to a message thread.
>
> The message thread records the time he woke up the worker, and the
> worker records the delta between that time and the time he actually got
> the CPU again.  At the end it spits out a latency histogram.  The only
> thing we record is the wakeup latency, there's no measurement of 'work
> done' or any of the normal things you'd expect in a benchmark.
>
> It has knobs for cpu think time, and for how long the messenger thread
> waits before replying.  Here's how I'm running it with my patch:
>
> ./schbench -c 3 -s 3 -m 6 -t 24 -r 30
> Latency percentiles (usec)
> 50.th: 50
> 75.th: 62
> 90.th: 73
> 95.th: 79
> *99.th: 99
> 99.5000th: 761
> 99.9000th: 10160
> Over=0, min=0, max=14659
>
> This translates to cputime of 30ms, sleep time of 30ms, 6 messenger
> threads, 24 workers per messenger and a run time of 30 seconds.  My box
> has two sockets, 24 cores each.  Mainline varies a bit, but numbers like
> this are typical:
>
>  ./schbench -c 3 -s 3 -m 6 -t 24 -r 30
> Latency percentiles (usec)
> 50.th: 50
> 75.th: 63
> 90.th: 76
> 95.th: 85
> *99.th: 4680
> 99.5000th: 10192
> 99.9000th: 10928
> Over=0, min=0, max=21816
>
> A high p99 in real application performance will block a new kernel for
> us.  p99.5 and p99.9 are included just to show how long the tail really
> is.
>
> I've inlined schbench.c below and attached as a .gz file just in case
> exchange manages to munge it.
>
> Now, on to the patch.  I pushed some code around and narrowed the
> problem down to select_idle_sibling()   We have cores going into and out
> of idle fast enough that even this cut our latencies in half:
>
> static int select_idle_sibling(struct task_struct *p, int target)
> goto next;
>
> for_each_cpu(i, sched_group_cpus(sg)) {
> -   if (i == target || !idle_cpu(i))
> +   if (!idle_cpu(i))
> goto next;
> }
>
> IOW, by the time we get down to for_each_cpu(), the idle_cpu() check
> done at the top of the function is no longer valid.
>
> I tried a few variations on select_idle_sibling() that preserved the
> underlying goal of returning idle cores before idle SMT threads.  They
> were all horrible in different ways, and none of them were fast.
>
> The patch below just makes select_idle_sibling pick the first idle
> thread it can find.  When I ran it through production workloads here, it
> was faster than the patch we've been carrying around for the last few
> years.
>
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 56b7d4b..c41baa6 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4974,7 +4974,6 @@ find_idlest_cpu(struct sched_group *group, struct 
> task_struct *p, int this_cpu)
>  static int select_idle_sibling(struct task_struct *p, int target)
>  {
> struct sched_domain *sd;
> -   struct sched_group *sg;
> int i = task_cpu(p);
>
> if (idle_cpu(target))
> @@ -4990,24 +4989,14 @@ static int select_idle_sibling(struct task_struct *p, 
> int target)
>  * Otherwise, iterate the domains and find an elegible idle cpu.
>  */
> sd = rcu_dereference(per_cpu(sd_llc, target));
> -   for_each_lower_domain(sd) {
> -   sg = sd->groups;
> -   do {
> -   if (!cpumask_intersects(sched_group_cpus(sg),
> -   tsk_cpus_allowed(p)))
> -   goto next;
> -
> -   for_each_cpu(i, sched_group_cpus(sg)) {
> -   if (i == target || !idle_cpu(i))
> -   goto next;
> -   }
> +   if (!sd)
> +   goto done;
>
> -   

Re: [PATCH] bnx2fc: Fix locking requirements in bnx2fc_init_tgt

2016-04-05 Thread Bastien Bastien Philbert
On Tue, Apr 5, 2016 at 1:43 PM, Chad Dupuis  wrote:
>
>
> On Tue, 5 Apr 2016, Bastien Philbert wrote:
>
>> This fixes the locking around the call to bnx2fc_alloc_id to comply
>> with the comments about this particular function's definition about
>> requiring the need to hold the hba mutex before and after calling
>> it.
>> Signed-off-by: Bastien Philbert 
>> ---
>> drivers/scsi/bnx2fc/bnx2fc_tgt.c | 6 +-
>> 1 file changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/scsi/bnx2fc/bnx2fc_tgt.c
>> b/drivers/scsi/bnx2fc/bnx2fc_tgt.c
>> index 08ec318..f2988cd 100644
>> --- a/drivers/scsi/bnx2fc/bnx2fc_tgt.c
>> +++ b/drivers/scsi/bnx2fc/bnx2fc_tgt.c
>> @@ -368,9 +368,13 @@ static int bnx2fc_init_tgt(struct bnx2fc_rport *tgt,
>> return -1;
>> }
>>
>> +   mutex_lock(&hba->hba_mutex);
>> tgt->fcoe_conn_id = bnx2fc_alloc_conn_id(hba, tgt);
>> -   if (tgt->fcoe_conn_id == -1)
>> +   if (tgt->fcoe_conn_id == -1) {
>> +   mutex_unlock(&hba->hba_mutex);
>> return -1;
>> +   }
>> +   mutex_unlock(&hba->hba_mutex);
>>
>> BNX2FC_TGT_DBG(tgt, "init_tgt - conn_id = 0x%x\n",
>> tgt->fcoe_conn_id);
>>
>>
>
> Taking the mutex here is not needed as it is already taken in
> bnx2fc_rport_event_handler() in the call chain
> bnx2fc_rport_event_handler->bnx2fc_offload_session->bnx2fc_init_tgt->bnx2fc_alloc_conn_id
Ok, thanks, good to know.
Bastien


[PATCH] bnx2fc: Fix locking requirements in bnx2fc_init_tgt

2016-04-05 Thread Bastien Philbert
This fixes the locking around the call to bnx2fc_alloc_conn_id to comply
with the comments on that function's definition, which state that the
hba mutex must be held before and after calling it.
Signed-off-by: Bastien Philbert 
---
 drivers/scsi/bnx2fc/bnx2fc_tgt.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/bnx2fc/bnx2fc_tgt.c b/drivers/scsi/bnx2fc/bnx2fc_tgt.c
index 08ec318..f2988cd 100644
--- a/drivers/scsi/bnx2fc/bnx2fc_tgt.c
+++ b/drivers/scsi/bnx2fc/bnx2fc_tgt.c
@@ -368,9 +368,13 @@ static int bnx2fc_init_tgt(struct bnx2fc_rport *tgt,
return -1;
}
 
+   mutex_lock(&hba->hba_mutex);
tgt->fcoe_conn_id = bnx2fc_alloc_conn_id(hba, tgt);
-   if (tgt->fcoe_conn_id == -1)
+   if (tgt->fcoe_conn_id == -1) {
+   mutex_unlock(&hba->hba_mutex);
return -1;
+   }
+   mutex_unlock(&hba->hba_mutex);
 
BNX2FC_TGT_DBG(tgt, "init_tgt - conn_id = 0x%x\n", tgt->fcoe_conn_id);
 
-- 
2.5.0



[PATCH] ipsec: Fix error handling in the function xfrm6_get_addr

2016-04-05 Thread Bastien Philbert
This fixes the error handling in the function xfrm6_get_saddr by
checking whether the call to ipv6_dev_get_saddr has failed and, if so,
returning -EADDRNOTAVAIL so the caller can handle the error, while also
releasing the dst_entry structure pointer in use by this function.

Signed-off-by: Bastien Philbert 
---
 net/ipv6/xfrm6_policy.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/net/ipv6/xfrm6_policy.c b/net/ipv6/xfrm6_policy.c
index c074771..759473e 100644
--- a/net/ipv6/xfrm6_policy.c
+++ b/net/ipv6/xfrm6_policy.c
@@ -64,7 +64,10 @@ static int xfrm6_get_saddr(struct net *net, int oif,
return -EHOSTUNREACH;
 
dev = ip6_dst_idev(dst)->dev;
-   ipv6_dev_get_saddr(dev_net(dev), dev, &daddr->in6, 0, &saddr->in6);
+   if (ipv6_dev_get_saddr(dev_net(dev), dev, &daddr->in6, 0, &saddr->in6)) 
{
+   dst_release(dst);
+   return -EADDRNOTAVAIL;
+   }
dst_release(dst);
return 0;
 }
-- 
2.5.0



[PATCH 2/2] gma500: Add new error checking for the function, psb_driver_load

2016-04-04 Thread Bastien Philbert
This adds new error checking to the function psb_driver_load for when
the function psb_mmu_insert_pfn_sequence fails to grab memory for the
internal call to psb_pd_addr_end. We now check the error code returned
by the internal call to psb_mmu_insert_pfn_sequence in psb_driver_load.
If an error code is returned, we must jump to a new label, unlock_err,
in order to drop our read hold on the semaphore, sem, before unloading
the driver.
Signed-off-by: Bastien Philbert 
---
 drivers/gpu/drm/gma500/psb_drv.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/gma500/psb_drv.c b/drivers/gpu/drm/gma500/psb_drv.c
index 4e1c685..52651fb 100644
--- a/drivers/gpu/drm/gma500/psb_drv.c
+++ b/drivers/gpu/drm/gma500/psb_drv.c
@@ -343,7 +343,10 @@ static int psb_driver_load(struct drm_device *dev, 
unsigned long flags)
  dev_priv->stolen_base >> PAGE_SHIFT,
  pg->gatt_start,
  pg->stolen_size >> PAGE_SHIFT, 0);
+   if (ret)
+   goto unlock_err;
up_read(&pg->sem);
+
 
psb_mmu_set_pd_context(psb_mmu_get_default_pd(dev_priv->mmu), 0);
psb_mmu_set_pd_context(dev_priv->pf_pd, 1);
@@ -405,6 +408,8 @@ static int psb_driver_load(struct drm_device *dev, unsigned 
long flags)
 #endif
/* Intel drm driver load is done, continue doing pvr load */
return 0;
+unlock_err:
+   up_read(&pg->sem);
 out_err:
psb_driver_unload(dev);
return ret;
-- 
2.5.0



[PATCH 1/2] gma500: Add proper use of the variable ret for the function, psb_mmu_inset_pfn_sequence

2016-04-04 Thread Bastien Philbert
This adds proper use of the variable ret by returning it at the end of
the function psb_mmu_insert_pfn_sequence to indicate to callers when an
error has occurred. Furthermore, remove the unneeded double setting of
ret to the error code -ENOMEM after checking whether the call to the
function psb_mmu_pt_alloc_map_lock has failed.
Signed-off-by: Bastien Philbert 
---
 drivers/gpu/drm/gma500/mmu.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/gma500/mmu.c b/drivers/gpu/drm/gma500/mmu.c
index 0eaf11c..b832397 100644
--- a/drivers/gpu/drm/gma500/mmu.c
+++ b/drivers/gpu/drm/gma500/mmu.c
@@ -677,10 +677,9 @@ int psb_mmu_insert_pfn_sequence(struct psb_mmu_pd *pd, 
uint32_t start_pfn,
do {
next = psb_pd_addr_end(addr, end);
pt = psb_mmu_pt_alloc_map_lock(pd, addr);
-   if (!pt) {
-   ret = -ENOMEM;
+   if (!pt)
goto out;
-   }
+
do {
pte = psb_mmu_mask_pte(start_pfn++, type);
psb_mmu_set_pte(pt, addr, pte);
-- 
2.5.0



[PATCH] bluetooth: Fix locking issues in the function l2cap_connect_cfm

2016-04-04 Thread Bastien Philbert
This fixes a locking issue in the function l2cap_connect_cfm: the mutex
protecting the channel list on the l2cap_conn structure pointer conn was
not held before calling __l2cap_get_chan_by_dcid, even though all
callers need to lock and unlock this mutex around that call to avoid
races with concurrent users.

Signed-off-by: Bastien Philbert 
---
 net/bluetooth/l2cap_core.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
index eb4f5f2..2ab103e 100644
--- a/net/bluetooth/l2cap_core.c
+++ b/net/bluetooth/l2cap_core.c
@@ -7308,6 +7308,7 @@ static void l2cap_connect_cfm(struct hci_conn *hcon, u8 
status)
struct l2cap_chan *chan, *next;
 
/* Client fixed channels should override server ones */
+   mutex_lock(&conn->chan_lock);
if (__l2cap_get_chan_by_dcid(conn, pchan->scid))
goto next;
 
@@ -7324,6 +7325,7 @@ static void l2cap_connect_cfm(struct hci_conn *hcon, u8 
status)
 
l2cap_chan_unlock(pchan);
 next:
+   mutex_unlock(&conn->chan_lock);
next = l2cap_global_fixed_chan(pchan, hcon);
l2cap_chan_put(pchan);
pchan = next;
-- 
2.5.0



Re: [PATCH] bridge:Fix incorrect variable assignment on error path in br_sysfs_addbr

2016-04-04 Thread Bastien Philbert


On 2016-04-04 04:14 PM, David Miller wrote:
> From: Bastien Philbert 
> Date: Sun,  3 Apr 2016 19:04:26 -0400
> 
>> This fixes the incorrect variable assignment on the error path in
>> br_sysfs_addbr: when the call to kobject_create_and_add fails, assign
>> -EINVAL to the return variable err rather than incorrectly returning
>> zero. Callers would otherwise think the function succeeded, because
>> err still held the zero returned by the earlier successful call to
>> sysfs_create_group.
>>
>> Signed-off-by: Bastien Philbert 
> 
> Applied, but please put a space after the subsystem prefix and the
> colon character in your subject lines in the future.
> 
> Doesn't that really look odd to you, the way you did it? "net:Fix"?
> 
> Doesn't it look more natural, and consistent with what all other
> patch submitters do, if it's "net: Fix"?
> 
> Thanks.
> 
Done, sorry about that :(. Will remember for future patches.
Bastien


[PATCH] ext4:Remove unneeded function definition and prototype for ext4_ext_calc_metadata_amount

2016-04-03 Thread Bastien Philbert
This removes the unneeded function definition and prototype for
ext4_ext_calc_metadata_amount from ext4.h and extents.c, as there are
no remaining callers of this function and it can be removed without
issue, dropping unnecessary code from the ext4 codebase.

Signed-off-by: Bastien Philbert 
---
 fs/ext4/ext4.h|  2 --
 fs/ext4/extents.c | 47 ---
 2 files changed, 49 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index c047435..557cc35 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -3180,8 +3180,6 @@ extern int ext4_convert_unwritten_extents(handle_t 
*handle, struct inode *inode,
  loff_t offset, ssize_t len);
 extern int ext4_map_blocks(handle_t *handle, struct inode *inode,
   struct ext4_map_blocks *map, int flags);
-extern int ext4_ext_calc_metadata_amount(struct inode *inode,
-ext4_lblk_t lblocks);
 extern int ext4_ext_calc_credits_for_single_extent(struct inode *inode,
   int num,
   struct ext4_ext_path *path);
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index 95bf467..e06a09e 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -304,53 +304,6 @@ ext4_force_split_extent_at(handle_t *handle, struct inode 
*inode,
(nofail ? EXT4_GET_BLOCKS_METADATA_NOFAIL:0));
 }
 
-/*
- * Calculate the number of metadata blocks needed
- * to allocate @blocks
- * Worse case is one block per extent
- */
-int ext4_ext_calc_metadata_amount(struct inode *inode, ext4_lblk_t lblock)
-{
-   struct ext4_inode_info *ei = EXT4_I(inode);
-   int idxs;
-
-   idxs = ((inode->i_sb->s_blocksize - sizeof(struct ext4_extent_header))
-   / sizeof(struct ext4_extent_idx));
-
-   /*
-* If the new delayed allocation block is contiguous with the
-* previous da block, it can share index blocks with the
-* previous block, so we only need to allocate a new index
-* block every idxs leaf blocks.  At ldxs**2 blocks, we need
-* an additional index block, and at ldxs**3 blocks, yet
-* another index blocks.
-*/
-   if (ei->i_da_metadata_calc_len &&
-   ei->i_da_metadata_calc_last_lblock+1 == lblock) {
-   int num = 0;
-
-   if ((ei->i_da_metadata_calc_len % idxs) == 0)
-   num++;
-   if ((ei->i_da_metadata_calc_len % (idxs*idxs)) == 0)
-   num++;
-   if ((ei->i_da_metadata_calc_len % (idxs*idxs*idxs)) == 0) {
-   num++;
-   ei->i_da_metadata_calc_len = 0;
-   } else
-   ei->i_da_metadata_calc_len++;
-   ei->i_da_metadata_calc_last_lblock++;
-   return num;
-   }
-
-   /*
-* In the worst case we need a new set of index blocks at
-* every level of the inode's extent tree.
-*/
-   ei->i_da_metadata_calc_len = 1;
-   ei->i_da_metadata_calc_last_lblock = lblock;
-   return ext_depth(inode) + 1;
-}
-
 static int
 ext4_ext_max_entries(struct inode *inode, int depth)
 {
-- 
2.5.0



[PATCH] bridge:Fix incorrect variable assignment on error path in br_sysfs_addbr

2016-04-03 Thread Bastien Philbert
This fixes the incorrect variable assignment on the error path in
br_sysfs_addbr: when the call to kobject_create_and_add fails, assign
-EINVAL to the return variable err rather than incorrectly returning
zero. Callers would otherwise think the function succeeded, because err
still held the zero returned by the earlier successful call to
sysfs_create_group.

Signed-off-by: Bastien Philbert 
---
 net/bridge/br_sysfs_br.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/net/bridge/br_sysfs_br.c b/net/bridge/br_sysfs_br.c
index 6b80914..f4d40ed 100644
--- a/net/bridge/br_sysfs_br.c
+++ b/net/bridge/br_sysfs_br.c
@@ -870,6 +870,7 @@ int br_sysfs_addbr(struct net_device *dev)
 
br->ifobj = kobject_create_and_add(SYSFS_BRIDGE_PORT_SUBDIR, brobj);
if (!br->ifobj) {
+   err = -EINVAL;
pr_info("%s: can't add kobject (directory) %s/%s\n",
__func__, dev->name, SYSFS_BRIDGE_PORT_SUBDIR);
goto out3;
-- 
2.5.0