Yes, not destroying the borrowed object would be an option. I didn't want to go 
down that route as there are multiple reasons why a borrowed object is not 
valid anymore.
E.g. it could be 'consumed', it could be broken, it shouldn't be used because 
its maxUsageTime is over, etc. 
That's why I opted for calling create(). But of course, that has other 
potential issues :/
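
To make the trade-off concrete, here is a heavily simplified sketch of the 
shape of my workaround. This is NOT the actual GenericObjectPool code; 
SimplifiedPool and Factoryish are made-up stand-ins, and all locking, 
PooledObject wrappers and accounting are elided:

import java.util.concurrent.BlockingDeque;
import java.util.concurrent.LinkedBlockingDeque;

// Hypothetical model only. On return the instance is always destroyed --
// it may be consumed, broken or past its maxUsageTime -- and, if threads
// are waiting, a fresh instance is created so a waiter can proceed.
// Note how this effectively violates maxIdle = 0.
class SimplifiedPool<T> {

    interface Factoryish<X> {                // stand-in for PooledObjectFactory
        X create() throws Exception;
        void destroy(X obj) throws Exception;
    }

    private final BlockingDeque<T> idle = new LinkedBlockingDeque<>();
    private final Factoryish<T> factory;

    SimplifiedPool(Factoryish<T> factory) {
        this.factory = factory;
    }

    T borrow() throws Exception {
        T obj = idle.pollFirst();
        return obj != null ? obj : factory.create();
    }

    void returnObject(T obj, boolean hasTakeWaiters) throws Exception {
        factory.destroy(obj);                // never reuse the returning instance
        if (hasTakeWaiters) {
            idle.addFirst(factory.create()); // refill so waiters stay alive
        }
    }
}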

LieGrue,
strub


> On 28.11.2018 at 04:21, Phil Steitz <phil.ste...@gmail.com> wrote:
> 
> On 11/26/18 1:23 PM, Phil Steitz wrote:
>> On 11/26/18 8:29 AM, Phil Steitz wrote:
>>> On 11/26/18 6:19 AM, Mark Struberg wrote:
>>>> Hi Phil!
>>> 
>>> Let me start by repeating that other than trying to help diagnose bugs and 
>>> answer user questions, I don't really work on [pool] any more, so I don't 
>>> really have any standing here.  You are the RM, so it is completely up to 
>>> you and the active Commons committers to decide what to do with the code.  
>>> So fine to ignore my comments.
>>>> 
>>>> 
>>>>>   I think that the right answer here is WONT_FIX. That sounds harsh, but 
>>>>> it seems to me
>>>>> that the obvious solution here is for the user to set maxIdle to at least 
>>>>> 1.
>>>> I thought about that as well!
>>>> Setting maxIdle to 1 will make it less likely, but a deadlock will STILL 
>>>> happen.
>>> I am not sure this is right.  If maxIdle >0, the fix for POOL-240 (a 
>>> similar liveness issue) should kick in and work. Each time a validation or 
>>> passivation failure occurs on return, there is a call to ensureIdle that 
>>> will create a new instance and add it to the pool if there is capacity to 
>>> do so.
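
That matches my reading of the POOL-240 path. In sketch form it amounts to 
the following (hypothetical, heavily simplified class, not the real 
GenericObjectPool internals) -- note why it never fires with maxIdle set to 0:

import java.util.concurrent.BlockingDeque;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.function.Supplier;

// Sketch of the POOL-240 recovery path as I read it.
class Pool240Sketch<T> {

    private final BlockingDeque<T> idle = new LinkedBlockingDeque<>();
    private final int maxIdle;               // 0 means "keep nothing idle"
    private final Supplier<T> create;

    Pool240Sketch(int maxIdle, Supplier<T> create) {
        this.maxIdle = maxIdle;
        this.create = create;
    }

    // Called after a validation/passivation failure on return has
    // destroyed the returning instance.
    void onReturnFailure() {
        ensureIdle(1);
    }

    private void ensureIdle(int count) {
        // Refill up to 'count' idle instances, never exceeding maxIdle.
        // With maxIdle == 0 the loop body never runs -- the liveness gap.
        while (idle.size() < count && idle.size() < maxIdle) {
            idle.addFirst(create.get());
        }
    }
}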
>>>> So I fear we really need to tackle this. Stack Overflow and our own bug 
>>>> tracker are full of such reports :(
>>> 
>>> I see one additional actual report on GKOP (the other linked issue, which 
>>> should be similarly patched if the consensus is to do this for GOP). The 
>>> vast majority of the liveness reports that we have gotten over the years in 
>>> DBCP and pool are just the pool being exhausted due to failure to return 
>>> instances.
>>> 
>>> The workaround violating maxIdle will restore liveness, but is likely the 
>>> wrong solution here.  Again, up to you guys to judge. Note that when you 
>>> specify maxIdle equal to 1
>> Crap.  Meant 0 there.
>>> you are telling the pool to destroy all returning instances.  To use the 
>>> pool in this way is silly, IMO.  With your patch, new instances are still 
>>> created for *every* borrow.  The pool and all of its machinery are doing 
>>> nothing other than tracking the number of active instances and making sure 
>>> life cycle events are fired.  If you want the semantics of returning an 
>>> object while threads are waiting to be that the returning object is 
>>> passivated, activated and handed directly to a waiter without ever hitting 
>>> the idle instance pool, that would require rewriting a fair bit of the 
>>> borrow / return code.  It is an interesting idea and would solve POOL-240 
>>> as well.
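
That direct hand-off would look roughly like this (pure sketch with a 
hypothetical CompletableFuture-based waiter queue; the real borrow/return 
code is of course far more involved):

import java.util.Queue;
import java.util.concurrent.BlockingDeque;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.LinkedBlockingDeque;

// Hypothetical sketch, not commons-pool code.
class DirectHandoffSketch<T> {

    interface Factory<X> {
        void activate(X obj) throws Exception;
        void passivate(X obj) throws Exception;
        void destroy(X obj) throws Exception;
    }

    private final BlockingDeque<T> idle = new LinkedBlockingDeque<>();
    private final Queue<CompletableFuture<T>> waiters = new ConcurrentLinkedQueue<>();
    private final Factory<T> factory;
    private final int maxIdle;

    DirectHandoffSketch(Factory<T> factory, int maxIdle) {
        this.factory = factory;
        this.maxIdle = maxIdle;
    }

    void returnObject(T obj) throws Exception {
        factory.passivate(obj);
        CompletableFuture<T> waiter = waiters.poll();
        if (waiter != null) {
            factory.activate(obj);
            waiter.complete(obj);       // handed over, idle pool never touched
        } else if (idle.size() < maxIdle) {
            idle.addFirst(obj);         // normal return
        } else {
            factory.destroy(obj);       // maxIdle honored, nobody starved
        }
    }
}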
>>> 
>>> One final comment.  If you stick with something like what is in the code 
>>> now, you should make sure to passivate the new instance before putting it 
>>> into the pool.  I just noticed that ensureIdle does not do that, which I 
>>> think is a bug in that method.  So if you want to proceed with this fix, I 
>>> would recommend:
>>> 
>>> 1.  Move the ensureIdle calls added in POOL-240 into destroy itself.
>>> 2.  Add passivation to ensureIdle
>>> 3.  Implement corresponding workaround for GKOP
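
Points 1 and 2 would roughly amount to this (simplified sketch with 
hypothetical names, not the real code): destroy itself refills the idle 
side, and ensureIdle passivates the fresh instance before it becomes idle.

import java.util.concurrent.BlockingDeque;
import java.util.concurrent.LinkedBlockingDeque;

// Hypothetical sketch of recommendations 1 and 2.
class DestroyRefillSketch<T> {

    interface Factory<X> {
        X create() throws Exception;
        void passivate(X obj) throws Exception;
        void destroy(X obj) throws Exception;
    }

    private final BlockingDeque<T> idle = new LinkedBlockingDeque<>();
    private final Factory<T> factory;
    private final int maxIdle;

    DestroyRefillSketch(Factory<T> factory, int maxIdle) {
        this.factory = factory;
        this.maxIdle = maxIdle;
    }

    // (1) destroy itself refills the idle side, so every caller of
    //     destroy gets the POOL-240 recovery for free
    void destroy(T obj) throws Exception {
        factory.destroy(obj);
        ensureIdle(1);
    }

    // (2) the fresh instance is passivated before it becomes idle,
    //     addressing the bug noted above
    private void ensureIdle(int count) throws Exception {
        while (idle.size() < count && idle.size() < maxIdle) {
            T obj = factory.create();
            factory.passivate(obj);
            idle.addFirst(obj);
        }
    }
}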
> 
> Sorry, all.  Looking again and thinking a little more, I think the best fix 
> for this is actually to prevent the destroy on return in the first place if 
> there are take waiters.  The maxIdle invariant gets violated with the 
> ensureIdle or direct create() fix anyway and just allowing the returning 
> instance to go into the pool avoids the needless destroy / create sequence.  
> A simple way to do this is just to recode maxIdle before the destroy test in 
> returnObject.
> Instead of
> 
> final int maxIdleSave = getMaxIdle();
> 
> explicitly recode it:
> 
> // If maxIdle is set to 0 and there are take waiters, bump it to 1
> // to preserve liveness.
> final int maxIdleSave = idleObjects.hasTakeWaiters()
>         ? Math.max(1, getMaxIdle())
>         : getMaxIdle();
> 
> Phil
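
For reference, my reading of where that lands in returnObject (abridged 
paraphrase of the destroy test, details and error handling elided -- not 
the verbatim source):

// Abridged paraphrase of GenericObjectPool.returnObject around the
// destroy test:
final int maxIdleSave = idleObjects.hasTakeWaiters()
        ? Math.max(1, getMaxIdle())      // a waiter can take the instance
        : getMaxIdle();
if (isClosed() || (maxIdleSave > -1 && maxIdleSave <= idleObjects.size())) {
    destroy(p);                          // over capacity: drop the instance
} else {
    idleObjects.addFirst(p);             // back to the pool (or to a waiter)
}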
> 
>>> 
>>> Phil
>>> 
>>> 
>>>> LieGrue,
>>>> strub
>>>> 
>>>> 
>>>>> On 23.11.2018 at 16:51, Phil Steitz <phil.ste...@gmail.com> wrote:
>>>>> 
>>>>> On 11/23/18 2:57 AM, Mark Struberg wrote:
>>>>>> should read: This change (putting a new item back to the idle pool) was 
>>>>>> needed to prevent a dead-lock....
>>>>>> 
>>>>>> *grabbing a fresh coffee*
>>>>> I am sorry I did not look carefully enough at this issue before reviewing 
>>>>> the change.  After reviewing the DBCP ticket (OP likely unrelated, IMO), 
>>>>> I think that the right answer here is WONT_FIX. That sounds harsh, but it 
>>>>> seems to me that the obvious solution here is for the user to set maxIdle 
>>>>> to at least 1.  What the fix does is effectively that, without changing 
>>>>> the setting.  If waiting threads die or time out while the create is 
>>>>> happening, there will be an idle instance left in the pool; and in any 
>>>>> case one is put there on its way to getting checked back out.  See 
>>>>> comments on POOL-327.
>>>>> 
>>>>> If the consensus is to insert this workaround to enable the pool to 
>>>>> retain liveness in the use case, it's probably best to use ensureIdle(1, 
>>>>> false) (see POOL-240).  It could be that just moving the call to 
>>>>> ensureIdle inside destroy would be OK.  But as stated above, this breaks 
>>>>> the maxIdle contract.
>>>>> 
>>>>> I see that your original report / use case here is from DBCP, Mark.  Was 
>>>>> it prepared statements or connections that you were trying to limit to 0 
>>>>> idle?  Is there a reason that just using 1 would not work?
>>>>> 
>>>>> Phil
>>>>> 
>>>>> 
>>>>>>> On 23.11.2018 at 10:49, Mark Struberg <strub...@yahoo.de> wrote:
>>>>>>> 
>>>>>>> This change (putting a new item back to the idle pool was needed to 
>>>>>>> prevent a dead-pool
>>>>>>> 
>>>> 
>>> 
>> 
> 
> 

