[crypto] Updated to Support OpenSSL 1.1.0i

2018-11-27 Thread Alex Remily
Commons Crypto Dev Team,

I modified a copy of the commons crypto codebase to support OpenSSL 1.1.0i
(this was before I saw JIRA CRYPTO-140 "adding support for openssl 1.1.0g
in common crypto").  It successfully runs mvn clean test on Mac OS X
10.13.6 and Ubuntu 18.04 with Java 8 and OpenSSL 1.1.0i.  I was unable to
test on Windows, presumably because of CRYPTO-137 "Error compiling on Win64
with Mingw".  I did not preserve compatibility with previous versions of
OpenSSL because I don't need it for the project for which I'm using commons
crypto.

Is there interest in maintaining separate branches for different
versions of OpenSSL, or in incorporating some of my 1.1.0i-compatible code
into a master branch that supports multiple underlying OpenSSL versions?  If so,
I'll fork the repo and send a pull request.  I also need to incorporate
message digest functionality, so if there's interest in expanding that
functionality I'll be glad to share the end product.

Thoughts?

Alex


Re: [crypto] Updated to Support OpenSSL 1.1.0i

2018-11-27 Thread Marcelo Vanzin
Hi Alex,

There have been other attempts at this (e.g.
https://github.com/apache/commons-crypto/pull/90), and my main concern
with those is that they require different builds for OpenSSL 1.0 and
1.1. It seems your approach falls in that bucket, at least without
additional work.

The problem with that is that it becomes super annoying to consume
commons-crypto in other projects. Which maven coordinates do you use?
If you're building something like Spark, which is meant to be deployed
in a bunch of different places with potentially different OpenSSL
versions, how do you even do it?

So IMO any approach to this needs to support both OpenSSL versions in
the same commons-crypto jar file. That doesn't necessarily mean the
same underlying JNI library, although doing that would make creating
binary releases easier (fewer build platforms to juggle).
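As a rough illustration of the single-jar idea (nothing below is existing
commons-crypto code; the class and library names are hypothetical), the jar
could carry a native build per OpenSSL line and fall back at load time:

```java
// Hypothetical loader sketch: try a JNI build linked against OpenSSL 1.1
// first and fall back to a 1.0 build.  Neither library name exists in
// commons-crypto today; this only illustrates "one jar, two native builds".
final class NativeLibraryChooser {

    static void loadPreferred() {
        try {
            System.loadLibrary("commonscrypto_openssl11"); // hypothetical name
        } catch (UnsatisfiedLinkError openssl11Unavailable) {
            // If the 1.1 build cannot be loaded or linked on this host,
            // fall back to the build compiled against the OpenSSL 1.0 headers.
            System.loadLibrary("commonscrypto_openssl10"); // hypothetical name
        }
    }
}
```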




-- 
Marcelo




Re: [crypto] Updated to Support OpenSSL 1.1.0i

2018-11-27 Thread Alex Remily
Marcelo,

Thanks for the note.  When I started down this path, I was only concerned
with getting things working in my own environment, but after reading the
comments on the pull request at the link, I'm in complete agreement with
your goals here.  I'll have to give it some thought, but at first glance I
don't think it would be too difficult to do a dynamic version lookup of
OpenSSL from which to instantiate the version-compatible commons-crypto
object.  It appears that implementations for 1.0 and 1.1 are already
written, and the challenge is how to best merge them into one build.
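For what it's worth, here is a minimal sketch of that dynamic lookup.
Everything below is hypothetical — the native method and class do not exist
in commons-crypto — it only shows dispatching on OpenSSL's numeric version
(1.1.0 is encoded as 0x10100000L):

```java
// Hypothetical sketch of the "dynamic version lookup" idea: a single native
// method would report the runtime OpenSSL version number (from
// OpenSSL_version_num() on 1.1 or SSLeay() on 1.0), and the Java side would
// pick the matching set of bindings.
public final class OpenSslVersionDispatch {

    /** Hypothetical JNI method exposing the runtime OpenSSL version number. */
    static native long nativeVersionNumber();

    /** Marker for which set of native bindings the Java side should use. */
    enum Api { OPENSSL_1_0, OPENSSL_1_1 }

    static Api select(long versionNumber) {
        // 1.1.0 removed the locking callbacks and made EVP_CIPHER_CTX opaque,
        // so the two code paths differ at the JNI level.
        return versionNumber >= 0x10100000L ? Api.OPENSSL_1_1 : Api.OPENSSL_1_0;
    }

    public static void main(String[] args) {
        // With a real native library loaded, select(nativeVersionNumber())
        // would pick the binding; here we just demonstrate the comparison.
        System.out.println(select(0x1010009fL)); // 1.1.0i -> OPENSSL_1_1
        System.out.println(select(0x100020bfL)); // 1.0.2k -> OPENSSL_1_0
    }
}
```

The dispatcher could sit in front of a single shared library that compiles
both code paths, or choose between two separately built libraries; either
would keep one set of Maven coordinates.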

Alex



Re: [VOTE] [CANCEL] Release Apache Commons Pool2 2.6.1

2018-11-27 Thread Phil Steitz

On 11/26/18 1:23 PM, Phil Steitz wrote:

On 11/26/18 8:29 AM, Phil Steitz wrote:

On 11/26/18 6:19 AM, Mark Struberg wrote:

Hi Phil!


Let me start by repeating that other than trying to help diagnose 
bugs and answer user questions, I don't really work on [pool] any 
more, so I don't really have any standing here.  You are the RM, 
so it is completely up to you and the active Commons committers 
to decide what to do with the code.  So fine to ignore my comments.



  I think that the right answer here is WONT_FIX. That sounds 
harsh, but it seems to me
that the obvious solution here is for the user to set maxIdle 
to at least 1.

I thought about that as well!
Setting maxIdle to 1 will make it less likely but a deadlock 
will STILL happen.
I am not sure this is right.  If maxIdle >0, the fix for POOL-240 
(a similar liveness issue) should kick in and work. Each time a 
validation or passivation failure occurs on return, there is a 
call to ensureIdle that will create a new instance and add it to 
the pool if there is capacity to do so.
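For readers skimming the archive, a self-contained sketch of the
configuration being debated (the factory and values are illustrative; only
the commons-pool2 calls are real): with maxIdle of at least 1, a destroy on
return leaves room for the ensureIdle path described above to refill the
pool.

```java
import org.apache.commons.pool2.BasePooledObjectFactory;
import org.apache.commons.pool2.PooledObject;
import org.apache.commons.pool2.impl.DefaultPooledObject;
import org.apache.commons.pool2.impl.GenericObjectPool;
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;

public class MaxIdleLivenessExample {

    // Trivial factory; any pooled type works for the illustration.
    static class StringFactory extends BasePooledObjectFactory<String> {
        @Override
        public String create() {
            return "pooled-instance";
        }

        @Override
        public PooledObject<String> wrap(String value) {
            return new DefaultPooledObject<>(value);
        }
    }

    public static void main(String[] args) throws Exception {
        GenericObjectPoolConfig<String> config = new GenericObjectPoolConfig<>();
        config.setMaxTotal(1);
        // maxIdle = 0 asks the pool to destroy every returning instance; with
        // other threads parked in borrowObject() that is the liveness problem
        // under discussion.  With maxIdle >= 1 the POOL-240 ensureIdle() path
        // can put a replacement into the pool after a destroy.
        config.setMaxIdle(1);

        GenericObjectPool<String> pool =
                new GenericObjectPool<>(new StringFactory(), config);
        String s = pool.borrowObject();
        pool.returnObject(s);
        pool.close();
    }
}
```

Setting config.setMaxIdle(0) in the same sketch reproduces the
"destroy every returning instance" behaviour described in the thread.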
So I fear we really need to tackle this. Stackoverflow and our 
own bug tracker are full of such reports :(


I see one additional actual report on GKOP (the other linked 
issue, which should be similarly patched if the consensus is to 
do this for GOP). The vast majority of the liveness reports that 
we have gotten over the years in DBCP and pool are just pool 
exhausted due to failure to return instances.


The workaround violating maxIdle will restore liveness, but is 
likely the wrong solution here.  Again, up to you guys to judge. 
Note that when you specify maxIdle equal to 1

Crap.  Meant 0 there.
you are telling the pool to destroy all returning instances.  To 
use the pool in this way is silly, IMO.  With your patch, new 
instances are still created for *every* borrow.  The pool and all 
of its machinery is doing nothing other than tracking the number 
of active instances and making sure life cycle events are fired.  
If you want the semantics of returning object with threads 
waiting to be that the returning object is passivated, activated 
and handed directly to a waiter without ever hitting the idle 
instance pool, that would require rewriting a fair bit of the 
borrow / return code.  It is an interesting idea and would solve 
POOL-240 as well.


One final comment.  If you stick with something like what is in 
the code now, you should make sure to passivate the new instance 
before putting it into the pool.  I just noticed that ensureIdle 
does not do that, which I think is a bug in that method.  So if 
you want to proceed with this fix, I would recommend:

1. Move the ensureIdle activations added in POOL-240 into destroy itself.
2. Add passivation to ensureIdle.
3. Implement the corresponding workaround for GKOP.


Sorry, all.  Looking again and thinking a little more, I think the 
best fix for this is actually to prevent the destroy on return in 
the first place if there are take waiters.  The maxIdle invariant 
gets violated with the ensureIdle or direct create() fix anyway and 
just allowing the returning instance to go into the pool avoids the 
needless destroy / create sequence.  A simple way to do this is just 
to recode maxIdle before the destroy test in returnObject.

Instead of

final int maxIdleSave = getMaxIdle();

explicitly recode it:

// If maxIdle is set to 0 and there are take waiters, bump it to 1
// to preserve liveness.
final int maxIdleSave =
        idleObjects.hasTakeWaiters() ? Math.max(1, getMaxIdle()) : getMaxIdle();


Phil



Phil



LieGrue,
strub



On 23.11.2018 at 16:51, Phil Steitz wrote:

On 11/23/18 2:57 AM, Mark Struberg wrote:
should read: This change (putting a new item back to the idle 
pool) was needed to prevent a dead-lock


*grabbing a fresh coffee*
I am sorry I did not look carefully enough at this issue before 
reviewing the change.  After reviewing the DBCP ticket (OP 
likely unrelated, IMO), I think that the right answer here is 
WONT_FIX. That sounds harsh, but it seems to me that the 
obvious solution here is for the user to set maxIdle to at 
least 1.  What the fix does is effectively that, without 
changing the setting.  If waiting threads die or time out while 
the create is happening, there will be an idle instance in the 
pool and for certain one is being put there on the way to 
getting checked back out.  See comments on POOL-327.


If the consensus is to insert this workaround to enable the 
pool to retain liveness in the use case, it's probably best to 
use ensureIdle(1, false) (see POOL-240).  It could be that just 
moving the call to ensureIdle inside destroy would be OK.  But 
as stated above, this breaks the maxIdle contract.


I see that your original report / use case here is from DBCP, 
Mark.  Was it prepared statements or connections that you were 
trying to limit to 0 idle?  Is there a reason that just using 1 
would not work?


Phil



On 23.11.2018 at 10:49, Mark Struberg wrote:

This change