> On 25 Mar 2016, at 01:59, Mridul Muralidharan <mri...@gmail.com> wrote:
> 
> Removing compatibility (with the JDK, etc.) can be done with a major
> release - given that JDK 7 was EOLed a while back and is now unsupported,
> we have to decide whether we drop support for it in 2.0 or in 3.0
> (2+ years from now).
> 
> Given the functionality & performance benefits of going to JDK 8, future
> enhancements relevant in the 2.x timeframe (Scala, dependencies) which
> require it, and simplicity w.r.t. code, test & support, it looks like a
> good checkpoint to drop JDK 7 support.
> 
> As already mentioned in the thread, existing YARN clusters are unaffected
> if they want to continue running JDK 7 and yet use Spark 2 (install JDK 8
> on all nodes and use it via JAVA_HOME, or worst case distribute JDK 8 as
> an archive - suboptimal).
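
(For concreteness, that archive route would look something like the sketch
below - untested; the tarball name, path and unpacked layout are invented,
though spark.yarn.dist.archives and the appMasterEnv/executorEnv properties
are real YARN-mode configs. See the caveats that follow.)

  spark-submit \
    --master yarn \
    --conf spark.yarn.dist.archives=hdfs:///tools/jdk-8.tgz#jdk8 \
    --conf spark.yarn.appMasterEnv.JAVA_HOME=./jdk8/jdk1.8.0 \
    --conf spark.executorEnv.JAVA_HOME=./jdk8/jdk1.8.0 \
    ...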

You wouldn't want to distribute it as an archive; it's not just the binaries,
it's the install phase. And you'd better remember to put the JCE policy jars
in on top of the JDK for Kerberos to work.
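
(On JDK 8 and earlier that means dropping the unlimited-strength policy
files into the JDK install itself - roughly, with illustrative source paths:)

  # install the unlimited-strength JCE policy files over the defaults
  # shipped with the JDK (source file locations illustrative)
  cp local_policy.jar US_export_policy.jar $JAVA_HOME/jre/lib/security/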

Setting up environment variables to point to JDK 8 in the launched
app/container avoids that. Yes, the ops team do need to install Java, but if
you offer them the choice between "installing a centrally managed Java" and
"having my code try to install it", they should go for the managed option.

One thing to consider for 2.0 is making it easier to set up those env vars
for both Python and Java - and, as the techniques for mixing JDK versions
are clearly not that well known, documenting them.
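
(The Python side is the same trick - e.g. pointing PYSPARK_PYTHON at the
interpreter you want in the containers; the interpreter path here is
illustrative:)

  spark-submit \
    --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=/opt/python3/bin/python \
    --conf spark.executorEnv.PYSPARK_PYTHON=/opt/python3/bin/python \
    ...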

(FWIW I've written code which even uploads its own hadoop-* JARs, but what
gets you is changes in the hadoop-native libs; you do need to get the PATH
var spot on.)
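
(e.g. pointing the library path at wherever the matching native libs live -
the path is illustrative, though spark.driver.extraLibraryPath and
spark.executor.extraLibraryPath are real properties:)

  spark-submit \
    --conf spark.driver.extraLibraryPath=/opt/hadoop/lib/native \
    --conf spark.executor.extraLibraryPath=/opt/hadoop/lib/native \
    ...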


> I am unsure about Mesos (standalone might be an easier upgrade, I guess?).
> 
> 
> The proposal is for the 1.6.x line to continue to be supported with
> critical fixes; newer features will require 2.x, and so JDK 8.
> 
> Regards 
> Mridul 
> 
> 

