We should only make breaking changes when we have a strong reason to do so — 
otherwise, it’s fine to stay on 2.x for a while. For example, maybe there’s a 
way to support Hadoop 3.0 from Spark 2.x as well. So far, none of the JIRAs 
targeting 3.0 seem that compelling, though I could be missing something. The 
most serious ones are probably the ones regarding dependencies that we’re 
forced to pull in — it would be great to minimize those.

Matei

> On Jan 19, 2018, at 10:26 AM, Reynold Xin <r...@databricks.com> wrote:
> 
> We can certainly provide a build for Scala 2.12, even in 2.x.
> 
> 
> On Fri, Jan 19, 2018 at 10:17 AM, Justin Miller 
> <justin.mil...@protectwise.com> wrote:
> Would that mean supporting both 2.12 and 2.11? Could be a while before some 
> of our libraries are off of 2.11.
> 
> Thanks,
> Justin
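
For readers wondering what supporting both versions looks like in practice: cross-publishing a library for Scala 2.11 and 2.12 is typically a matter of sbt's `crossScalaVersions` setting. A minimal sketch (the patch versions here are illustrative, not what Spark would necessarily pin):

```scala
// build.sbt (sketch): declare both Scala versions so sbt can
// compile, test, and publish one artifact per version.
scalaVersion := "2.11.12"
crossScalaVersions := Seq("2.11.12", "2.12.4")
```

With this in place, prefixing a task with `+` (e.g. `sbt +test +publishLocal`) runs it once per listed Scala version, producing separately-suffixed artifacts (`_2.11`, `_2.12`).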
> 
> 
>> On Jan 19, 2018, at 10:53 AM, Koert Kuipers <ko...@tresata.com> wrote:

>> 
>> I was expecting to be able to move to Scala 2.12 sometime this year.
>> 
>> If this cannot be done in Spark 2.x, then that could be a compelling reason 
>> to move Spark 3 up to 2018, I think.
>> 
>> Hadoop 3 sounds great, but personally I have no use case for it yet.
>> 
>> On Fri, Jan 19, 2018 at 12:31 PM, Sean Owen <so...@cloudera.com> wrote:
>> Forking this thread to muse about Spark 3. Like Spark 2, I assume it would 
>> be more about making all those accumulated breaking changes and updating 
>> lots of dependencies. Hadoop 3 looms large in that list as well as Scala 
>> 2.12.
>> 
>> Spark 1 was released in May 2014, and Spark 2 in July 2016. If Spark 2.3 is 
>> out in Feb 2018 and it takes the now-usual 6 months until the next release, 
>> Spark 3 could reasonably be next.
>> 
>> However, release cycles are naturally slowing down, and it could also be 
>> said that 2019 would be more on schedule for Spark 3.
>> 
>> Nothing particularly urgent about deciding, but I'm curious if anyone had an 
>> opinion on whether to move on to Spark 3 next or just continue with 2.4 
>> later this year.
>> 
>> On Fri, Jan 19, 2018 at 11:13 AM Sean Owen <so...@cloudera.com> wrote:
>> Yeah, if users are using Kryo directly, they should be insulated from a 
>> Spark-side change because of shading.
>> However, this also entails updating (unshaded) Chill from 0.8.x to 0.9.x. I 
>> am not sure whether that causes problems for apps.
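
For context on the shading Sean refers to: relocating a dependency's packages under a private namespace is what insulates applications from Spark's internal copy of that dependency. With the sbt-assembly plugin, the mechanism looks roughly like this (a sketch only; the relocation target `org.spark_project` is illustrative, and Spark's actual build does this through its own Maven/sbt configuration):

```scala
// build.sbt (sketch; requires the sbt-assembly plugin).
// Rewrite Kryo's packages at assembly time so the internal copy
// cannot conflict with an application's own Kryo version.
assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("com.esotericsoftware.kryo.**" -> "org.spark_project.kryo.@1").inAll
)
```

Because the shaded classes live under a different package name, an app depending directly on Kryo resolves its own copy, untouched by whatever version the framework bundles.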
>> 
>> Normally I'd avoid any major-version change in a minor release. This one 
>> looked potentially entirely internal.
>> I think if there are any doubts, we can leave it for Spark 3. There was a 
>> bug report that needed a fix from Kryo 4, but it might be minor after all.
>> 
>> 
> 
> 


---------------------------------------------------------------------
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
