Hi Ismael,

It depends on what you mean by “support”. In general, there won’t be new 
feature releases for 1.x (e.g. Spark 1.7) because all new features are 
being added to the master branch. However, there is always room for bug-fix 
releases if a catastrophic bug turns up, and committers can make those at any 
time. In general, though, I’d recommend moving workloads to Spark 2.x. We tried 
to make the migration as easy as possible (a few APIs changed, but not many), 
and 2.x has been out for a long time now and is widely used.
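To give a concrete (if simplified) example of the kind of change involved: the 
most visible one is the entry point, where the separate SparkContext/SQLContext 
setup from 1.x is unified into SparkSession in 2.x. This is only a rough sketch, 
and the app name and input path below are placeholders:

    // Spark 1.x: separate entry points (SparkConf/SparkContext/SQLContext)
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    val conf = new SparkConf().setAppName("my-app")    // "my-app" is a placeholder
    val sc   = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    val oldDf = sqlContext.read.json("events.json")    // hypothetical input path

    // Spark 2.x: SparkSession unifies them into a single entry point
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("my-app").getOrCreate()
    val newDf = spark.read.json("events.json")
    newDf.show()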

We should perhaps write a more explicit maintenance policy, but all of this is 
driven by what committers want to work on; if someone thinks there’s a serious 
enough issue in 1.6 to warrant an update, they can put together a new release. 
It does help to hear from users about this, though, e.g. if you think there’s a 
significant issue that people are missing.

Matei

> On Oct 19, 2017, at 5:20 AM, Ismaël Mejía <ieme...@gmail.com> wrote:
> 
> Hello,
> 
> I noticed that some of the (Big Data / Cloud Managed) Hadoop
> distributions are starting to (phase out / deprecate) Spark 1.x, and I
> was wondering whether the Spark community has already decided when it
> will end support for Spark 1.x. I ask this also considering that the
> latest release in the series is already almost a year old. Any idea
> on this?
> 
> Thanks,
> Ismaël
> 

