We've dropped Hadoop 1.x support in Spark 2.0. There is also a proposal to drop Hadoop 2.2 and 2.3, i.e., the minimum supported Hadoop version would be 2.4. The main advantage is that we could then focus our Jenkins resources (and the associated maintenance burden) on builds for Hadoop 2.6/2.7. It is my understanding that all Hadoop vendors have moved away from 2.2/2.3, but there might be some users still on these older versions.
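For context, the Hadoop version is selected at build time via a Maven profile, so dropping 2.2/2.3 would mean removing their profiles and Jenkins jobs. A build against a newer Hadoop looks roughly like this (profile names follow the Spark build docs; the exact Hadoop point version here is illustrative):

```shell
# Sketch: build Spark against Hadoop 2.6 using the hadoop-2.6 Maven profile.
# With 2.2/2.3 dropped, only the hadoop-2.4/2.6/2.7 profiles would remain.
./build/mvn -Phadoop-2.6 -Dhadoop.version=2.6.4 -DskipTests clean package
```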
What do you think about this idea?