I don't imagine that can be guaranteed to be supported anyway... Spark has
never necessarily worked with the Hadoop 0.x branch, even if it might
happen to. Is this really something you would veto for everyone
because of your deployment?

On Fri, Jun 12, 2015 at 7:18 PM, Thomas Dudziak <tom...@gmail.com> wrote:
> -1 to this, we use it with an old Hadoop version (well, a fork of an old
> version, 0.23). That being said, if there were a nice developer api that
> separates Spark from Hadoop (or rather, two APIs, one for scheduling and one
> for HDFS), then we'd be happy to maintain our own plugins for those.
>
> cheers,
> Tom
>

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org
