Not to throw fuel on the fire, so to speak, nor to discourage anybody from
spending time on JDK 9 or 10, but our general thinking at Twitter is that
we'll skip over these versions and move straight to JDK 11 as well.

That said, this is still a bit of an aspiration for us rather than
something we're working on right away in the Hadoop team (there is some
other tech debt to iron out before we get to that).

Cheers,

Joep

On Wed, Nov 7, 2018 at 2:18 AM Steve Loughran <ste...@hortonworks.com>
wrote:

>
> If there are problems w/ JDK11 then we should be talking to oracle about
> them to have them fixed. Is there an ASF JIRA on this issue yet?
>
> As usual, the large physical clusters will be slow to upgrade, but the
> smaller cloud ones can get away with being agile, and since I believe
> YARN lets you run code with a different path to the JVM, people can mix
> things. This makes it possible for people to run Java 11+ apps even if
> Hadoop itself is on Java 8.
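> As an illustrative sketch of that mixed-JVM setup: with MapReduce on
> YARN, the per-task environment properties can point tasks at a different
> JAVA_HOME than the one the daemons run on. The JVM paths below are
> placeholders, so check them against your own install:

```xml
<!-- Sketch only: run MapReduce tasks on a newer JVM than the Hadoop
     daemons. The /usr/lib/jvm/java-11 path is a placeholder. -->
<property>
  <name>mapreduce.map.env</name>
  <value>JAVA_HOME=/usr/lib/jvm/java-11</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>JAVA_HOME=/usr/lib/jvm/java-11</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>JAVA_HOME=/usr/lib/jvm/java-11</value>
</property>
```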
>
> And this time we may want to think about which release we declare "ready
> for Java 11", being proactive rather than lagging behind the public
> releases by many years (6=>7, 7=>8). Of course, we'll have to stay with the
> Java 8 language for a while, but there's a lot more we can do there in our
> code. I'm currently (HADOOP-14556) embracing Optional, as it makes explicit
> when things are potentially null, and while it's crippled by the Java
> language itself (
> http://steveloughran.blogspot.com/2018/10/javas-use-of-checked-exceptions.html
> ), it's still something we can embrace (*)
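> To illustrate the point about Optional making potential nulls explicit, a
> minimal sketch (findHomeDir is a made-up method, not Hadoop code):

```java
import java.util.Optional;

public class OptionalDemo {
    // Hypothetical lookup: returns an explicit Optional instead of null,
    // so absence is visible in the signature.
    static Optional<String> findHomeDir(String user) {
        return "alice".equals(user)
            ? Optional.of("/home/alice")
            : Optional.empty();
    }

    public static void main(String[] args) {
        // The caller has to handle the absent case; no silent NPE later.
        String dir = findHomeDir("bob").orElse("/tmp");
        System.out.println(dir);
    }
}
```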
>
>
> Takanobu,
>
> I've been watching the work you, Akira and others have been putting in for
> Java 9+ support and it's wonderful. If we had an annual award for
> "persevering in the presence of extreme suffering", it'd be the top
> candidate for this year's work.
>
> It means we are lined up to let people run on Java 11 if they want, and
> gives us the option of moving to Java 11 sooner rather than later. I'm
> also looking at JUnit 5, wondering when I can embrace it fully (i.e. not
> worry about cherry-picking code into JUnit 4 tests).
>
> Thanks for all your work
>
> -Steve
>
> (*) I also have in the test code of that branch a binding of UGI.doAs which
> takes closures:
>
>
> https://github.com/steveloughran/hadoop/blob/s3/HADOOP-14556-delegation-token/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/LambdaTestUtils.java#L865
>
>
> This lets me do things like:
>
>     assertEquals("FS username in doAs()",
>         ALICE,
>         doAs(bobUser, () -> fs.getUsername()));
>
> If someone wants to actually pull this support into UGI itself, I'm happy
> to review, as moving our doAs code to things like bobUser.doAs(() ->
> fs.create(path)) will transform all those UGI code users.
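> For anyone curious what such a closure-taking doAs could look like, a
> simplified, self-contained sketch. FakeUser is a stand-in class invented
> for illustration; the real UserGroupInformation.doAs takes a
> PrivilegedExceptionAction and actually switches the security context:

```java
import java.util.concurrent.Callable;

// Stand-in for a user context; the real class would be UserGroupInformation.
class FakeUser {
    private final String name;

    FakeUser(String name) {
        this.name = name;
    }

    String getName() {
        return name;
    }

    // Closure-friendly doAs: run the callable "as" this user and return its
    // result. A real implementation would wrap the callable in a
    // PrivilegedExceptionAction and switch the security context around it.
    <T> T doAs(Callable<T> action) throws Exception {
        return action.call();
    }
}

public class DoAsSketch {
    public static void main(String[] args) throws Exception {
        FakeUser bob = new FakeUser("bob");
        // Lambdas replace the anonymous PrivilegedExceptionAction boilerplate.
        String result = bob.doAs(() -> "ran as " + bob.getName());
        System.out.println(result);
    }
}
```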
>
> On 6 Nov 2018, at 05:57, Takanobu Asanuma <tasan...@apache.org> wrote:
>
> Thanks for your reply, Owen.
>
> That said, I’d be surprised if the work items for JDK 9 and 10 aren’t a
> strict subset of the issues getting to JDK 11.
>
> Most of the issues that we have fixed are a subset of the JDK 11 ones, but
> there seem to be some exceptions. HADOOP-15905 is a bug in JDK 9/10 which
> has been fixed in JDK 11. It is difficult to fix since JDK 9/10 are
> already EOL. I wonder how we should treat this kind of error going
> forward.
>
> I've hit at least one pretty serious JVM bug in JDK 11
> Could you please share the details?
>
> In any case, we should be careful about which version of Hadoop is ready
> for JDK 11. It will take some time yet. And we also need to keep supporting
> JDK 8 for a while.
>
> Regards,
> - Takanobu
>
