So does the RM documentation need an update telling the RM that all supported 
configurations should be tested? Should the release script run through them? 

This is new to me because I’ve been on branch-1 forever, so consider this 
newbie feedback. :-) Java 11 and Hadoop 3 stuff being a potential blocker is 
surprising. I missed the earlier discussion. 


> On Dec 9, 2020, at 5:22 PM, Nick Dimiduk <ndimi...@apache.org> wrote:
> 
> On Wed, Dec 9, 2020 at 5:11 PM Andrew Purtell <andrew.purt...@gmail.com>
> wrote:
> 
>> Given the nature of this issue, I’d ask you to try Duo’s suggestion and,
>> if a newer version of Hadoop 3 succeeds, to let that be sufficient this
>> time around.
>> 
> 
> The only version of Hadoop 3 we've supported for JDK11 is Hadoop 3.2. The
> Hadoop 3 version specified in the pom has not changed since 2.3.0. I can
> try the local build against the newer Hadoop 3.3.0, but that doesn't change
> this being a regression.
> 
> It also occurs to me that perhaps classpath order is at play here, and
> there are different versions of Maven in different environments. I haven't
> investigated this yet.
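> 
> One way to compare environments (a sketch; the module selector and the grep
> are illustrative, assuming a checkout of the RC source) is to look at which
> Jetty Maven actually resolves onto the hbase-asyncfs test classpath:
> 
>   mvn -pl hbase-asyncfs dependency:tree -Dhadoop.profile=3.0 | grep -i jetty
> 
> Seeing jetty-util 9.3.x there rather than 9.4.x would support the classpath
> theory.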
> 
>> All,
>> 
>> I will start a DISCUSS thread as a follow-up on what should be considered
>> required and veto-worthy for an RC and what should not, with regard to
>> optional build profiles. In my opinion ‘required’ should be defined as what
>> is enabled by default in the release build, and ‘optional’ is everything
>> else, until we agree to specifically add one or more optional build
>> profiles to the required list.
>> 
> 
> To the best of my knowledge, JDK11 is not an optional release profile.
> Preliminary support for JDK11 was introduced in 2.3 as a supported
> configuration, and both our book and our build infrastructure were extended
> accordingly. My understanding is that this discussion was settled in the
> lead-up to 2.3.0.
> 
> https://hbase.apache.org/book.html#java
> https://hbase.apache.org/book.html#hadoop
> 
>>> On Dec 9, 2020, at 4:31 PM, 张铎 <palomino...@gmail.com> wrote:
>>> 
>>> OK, I think the problem is a bug in Jetty 9.3. The JavaVersion
>>> implementations in 9.3 and 9.4 are completely different: Jetty 9.4 has no
>>> problem parsing 11.0.9.1, but Jetty 9.3 can only parse versions with two
>>> dots, i.e., 11.0.9. (That would also explain why the nightly jdk11 builds
>>> pass: 11.0.6 has only two dots.)
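>>> 
>>> As a minimal sketch (not Jetty's actual implementation) of why a strict
>>> three-part parser accepts 11.0.9 but throws the "Invalid Java version"
>>> error seen below on 11.0.9.1:
>>> 
>>>   import java.util.regex.Matcher;
>>>   import java.util.regex.Pattern;
>>> 
>>>   public class VersionParseSketch {
>>>     // Accepts exactly "major.minor.micro" and nothing longer.
>>>     private static final Pattern THREE_PART =
>>>         Pattern.compile("(\\d+)\\.(\\d+)\\.(\\d+)");
>>> 
>>>     static int[] parse(String version) {
>>>       Matcher m = THREE_PART.matcher(version);
>>>       // matches() must consume the whole string, so the four-part
>>>       // "11.0.9.1" is rejected while "11.0.9" is accepted.
>>>       if (!m.matches()) {
>>>         throw new IllegalArgumentException("Invalid Java version " + version);
>>>       }
>>>       return new int[] { Integer.parseInt(m.group(1)),
>>>           Integer.parseInt(m.group(2)), Integer.parseInt(m.group(3)) };
>>>     }
>>> 
>>>     public static void main(String[] args) {
>>>       parse("11.0.9");   // ok
>>>       parse("11.0.9.1"); // throws: Invalid Java version 11.0.9.1
>>>     }
>>>   }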
>>> 
>>> I think you could add -Dhadoop-three.version to set the Hadoop 3 version
>>> explicitly to a newer release which uses Jetty 9.4 to solve the problem.
>>> IIRC the newest release on each active release line has upgraded to Jetty
>>> 9.4, which is why we need to shade Jetty: Jetty 9.3 and 9.4 are
>>> incompatible.
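>>> 
>>> For example (a sketch; 3.3.0 is the version Nick mentions trying, and any
>>> release line whose latest release ships Jetty 9.4 should work):
>>> 
>>>   mvn clean install -Dhadoop.profile=3.0 -Dhadoop-three.version=3.3.0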
>>> 
>>> Thanks.
>>> 
>>> On Thu, Dec 10, 2020 at 8:21 AM 张铎 (Duo Zhang) <palomino...@gmail.com> wrote:
>>> 
>>>> On the nightly jdk11 build, the JDK version is
>>>> 
>>>> AdoptOpenJDK-11.0.6+10
>>>> 
>>>> On Thu, Dec 10, 2020 at 7:21 AM Andrew Purtell <apurt...@apache.org> wrote:
>>>> 
>>>>> Let me rephrase.
>>>>> 
>>>>> I'm all for testing the optional configurations. I'm less supportive of
>>>>> vetoing releases when an optional configuration has some issue due to a
>>>>> third-party component. I would like to see us veto only on the required
>>>>> configurations, and otherwise file JIRAs to fix up the nits on the
>>>>> optional ones.
>>>>> 
>>>>> 
>>>>> On Wed, Dec 9, 2020 at 3:19 PM Andrew Purtell <apurt...@apache.org>
>>>>> wrote:
>>>>> 
>>>>>>> parseJDK9:71, JavaVersion (org.eclipse.jetty.util)
>>>>>> 
>>>>>> So unless I am mistaken, some Jetty utility class is not able to parse
>>>>>> the version string of your particular JDK/JRE.
>>>>>> 
>>>>>> We can try to downgrade Jetty but I am not sure in general how we are
>>>>>> supposed to take on the risk of third party dependencies doing the wrong
>>>>>> thing in an optional configuration. I for one do not want to deal with a
>>>>>> combinatorial explosion of transitive dependencies when releasing.
>>>>>> 
>>>>>> On Wed, Dec 9, 2020 at 2:41 PM Nick Dimiduk <ndimi...@apache.org> wrote:
>>>>>> 
>>>>>>> This is coming out of Jetty + Hadoop. This build has a regression in
>>>>>>> our JDK11 support, or perhaps there's a regression in the upstream
>>>>>>> Hadoop against which JDK11 builds.
>>>>>>> 
>>>>>>> I'm afraid I must vote -1 until we can sort out the issue. I'd
>>>>>>> appreciate it if someone else could attempt to repro, to help ensure
>>>>>>> it's not just me.
>>>>>>> 
>>>>>>> Thanks,
>>>>>>> Nick
>>>>>>> 
>>>>>>> (Apologies for the crude stack trace; this is copied out of an attached
>>>>>>> debugger)
>>>>>>> 
>>>>>>> parseJDK9:71, JavaVersion (org.eclipse.jetty.util)
>>>>>>> parse:49, JavaVersion (org.eclipse.jetty.util)
>>>>>>> <clinit>:43, JavaVersion (org.eclipse.jetty.util)
>>>>>>> findAndFilterContainerPaths:185, WebInfConfiguration (org.eclipse.jetty.webapp)
>>>>>>> preConfigure:155, WebInfConfiguration (org.eclipse.jetty.webapp)
>>>>>>> preConfigure:485, WebAppContext (org.eclipse.jetty.webapp)
>>>>>>> doStart:521, WebAppContext (org.eclipse.jetty.webapp)
>>>>>>> start:68, AbstractLifeCycle (org.eclipse.jetty.util.component)
>>>>>>> start:131, ContainerLifeCycle (org.eclipse.jetty.util.component)
>>>>>>> doStart:113, ContainerLifeCycle (org.eclipse.jetty.util.component)
>>>>>>> doStart:61, AbstractHandler (org.eclipse.jetty.server.handler)
>>>>>>> start:68, AbstractLifeCycle (org.eclipse.jetty.util.component)
>>>>>>> start:131, ContainerLifeCycle (org.eclipse.jetty.util.component)
>>>>>>> start:427, Server (org.eclipse.jetty.server)
>>>>>>> doStart:105, ContainerLifeCycle (org.eclipse.jetty.util.component)
>>>>>>> doStart:61, AbstractHandler (org.eclipse.jetty.server.handler)
>>>>>>> doStart:394, Server (org.eclipse.jetty.server)
>>>>>>> start:68, AbstractLifeCycle (org.eclipse.jetty.util.component)
>>>>>>> start:1140, HttpServer2 (org.apache.hadoop.http)
>>>>>>> start:177, NameNodeHttpServer (org.apache.hadoop.hdfs.server.namenode)
>>>>>>> startHttpServer:872, NameNode (org.apache.hadoop.hdfs.server.namenode)
>>>>>>> initialize:694, NameNode (org.apache.hadoop.hdfs.server.namenode)
>>>>>>> <init>:940, NameNode (org.apache.hadoop.hdfs.server.namenode)
>>>>>>> <init>:913, NameNode (org.apache.hadoop.hdfs.server.namenode)
>>>>>>> createNameNode:1646, NameNode (org.apache.hadoop.hdfs.server.namenode)
>>>>>>> createNameNode:1314, MiniDFSCluster (org.apache.hadoop.hdfs)
>>>>>>> configureNameService:1083, MiniDFSCluster (org.apache.hadoop.hdfs)
>>>>>>> createNameNodesAndSetConf:958, MiniDFSCluster (org.apache.hadoop.hdfs)
>>>>>>> initMiniDFSCluster:890, MiniDFSCluster (org.apache.hadoop.hdfs)
>>>>>>> <init>:518, MiniDFSCluster (org.apache.hadoop.hdfs)
>>>>>>> build:477, MiniDFSCluster$Builder (org.apache.hadoop.hdfs)
>>>>>>> startMiniDFSCluster:108, AsyncFSTestBase (org.apache.hadoop.hbase.io.asyncfs)
>>>>>>> setUp:87, TestFanOutOneBlockAsyncDFSOutput (org.apache.hadoop.hbase.io.asyncfs)
>>>>>>> invoke0:-1, NativeMethodAccessorImpl (jdk.internal.reflect)
>>>>>>> invoke:62, NativeMethodAccessorImpl (jdk.internal.reflect)
>>>>>>> invoke:43, DelegatingMethodAccessorImpl (jdk.internal.reflect)
>>>>>>> invoke:566, Method (java.lang.reflect)
>>>>>>> runReflectiveCall:59, FrameworkMethod$1 (org.junit.runners.model)
>>>>>>> run:12, ReflectiveCallable (org.junit.internal.runners.model)
>>>>>>> invokeExplosively:56, FrameworkMethod (org.junit.runners.model)
>>>>>>> invokeMethod:33, RunBefores (org.junit.internal.runners.statements)
>>>>>>> evaluate:24, RunBefores (org.junit.internal.runners.statements)
>>>>>>> evaluate:27, RunAfters (org.junit.internal.runners.statements)
>>>>>>> evaluate:38, SystemExitRule$1 (org.apache.hadoop.hbase)
>>>>>>> call:288, FailOnTimeout$CallableStatement (org.junit.internal.runners.statements)
>>>>>>> call:282, FailOnTimeout$CallableStatement (org.junit.internal.runners.statements)
>>>>>>> run:264, FutureTask (java.util.concurrent)
>>>>>>> run:834, Thread (java.lang)
>>>>>>> 
>>>>>>> On Wed, Dec 9, 2020 at 2:08 PM Nick Dimiduk <ndimi...@apache.org> wrote:
>>>>>>> 
>>>>>>>> On Mon, Dec 7, 2020 at 1:51 PM Nick Dimiduk <ndimi...@apache.org> wrote:
>>>>>>>> 
>>>>>>>>> Has anyone successfully built/run this RC with JDK11 and the Hadoop 3
>>>>>>>>> profile? I'm seeing test failures locally in the hbase-asyncfs module.
>>>>>>>>> Reproducible with:
>>>>>>>>> 
>>>>>>>>> $ JAVA_HOME=/Library/Java/JavaVirtualMachines/adoptopenjdk-11.jdk/Contents/Home \
>>>>>>>>>     mvn clean install -Dhadoop.profile=3.0 \
>>>>>>>>>     -Dtest=org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput
>>>>>>>>> ...
>>>>>>>>> [INFO] Running
>>>>>>>>> org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput
>>>>>>>>> 
>>>>>>>>> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed:
>>>>>>>>> 1.785 s <<< FAILURE! - in
>>>>>>>>> org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput
>>>>>>>>> 
>>>>>>>>> [ERROR]
>>>>>>>>> org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput  Time
>>>>>>>>> elapsed: 1.775 s  <<< ERROR!
>>>>>>>>> 
>>>>>>>>> java.lang.ExceptionInInitializerError
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput.setUp(TestFanOutOneBlockAsyncDFSOutput.java:87)
>>>>>>>>> 
>>>>>>>>> Caused by: java.lang.IllegalArgumentException: Invalid Java version
>>>>>>>>> 11.0.9.1
>>>>>>>>>         at
>>>>>>>>> org.apache.hadoop.hbase.io.asyncfs.TestFanOutOneBlockAsyncDFSOutput.setUp(TestFanOutOneBlockAsyncDFSOutput.java:87)
>>>>>>>>> 
>>>>>>>> 
>>>>>>>> This failure is not isolated to macOS. I ran this build on an Ubuntu VM
>>>>>>>> with the same AdoptOpenJDK 11.0.9.1. Why don't we see this in Jenkins?
>>>>>>>> 
>>>>>>>> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed:
>>>>>>>> 0.011 s <<< FAILURE! - in
>>>>>>>> org.apache.hadoop.hbase.regionserver.wal.TestAsyncProtobufLog
>>>>>>>> 
>>>>>>>> [ERROR] org.apache.hadoop.hbase.regionserver.wal.TestAsyncProtobufLog
>>>>>>>> Time elapsed: 0.003 s  <<< ERROR!
>>>>>>>> 
>>>>>>>> java.lang.ExceptionInInitializerError
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hbase.regionserver.wal.TestAsyncProtobufLog.setUpBeforeClass(TestAsyncProtobufLog.java:56)
>>>>>>>> 
>>>>>>>> Caused by: java.lang.IllegalArgumentException: Invalid Java version
>>>>>>>> 11.0.9.1
>>>>>>>>         at
>>>>>>>> org.apache.hadoop.hbase.regionserver.wal.TestAsyncProtobufLog.setUpBeforeClass(TestAsyncProtobufLog.java:56)
>>>>>>>> 
>>>>>>>> On Thu, Dec 3, 2020 at 4:05 PM Andrew Purtell <apurt...@apache.org> wrote:
>>>>>>>>> 
>>>>>>>>>> Please vote on this Apache hbase release candidate, hbase-2.4.0RC1
>>>>>>>>>> 
>>>>>>>>>> The VOTE will remain open for at least 72 hours.
>>>>>>>>>> 
>>>>>>>>>> [ ] +1 Release this package as Apache hbase 2.4.0
>>>>>>>>>> [ ] -1 Do not release this package because ...
>>>>>>>>>> 
>>>>>>>>>> The tag to be voted on is 2.4.0RC1:
>>>>>>>>>> 
>>>>>>>>>>   https://github.com/apache/hbase/tree/2.4.0RC1
>>>>>>>>>> 
>>>>>>>>>> The release files, including signatures, digests, as well as
>>>>>>> CHANGES.md
>>>>>>>>>> and RELEASENOTES.md included in this RC can be found at:
>>>>>>>>>> 
>>>>>>>>>>   https://dist.apache.org/repos/dist/dev/hbase/2.4.0RC1/
>>>>>>>>>> 
>>>>>>>>>> Customarily Maven artifacts would be available in a staging repository.
>>>>>>>>>> Unfortunately I was forced to terminate the Maven deploy step after
>>>>>>>>>> the upload ran for more than four hours and my build equipment
>>>>>>>>>> needed to be relocated, with loss of network connectivity. This RC has
>>>>>>>>>> been delayed long enough. A temporary Maven repository is not a
>>>>>>>>>> requirement for a vote. I will retry Maven deploy tomorrow. I can
>>>>>>>>>> promise the artifacts for this RC will be staged in Apache Nexus and
>>>>>>>>>> ready for release well ahead of the earliest possible time this vote
>>>>>>>>>> can complete.
>>>>>>>>>> 
>>>>>>>>>> Artifacts were signed with the apurt...@apache.org key which can be
>>>>>>>>>> found in:
>>>>>>>>>> 
>>>>>>>>>>   https://dist.apache.org/repos/dist/release/hbase/KEYS
>>>>>>>>>> 
>>>>>>>>>> The API compatibility report for this RC can be found at:
>>>>>>>>>> 
>>>>>>>>>>    https://dist.apache.org/repos/dist/dev/hbase/2.4.0RC1/api_compare_2.4.0RC1_to_2.3.0.html
>>>>>>>>>> 
>>>>>>>>>> The changes are mostly added methods, which conform to the compatibility
>>>>>>>>>> guidelines for a new minor release. There is one change to the public
>>>>>>>>>> Region interface that alters the return type of a method. This is
>>>>>>>>>> equivalent to a removal followed by an addition and can be a binary
>>>>>>>>>> compatibility problem. However, to your RM's eye the change looks
>>>>>>>>>> intentional and is part of an API improvement project, and a
>>>>>>>>>> compatibility method is not possible here because Java doesn't consider
>>>>>>>>>> return type when deciding if one method signature duplicates another.
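>>>>>>>>>> 
>>>>>>>>>> As a minimal illustration (hypothetical method names, not the actual
>>>>>>>>>> Region API) of why no compatibility overload can be added:
>>>>>>>>>> 
>>>>>>>>>>   interface Example {
>>>>>>>>>>     long getValue();
>>>>>>>>>>     // OptionalLong getValue(); // would not compile: the signature
>>>>>>>>>>     // (name plus parameter types) duplicates the method above, since
>>>>>>>>>>     // Java ignores the return type when comparing signatures.
>>>>>>>>>>   }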
>>>>>>>>>> 
>>>>>>>>>> To learn more about Apache HBase, please see
>>>>>>>>>> 
>>>>>>>>>>   http://hbase.apache.org/
>>>>>>>>>> 
>>>>>>>>>> Thanks,
>>>>>>>>>> Your HBase Release Manager
>>>>>>>>>> 
>>>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> --
>>>>>> Best regards,
>>>>>> Andrew
>>>>>> 
>>>>>> Words like orphans lost among the crosstalk, meaning torn from truth's
>>>>>> decrepit hands
>>>>>>  - A23, Crosstalk
>>>>>> 
>>>>> 
>>>>> 
>>>>> --
>>>>> Best regards,
>>>>> Andrew
>>>>> 
>>>>> Words like orphans lost among the crosstalk, meaning torn from truth's
>>>>> decrepit hands
>>>>>  - A23, Crosstalk
>>>>> 
>>>> 
>> 
