I got a compilation error against ZooKeeper 3.5 due to HADOOP-18515.
Should it be marked as an incompatible change?
https://issues.apache.org/jira/browse/HADOOP-18515

::

  [ERROR] 
/home/rocky/srcs/bigtop/build/hadoop/rpm/BUILD/hadoop-3.3.6-src/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestZKFailoverControllerStress.java:[135,40]
 cannot find symbol
    symbol:   variable DisconnectReason
    location: class org.apache.zookeeper.server.ServerCnxn

While ZooKeeper 3.5 is already EoL, it would be nice to keep compatibility in a
patch release, especially since only test code is the cause.

Thanks,
Masatake Iwasaki

On 2023/06/18 4:57, Wei-Chiu Chuang wrote:
I was going to do another RC in case something comes up.
But it looks like the only thing that needs to be fixed is the Changelog.


HADOOP-18596 <https://issues.apache.org/jira/browse/HADOOP-18596> and
HADOOP-18633 <https://issues.apache.org/jira/browse/HADOOP-18633>
are related to cloud store semantics, and I don't want to make a judgement
call on them. As far as I can tell their effect can be addressed by supplying a
config option in the application code.
It looks like the feature improves fault tolerance by ensuring files are
synchronized if the modification time differs between the source and
destination. So to me it's the better behavior.

I can make an RC1 over the weekend to fix the Changelog, but that's probably
the only change it's going to have.
On Sat, Jun 17, 2023 at 2:00 AM Xiaoqiao He <hexiaoq...@apache.org> wrote:

Thanks Wei-Chiu for driving this release. The next RC will be prepared,
right? If so, I would like to try and vote on the next RC.
I just noticed that some JIRAs are not included, and some PRs need to be
reverted to pass HBase verification, as mentioned above.

Best Regards,
- He Xiaoqiao


On Fri, Jun 16, 2023 at 9:20 AM Wei-Chiu Chuang
<weic...@cloudera.com.invalid> wrote:

Overall so far so good.

hadoop-api-shim:
built, tested successfully.

cloudstore:
built successfully.

Spark:
built successfully. Passed hadoop-cloud tests.

Ozone:
One test failure due to an unrelated Ozone issue. This test is being disabled
in the latest Ozone code.

org.apache.hadoop.hdds.utils.NativeLibraryNotLoadedException: Unable
to load library ozone_rocksdb_tools from both java.library.path &
resource file libozone_rocksdb_tools.so from jar.
        at org.apache.hadoop.hdds.utils.db.managed.ManagedSSTDumpTool.<init>(ManagedSSTDumpTool.java:49)


Google gcs:
There are two test failures. The tests were added recently by HADOOP-18724
<https://issues.apache.org/jira/browse/HADOOP-18724> in Hadoop 3.3.6. This
is okay; it's not a production code problem and can be addressed in the GCS code.

[ERROR] Errors:
[ERROR] TestInMemoryGoogleContractOpen>AbstractContractOpenTest.testFloatingPointLength:403
» IllegalArgument Unknown mandatory key for gs://fake-in-memory-test-bucket/contract-test/testFloatingPointLength "fs.option.openfile.length"
[ERROR] TestInMemoryGoogleContractOpen>AbstractContractOpenTest.testOpenFileApplyAsyncRead:341
» IllegalArgument Unknown mandatory key for gs://fake-in-memory-test-bucket/contract-test/testOpenFileApplyAsyncRead "fs.option.openfile.length"





On Wed, Jun 14, 2023 at 5:01 PM Wei-Chiu Chuang <weic...@apache.org>
wrote:

The hbase-filesystem tests passed after reverting HADOOP-18596
<https://issues.apache.org/jira/browse/HADOOP-18596> and HADOOP-18633
<https://issues.apache.org/jira/browse/HADOOP-18633> from my local tree.
So I think it's a matter of the default behavior being changed. It's not
the end of the world. I think we can address it by adding an incompatible
change flag and a release note.

On Wed, Jun 14, 2023 at 3:55 PM Wei-Chiu Chuang <weic...@apache.org>
wrote:

Cross-referenced git history and JIRA. The changelog needs some updates.

Not in the release

    1. HDFS-16858 <https://issues.apache.org/jira/browse/HDFS-16858>
    2. HADOOP-18532 <https://issues.apache.org/jira/browse/HADOOP-18532>
    3. HDFS-16861 <https://issues.apache.org/jira/browse/HDFS-16861>
    4. HDFS-16866 <https://issues.apache.org/jira/browse/HDFS-16866>
    5. HADOOP-18320 <https://issues.apache.org/jira/browse/HADOOP-18320>

Updated the fix versions. Will generate a new Changelog in the next RC.

Was able to build HBase and hbase-filesystem without any code change.

hbase has one unit test failure. This one is reproducible even with
Hadoop 3.3.5, so maybe a red herring. Local env or something.

[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed:
9.007 s <<< FAILURE! - in
org.apache.hadoop.hbase.regionserver.TestSyncTimeRangeTracker
[ERROR] org.apache.hadoop.hbase.regionserver.TestSyncTimeRangeTracker.testConcurrentIncludeTimestampCorrectness
  Time elapsed: 3.13 s  <<< ERROR!
java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.hbase.regionserver.TestSyncTimeRangeTracker$RandomTestData.<init>(TestSyncTimeRangeTracker.java:91)
at org.apache.hadoop.hbase.regionserver.TestSyncTimeRangeTracker.testConcurrentIncludeTimestampCorrectness(TestSyncTimeRangeTracker.java:156)

hbase-filesystem has three test failures in TestHBOSSContractDistCp, which
are not reproducible with Hadoop 3.3.5.

[ERROR] Failures:
[ERROR] TestHBOSSContractDistCp>AbstractContractDistCpTest.testDistCpUpdateCheckFileSkip:976->Assert.fail:88
10 errors in file of length 10
[ERROR] TestHBOSSContractDistCp>AbstractContractDistCpTest.testUpdateDeepDirectoryStructureNoChange:270->AbstractContractDistCpTest.assertCounterInRange:290->Assert.assertTrue:41->Assert.fail:88
Files Skipped value 0 too below minimum 1
[ERROR] TestHBOSSContractDistCp>AbstractContractDistCpTest.testUpdateDeepDirectoryStructureToRemote:259->AbstractContractDistCpTest.distCpUpdateDeepDirectoryStructure:334->AbstractContractDistCpTest.assertCounterInRange:294->Assert.assertTrue:41->Assert.fail:88
Files Copied value 2 above maximum 1
[INFO]
[ERROR] Tests run: 240, Failures: 3, Errors: 0, Skipped: 58


Ozone
test in progress. Will report back.


On Tue, Jun 13, 2023 at 11:27 PM Wei-Chiu Chuang <weic...@apache.org>
wrote:

I am inviting anyone to try and vote on this release candidate.

Note:
This is built off branch-3.3.6 plus PR#5741 (aws sdk update) and
PR#5740
(LICENSE file update)

The RC is available at:
https://home.apache.org/~weichiu/hadoop-3.3.6-RC0-amd64/ (for amd64)
https://home.apache.org/~weichiu/hadoop-3.3.6-RC0-arm64/ (for arm64)

Git tag: release-3.3.6-RC0
https://github.com/apache/hadoop/releases/tag/release-3.3.6-RC0

Maven artifacts were built on an x86 machine and are staged at

https://repository.apache.org/content/repositories/orgapachehadoop-1378/

My public key:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

Changelog:
https://home.apache.org/~weichiu/hadoop-3.3.6-RC0-amd64/CHANGELOG.md

Release notes:

https://home.apache.org/~weichiu/hadoop-3.3.6-RC0-amd64/RELEASENOTES.md

This is a relatively small release (by Hadoop standards) containing about
120 commits.
Please give it a try, this RC vote will run for 7 days.


Feature highlights:

SBOM artifacts
----------------------------------------
Starting from this release, Hadoop publishes a Software Bill of Materials
(SBOM) using the CycloneDX Maven plugin. For more information about SBOM,
please see [SBOM](https://cwiki.apache.org/confluence/display/COMDEV/SBOM).
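
Projects that want to produce the same kind of SBOM for their own builds can
attach the plugin themselves. The snippet below is an illustrative sketch, not
Hadoop's actual build configuration; the plugin version shown is an assumption
and should be checked against Maven Central.

```xml
<!-- Illustrative pom.xml sketch: attach the CycloneDX plugin so a
     CycloneDX SBOM is generated during the package phase.
     The version here is an assumption, not Hadoop's pinned version. -->
<plugin>
  <groupId>org.cyclonedx</groupId>
  <artifactId>cyclonedx-maven-plugin</artifactId>
  <version>2.7.9</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>makeBom</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With this in place, `mvn package` writes the SBOM under `target/` alongside the
build artifacts.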

HDFS RBF: RDBMS based token storage support
----------------------------------------
HDFS Router-Based Federation now supports storing delegation tokens in MySQL
([HADOOP-18535](https://issues.apache.org/jira/browse/HADOOP-18535)),
which improves token operation throughput over the original ZooKeeper-based
implementation.
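
To give a feel for how a router would opt into the SQL-backed store, here is a
hedged configuration sketch. The property and class names below are recalled
from the HADOOP-18535 discussion and may not match the shipped code exactly;
treat them as hypothetical and verify against the RBF security documentation
before use.

```xml
<!-- Hypothetical hdfs-site.xml sketch for SQL-backed RBF delegation tokens.
     Property and class names are assumptions; verify against the docs. -->
<property>
  <name>dfs.federation.router.secret.manager.class</name>
  <value>org.apache.hadoop.hdfs.server.federation.router.security.token.SQLDelegationTokenSecretManagerImpl</value>
</property>
<property>
  <name>sql-dt-secret-manager.connection.url</name>
  <!-- db-host and schema name are illustrative placeholders -->
  <value>jdbc:mysql://db-host:3306/tokenstore</value>
</property>
<property>
  <name>sql-dt-secret-manager.connection.username</name>
  <value>hadoop</value>
</property>
```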


New File System APIs
----------------------------------------
[HADOOP-18671](https://issues.apache.org/jira/browse/HADOOP-18671)
moved a number of HDFS-specific APIs to Hadoop Common to make it possible
for certain applications that depend on HDFS semantics to run on other
Hadoop-compatible file systems.

In particular, recoverLease() and isFileClosed() are exposed through the
LeaseRecoverable interface, while setSafeMode() is exposed through the
SafeMode interface.
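
A minimal sketch of how an application might probe for these interfaces
instead of casting to DistributedFileSystem; the file path is illustrative,
and this assumes a 3.3.6 client with the default filesystem configured.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LeaseRecoverable;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.SafeMode;
import org.apache.hadoop.fs.SafeModeAction;

public class LeaseRecoveryExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/data/app.log"); // illustrative path

    // Capability probe: only HDFS-like filesystems implement these.
    if (fs instanceof LeaseRecoverable) {
      LeaseRecoverable lr = (LeaseRecoverable) fs;
      if (!lr.isFileClosed(file)) {
        // Ask the filesystem to start lease recovery for the file.
        boolean recovered = lr.recoverLease(file);
        System.out.println("lease recovered: " + recovered);
      }
    }

    if (fs instanceof SafeMode) {
      // GET only queries safe mode state; it does not change it.
      boolean inSafeMode = ((SafeMode) fs).setSafeMode(SafeModeAction.GET);
      System.out.println("safe mode: " + inSafeMode);
    }
  }
}
```

The instanceof checks are the point of the new design: code written this way
works unchanged on any filesystem that implements the interfaces, instead of
being tied to HDFS classes.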







---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
