+0.

- Built successfully from tag release-3.3.1-RC3 on Ubuntu 20.04
- Verified signature and checksum
- Deployed pseudo-distributed cluster with 3 nodes
- Ran basic HDFS shell commands and sample MR jobs; they worked well.
- Browsed the NN/DN/RM/NM UIs. *YARN UI2 DOES NOT work on my side.*

NOTE: the configuration I used for YARN is attached at the end of this mail, and a
rough sketch of the checksum verification follows below. Please correct me if I
missed something.
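
For reference, a minimal sketch of how the SHA-512 digest of a release artifact can
be re-checked (filenames are illustrative, not the exact RC3 artifact names):

import hashlib
import pathlib
import re

# Illustrative filenames; substitute the actual RC3 tarball and its .sha512 file.
tarball = pathlib.Path("hadoop-3.3.1.tar.gz")
checksum_file = pathlib.Path("hadoop-3.3.1.tar.gz.sha512")

# The published .sha512 file contains a 128-hex-digit digest somewhere in its text;
# extract it regardless of the exact layout, then recompute and compare.
published = re.search(r"[0-9a-fA-F]{128}", checksum_file.read_text()).group(0).lower()
actual = hashlib.sha512(tarball.read_bytes()).hexdigest()

print("checksum OK" if actual == published else "checksum MISMATCH")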

Thanks Wei-Chiu for your great work!

- He Xiaoqiao


On Mon, Jun 14, 2021 at 12:28 PM Takanobu Asanuma <tasan...@apache.org>
wrote:

> +1.
>  - Verified hashes
>  - Confirmed native build on CentOS7
>  - Started kerberized cluster (using docker)
>  - Checked NN/RBF Web UI
>  - Ran basic Erasure Coding shell commands
>
> Thanks for the great work, Wei-Chiu.
>
> - Takanobu
>
> On Sun, Jun 13, 2021 at 3:25 Vinayakumar B <vinayakum...@apache.org>:
>
> > +1 (Binding)
> >
> > 1. Built from Tag.
> > 2. Successful Native Build on Ubuntu 20.04
> > 3. Verified Checksums
> > 4. Deployed the docker cluster with 3 nodes
> > 5. Ran sample MR Jobs
> >
> > -Vinay
> >
> >
> > On Sat, Jun 12, 2021 at 6:40 PM Ayush Saxena <ayush...@gmail.com> wrote:
> >
> > > +1,
> > > Built from Source.
> > > Successful Native Build on Ubuntu 20.04
> > > Verified Checksums
> > > Ran basic hdfs shell commands.
> > > Ran simple MR jobs.
> > > Browsed NN,DN,RM and NM UI.
> > >
> > > Thanx Wei-Chiu for driving the release.
> > >
> > > -Ayush
> > >
> > >
> > > > On 12-Jun-2021, at 1:45 AM, epa...@apache.org wrote:
> > > >
> > > > +1 (binding)
> > > > Eric
> > > >
> > > >
> > > > On Tuesday, June 1, 2021, 5:29:49 AM CDT, Wei-Chiu Chuang <
> > > weic...@apache.org> wrote:
> > > >
> > > > Hi community,
> > > >
> > > > This is release candidate RC3 of the Apache Hadoop 3.3.1 line. All
> > > blocker
> > > > issues have been resolved [1] again.
> > > >
> > > > There are 2 additional issues resolved for RC3:
> > > > * Revert "MAPREDUCE-7303. Fix TestJobResourceUploader failures after
> > > > HADOOP-16878
> > > > * Revert "HADOOP-16878. FileUtil.copy() to throw IOException if the
> > > source
> > > > and destination are the same
> > > >
> > > > There are 4 issues resolved for RC2:
> > > > * HADOOP-17666. Update LICENSE for 3.3.1
> > > > * MAPREDUCE-7348. TestFrameworkUploader#testNativeIO fails. (#3053)
> > > > * Revert "HADOOP-17563. Update Bouncy Castle to 1.68. (#2740)"
> (#3055)
> > > > * HADOOP-17739. Use hadoop-thirdparty 1.1.1. (#3064)
> > > >
> > > > The Hadoop-thirdparty 1.1.1, as previously mentioned, contains two
> > extra
> > > > fixes compared to hadoop-thirdparty 1.1.0:
> > > > * HADOOP-17707. Remove jaeger document from site index.
> > > > * HADOOP-17730. Add back error_prone
> > > >
> > > > *RC tag is release-3.3.1-RC3*
> > > > https://github.com/apache/hadoop/releases/tag/release-3.3.1-RC3
> > > >
> > > > *The RC3 artifacts are at*:
> > > > https://home.apache.org/~weichiu/hadoop-3.3.1-RC3/
> > > > ARM artifacts:
> https://home.apache.org/~weichiu/hadoop-3.3.1-RC3-arm/
> > > >
> > > > *The maven artifacts are hosted here:*
> > > >
> > https://repository.apache.org/content/repositories/orgapachehadoop-1320/
> > > >
> > > > *My public key is available here:*
> > > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > > >
> > > >
> > > > Things I've verified:
> > > > * all blocker issues targeting 3.3.1 have been resolved.
> > > > * stable/evolving API changes between 3.3.0 and 3.3.1 are compatible.
> > > > * LICENSE and NOTICE files checked
> > > > * RELEASENOTES and CHANGELOG
> > > > * rat check passed.
> > > > * Built HBase master branch on top of Hadoop 3.3.1 RC2, ran unit
> tests.
> > > > * Built Ozone master on top of Hadoop 3.3.1 RC2, ran unit tests.
> > > > * Extra: built 50 other open source projects on top of Hadoop 3.3.1
> > RC2.
> > > > Had to patch some of them due to commons-lang migration (Hadoop
> 3.2.0)
> > > and
> > > > dependency divergence. Issues are being identified, but so far none of them
> > > > is a blocker for Hadoop itself.
> > > >
> > > > Please try the release and vote. The vote will run for 5 days.
> > > >
> > > > My +1 to start,
> > > >
> > > > [1] https://issues.apache.org/jira/issues/?filter=12350491
> > > > [2]
> > > >
> > >
> >
> https://github.com/apache/hadoop/compare/release-3.3.1-RC1...release-3.3.1-RC3
> > >
> > >
> > >
> >
>
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->
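<!-- Attached file 1 of 2: core-site.xml (HTTP cross-origin/CORS filter settings for the web UIs). -->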

<configuration>
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
<property>
  <name>hadoop.http.filter.initializers</name>
  <value>org.apache.hadoop.security.HttpCrossOriginFilterInitializer</value>
</property>
<property>
  <description>Enable/disable the cross-origin (CORS) filter.</description>
  <name>hadoop.http.cross-origin.enabled</name>
  <value>true</value>
</property>
<property>
  <description>Comma separated list of origins that are allowed for web
    services needing cross-origin (CORS) support. Wildcards (*) and patterns
    allowed</description>
  <name>hadoop.http.cross-origin.allowed-origins</name>
  <value>*</value>
</property>
<property>
  <description>Comma separated list of methods that are allowed for web
    services needing cross-origin (CORS) support.</description>
  <name>hadoop.http.cross-origin.allowed-methods</name>
  <value>GET,POST,HEAD</value>
</property>
<property>
  <description>Comma separated list of headers that are allowed for web
    services needing cross-origin (CORS) support.</description>
  <name>hadoop.http.cross-origin.allowed-headers</name>
  <value>X-Requested-With,Content-Type,Accept,Origin</value>
</property>
<property>
  <description>The number of seconds a pre-flighted request can be cached
    for web services needing cross-origin (CORS) support.</description>
  <name>hadoop.http.cross-origin.max-age</name>
  <value>1800</value>
</property>
</configuration>

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
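<!-- Attached file 2 of 2: yarn-site.xml. -->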
<configuration>

<!-- Site specific YARN configuration properties -->
<property>
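  <!-- Allow NodeManager local dirs to fill up to 98.5% before the disk is marked unhealthy. -->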
  <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
  <value>98.5</value>
</property>
<property>
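  <!-- Auxiliary shuffle service required to run MapReduce jobs on YARN. -->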
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
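  <!-- Enable the new YARN web UI (UI2). -->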
  <name>yarn.webapp.ui2.enable</name>
  <value>true</value>
</property>
</configuration>
