[jira] [Reopened] (HDFS-16570) RBF: The router using MultipleDestinationMountTableResolver remove Multiple subcluster data under the mount point failed

2022-05-06 Thread Xiping Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiping Zhang reopened HDFS-16570:
-

> RBF: The router using MultipleDestinationMountTableResolver remove Multiple 
> subcluster data under the mount point failed
> 
>
> Key: HDFS-16570
> URL: https://issues.apache.org/jira/browse/HDFS-16570
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Xiping Zhang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Please look at the following example :
> hadoop>hdfs dfsrouteradmin -add /home/data ns0,ns1 /home/data -order RANDOM
> Successfully removed mount point /home/data
> hadoop>hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source                    Destinations              Owner                     Group                     Mode       Quota/Usage
> /home/data                ns0->/home/data,ns1->/home/data  zhangxiping               Administrators            rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> hadoop>hdfs dfs -touch hdfs://ns0/home/data/test/fileNs0.txt
> hadoop>hdfs dfs -touch hdfs://ns1/home/data/test/fileNs1.txt
> hadoop>hdfs dfs -ls hdfs://ns0/home/data/test/fileNs0.txt
> -rw-r--r--   3 zhangxiping supergroup          0 2022-05-06 18:01 hdfs://ns0/home/data/test/fileNs0.txt
> hadoop>hdfs dfs -ls hdfs://ns1/home/data/test/fileNs1.txt
> -rw-r--r--   3 zhangxiping supergroup          0 2022-05-06 18:01 hdfs://ns1/home/data/test/fileNs1.txt
> hadoop>hdfs dfs -ls hdfs://127.0.0.1:40250/home/data/test
> Found 2 items
> -rw-r--r--   3 zhangxiping supergroup          0 2022-05-06 18:01 hdfs://127.0.0.1:40250/home/data/test/fileNs0.txt
> -rw-r--r--   3 zhangxiping supergroup          0 2022-05-06 18:01 hdfs://127.0.0.1:40250/home/data/test/fileNs1.txt
> hadoop>hdfs dfs -rm -r hdfs://127.0.0.1:40250/home/data/test
> rm: Failed to move to trash: hdfs://127.0.0.1:40250/home/data/test: rename destination parent /user/zhangxiping/.Trash/Current/home/data/test not found.
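A possible workaround, not mentioned in the report and assuming the same router
address and paths as the example above, is to bypass trash so the failing rename
into /user/<user>/.Trash is never attempted:

    # hedged sketch: delete through the router without moving the data to trash
    hdfs dfs -rm -r -skipTrash hdfs://127.0.0.1:40250/home/data/test
    # alternatively, pre-creating the trash parent may let the rename succeed,
    # depending on how /user is resolved by the router's mount table
    hdfs dfs -mkdir -p hdfs://127.0.0.1:40250/user/zhangxiping/.Trash/Current/home/data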






Re: [VOTE] Release Apache Hadoop 3.3.3

2022-05-06 Thread Ayush Saxena
Hmm, I see the artifacts ideally should have got overwritten by the new RC, but
they didn't. The reason seems to be that the staging path shared doesn't have any
jars…
That is why it was picking up the old jars. I think Steve needs to run mvn deploy
again…

Sent from my iPhone

> On 07-May-2022, at 7:12 AM, Chao Sun  wrote:
> 
> 
>> 
>> Chao can you use the one that Steve mentioned in the mail?
> 
> Hmm how do I do that? Typically after closing the RC in nexus the
> release bits will show up in
> https://repository.apache.org/content/repositories/staging/org/apache/hadoop
> and Spark build will be able to pick them up for testing. However in
> this case I don't see any 3.3.3 jars in the URL.
> 
>> On Fri, May 6, 2022 at 6:24 PM Ayush Saxena  wrote:
>> 
>> There were two 3.3.3 staged. The earlier one was with skipShade, the date
>> was also April 22; I archived that. Chao, can you use the one that Steve
>> mentioned in the mail?
>> 
>>> On Sat, 7 May 2022 at 06:18, Chao Sun  wrote:
>>> 
>>> Seems there are some issues with the shaded client as I was not able
>>> to compile Apache Spark with the RC
>>> (https://github.com/apache/spark/pull/36474). Looks like it's compiled
>>> with the `-DskipShade` option and the hadoop-client-api JAR doesn't
>>> contain any class:
>>> 
>>> ➜  hadoop-client-api jar tf 3.3.3/hadoop-client-api-3.3.3.jar
>>> META-INF/
>>> META-INF/MANIFEST.MF
>>> META-INF/NOTICE.txt
>>> META-INF/LICENSE.txt
>>> META-INF/maven/
>>> META-INF/maven/org.apache.hadoop/
>>> META-INF/maven/org.apache.hadoop/hadoop-client-api/
>>> META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.xml
>>> META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.properties
>>> 
>>> On Fri, May 6, 2022 at 4:24 PM Stack  wrote:
 
 +1 (binding)
 
  * Signature: ok
  * Checksum : passed
  * Rat check (1.8.0_191): passed
   - mvn clean apache-rat:check
  * Built from source (1.8.0_191): failed
   - mvn clean install  -DskipTests
   - mvn -fae --no-transfer-progress -DskipTests -Dmaven.javadoc.skip=true
 -Pnative -Drequire.openssl -Drequire.snappy -Drequire.valgrind
 -Drequire.zstd -Drequire.test.libhadoop clean install
  * Unit tests pass (1.8.0_191):
- HDFS Tests passed (Didn't run more than this).
 
 Deployed a ten node ha hdfs cluster with three namenodes and five
 journalnodes. Ran a ten node hbase (older version of 2.5 branch built
 against 3.3.2) against it. Tried a small verification job. Good. Ran a
 bigger job with mild chaos. All seems to be working properly (recoveries,
 logs look fine). Killed a namenode. Failover worked promptly. UIs look
 good. Poked at the hdfs cli. Seems good.
 
 S
 
 On Tue, May 3, 2022 at 4:24 AM Steve Loughran 
 wrote:
 
> I have put together a release candidate (rc0) for Hadoop 3.3.3
> 
> The RC is available at:
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/
> 
> The git tag is release-3.3.3-RC0, commit d37586cbda3
> 
> The maven artifacts are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1348/
> 
> You can find my public key at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> 
> Change log
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/CHANGELOG.md
> 
> Release notes
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/RELEASENOTES.md
> 
> There's a very small number of changes, primarily critical code/packaging
> issues and security fixes.
> 
> 
>   - The critical fixes which shipped in the 3.2.3 release.
>   -  CVEs in our code and dependencies
>   - Shaded client packaging issues.
>   - A switch from log4j to reload4j
> 
> 
> reload4j is an active fork of the log4j 1.2.17 library with the classes which
> contain CVEs removed. Even though hadoop never used those classes, they
> regularly raised alerts on security scans and concern from users. Switching
> to the forked project allows us to ship a secure logging framework. It 
> will
> complicate the builds of downstream maven/ivy/gradle projects which 
> exclude
> our log4j artifacts, as they need to cut the new dependency instead/as
> well.
> 
> See the release notes for details.
> 
> This is my first release through the new docker build process, do please
> validate artifact signing  to make sure it is good. I'll be trying 
> builds
> of downstream projects.
> 
> We know there are some outstanding issues with at least one library we are
> shipping (okhttp), but I don't want to hold this release up for it. If the
> docker based release process works smoothly enough we can do a followup
> security release in a few weeks.
> 
> Please try the release and vote. The vote will run for 5 days.
> 
> -Steve
> 
>>> 
>>> 

Re: [VOTE] Release Apache Hadoop 3.3.3

2022-05-06 Thread Chao Sun
> Chao can you use the one that Steve mentioned in the mail?

Hmm how do I do that? Typically after closing the RC in nexus the
release bits will show up in
https://repository.apache.org/content/repositories/staging/org/apache/hadoop
and Spark build will be able to pick them up for testing. However in
this case I don't see any 3.3.3 jars in the URL.

On Fri, May 6, 2022 at 6:24 PM Ayush Saxena  wrote:
>
> There were two 3.3.3 staged. The earlier one was with skipShade, the date was
> also April 22; I archived that. Chao, can you use the one that Steve mentioned
> in the mail?
>
> On Sat, 7 May 2022 at 06:18, Chao Sun  wrote:
>>
>> Seems there are some issues with the shaded client as I was not able
>> to compile Apache Spark with the RC
>> (https://github.com/apache/spark/pull/36474). Looks like it's compiled
>> with the `-DskipShade` option and the hadoop-client-api JAR doesn't
>> contain any class:
>>
>> ➜  hadoop-client-api jar tf 3.3.3/hadoop-client-api-3.3.3.jar
>> META-INF/
>> META-INF/MANIFEST.MF
>> META-INF/NOTICE.txt
>> META-INF/LICENSE.txt
>> META-INF/maven/
>> META-INF/maven/org.apache.hadoop/
>> META-INF/maven/org.apache.hadoop/hadoop-client-api/
>> META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.xml
>> META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.properties
>>
>> On Fri, May 6, 2022 at 4:24 PM Stack  wrote:
>> >
>> > +1 (binding)
>> >
>> >   * Signature: ok
>> >   * Checksum : passed
>> >   * Rat check (1.8.0_191): passed
>> >- mvn clean apache-rat:check
>> >   * Built from source (1.8.0_191): failed
>> >- mvn clean install  -DskipTests
>> >- mvn -fae --no-transfer-progress -DskipTests -Dmaven.javadoc.skip=true
>> > -Pnative -Drequire.openssl -Drequire.snappy -Drequire.valgrind
>> > -Drequire.zstd -Drequire.test.libhadoop clean install
>> >   * Unit tests pass (1.8.0_191):
>> > - HDFS Tests passed (Didn't run more than this).
>> >
>> > Deployed a ten node ha hdfs cluster with three namenodes and five
>> > journalnodes. Ran a ten node hbase (older version of 2.5 branch built
>> > against 3.3.2) against it. Tried a small verification job. Good. Ran a
>> > bigger job with mild chaos. All seems to be working properly (recoveries,
>> > logs look fine). Killed a namenode. Failover worked promptly. UIs look
>> > good. Poked at the hdfs cli. Seems good.
>> >
>> > S
>> >
>> > On Tue, May 3, 2022 at 4:24 AM Steve Loughran 
>> > wrote:
>> >
>> > > I have put together a release candidate (rc0) for Hadoop 3.3.3
>> > >
>> > > The RC is available at:
>> > > https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/
>> > >
>> > > The git tag is release-3.3.3-RC0, commit d37586cbda3
>> > >
>> > > The maven artifacts are staged at
>> > > https://repository.apache.org/content/repositories/orgapachehadoop-1348/
>> > >
>> > > You can find my public key at:
>> > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>> > >
>> > > Change log
>> > > https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/CHANGELOG.md
>> > >
>> > > Release notes
>> > > https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/RELEASENOTES.md
>> > >
>> > > There's a very small number of changes, primarily critical code/packaging
>> > > issues and security fixes.
>> > >
>> > >
>> > >- The critical fixes which shipped in the 3.2.3 release.
>> > >-  CVEs in our code and dependencies
>> > >- Shaded client packaging issues.
>> > >- A switch from log4j to reload4j
>> > >
>> > >
>> > > reload4j is an active fork of the log4j 1.2.17 library with the classes which
>> > > contain CVEs removed. Even though hadoop never used those classes, they
>> > > regularly raised alerts on security scans and concern from users. Switching
>> > > to the forked project allows us to ship a secure logging framework. It 
>> > > will
>> > > complicate the builds of downstream maven/ivy/gradle projects which 
>> > > exclude
>> > > our log4j artifacts, as they need to cut the new dependency instead/as
>> > > well.
>> > >
>> > > See the release notes for details.
>> > >
>> > > This is my first release through the new docker build process, do please
>> > > validate artifact signing  to make sure it is good. I'll be trying 
>> > > builds
>> > > of downstream projects.
>> > >
>> > > We know there are some outstanding issues with at least one library we 
>> > > are
>> > > shipping (okhttp), but I don't want to hold this release up for it. If 
>> > > the
>> > > docker based release process works smoothly enough we can do a followup
>> > > security release in a few weeks.
>> > >
>> > > Please try the release and vote. The vote will run for 5 days.
>> > >
>> > > -Steve
>> > >
>>
>> -
>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>>


Re: [VOTE] Release Apache Hadoop 3.3.3

2022-05-06 Thread Ayush Saxena
There were two 3.3.3 staged. The earlier one was with skipShade, the date
was also April 22; I archived that. Chao, can you use the one that Steve
mentioned in the mail?

On Sat, 7 May 2022 at 06:18, Chao Sun  wrote:

> Seems there are some issues with the shaded client as I was not able
> to compile Apache Spark with the RC
> (https://github.com/apache/spark/pull/36474). Looks like it's compiled
> with the `-DskipShade` option and the hadoop-client-api JAR doesn't
> contain any class:
>
> ➜  hadoop-client-api jar tf 3.3.3/hadoop-client-api-3.3.3.jar
> META-INF/
> META-INF/MANIFEST.MF
> META-INF/NOTICE.txt
> META-INF/LICENSE.txt
> META-INF/maven/
> META-INF/maven/org.apache.hadoop/
> META-INF/maven/org.apache.hadoop/hadoop-client-api/
> META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.xml
> META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.properties
>
> On Fri, May 6, 2022 at 4:24 PM Stack  wrote:
> >
> > +1 (binding)
> >
> >   * Signature: ok
> >   * Checksum : passed
> >   * Rat check (1.8.0_191): passed
> >- mvn clean apache-rat:check
> >   * Built from source (1.8.0_191): failed
> >- mvn clean install  -DskipTests
> >- mvn -fae --no-transfer-progress -DskipTests
> -Dmaven.javadoc.skip=true
> > -Pnative -Drequire.openssl -Drequire.snappy -Drequire.valgrind
> > -Drequire.zstd -Drequire.test.libhadoop clean install
> >   * Unit tests pass (1.8.0_191):
> > - HDFS Tests passed (Didn't run more than this).
> >
> > Deployed a ten node ha hdfs cluster with three namenodes and five
> > journalnodes. Ran a ten node hbase (older version of 2.5 branch built
> > against 3.3.2) against it. Tried a small verification job. Good. Ran a
> > bigger job with mild chaos. All seems to be working properly (recoveries,
> > logs look fine). Killed a namenode. Failover worked promptly. UIs look
> > good. Poked at the hdfs cli. Seems good.
> >
> > S
> >
> > On Tue, May 3, 2022 at 4:24 AM Steve Loughran
> 
> > wrote:
> >
> > > I have put together a release candidate (rc0) for Hadoop 3.3.3
> > >
> > > The RC is available at:
> > > https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/
> > >
> > > The git tag is release-3.3.3-RC0, commit d37586cbda3
> > >
> > > The maven artifacts are staged at
> > >
> https://repository.apache.org/content/repositories/orgapachehadoop-1348/
> > >
> > > You can find my public key at:
> > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > >
> > > Change log
> > > https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/CHANGELOG.md
> > >
> > > Release notes
> > >
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/RELEASENOTES.md
> > >
> > > There's a very small number of changes, primarily critical
> code/packaging
> > > issues and security fixes.
> > >
> > >
> > >- The critical fixes which shipped in the 3.2.3 release.
> > >-  CVEs in our code and dependencies
> > >- Shaded client packaging issues.
> > >- A switch from log4j to reload4j
> > >
> > >
> > > reload4j is an active fork of the log4j 1.2.17 library with the classes which
> > > contain CVEs removed. Even though hadoop never used those classes, they
> > > regularly raised alerts on security scans and concern from users. Switching
> > > to the forked project allows us to ship a secure logging framework. It
> will
> > > complicate the builds of downstream maven/ivy/gradle projects which
> exclude
> > > our log4j artifacts, as they need to cut the new dependency instead/as
> > > well.
> > >
> > > See the release notes for details.
> > >
> > > This is my first release through the new docker build process, do
> please
> > > validate artifact signing  to make sure it is good. I'll be trying
> builds
> > > of downstream projects.
> > >
> > > We know there are some outstanding issues with at least one library we
> are
> > > shipping (okhttp), but I don't want to hold this release up for it. If
> the
> > > docker based release process works smoothly enough we can do a followup
> > > security release in a few weeks.
> > >
> > > Please try the release and vote. The vote will run for 5 days.
> > >
> > > -Steve
> > >
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


Re: [VOTE] Release Apache Hadoop 3.3.3

2022-05-06 Thread Chao Sun
Seems there are some issues with the shaded client as I was not able
to compile Apache Spark with the RC
(https://github.com/apache/spark/pull/36474). Looks like it's compiled
with the `-DskipShade` option and the hadoop-client-api JAR doesn't
contain any class:

➜  hadoop-client-api jar tf 3.3.3/hadoop-client-api-3.3.3.jar
META-INF/
META-INF/MANIFEST.MF
META-INF/NOTICE.txt
META-INF/LICENSE.txt
META-INF/maven/
META-INF/maven/org.apache.hadoop/
META-INF/maven/org.apache.hadoop/hadoop-client-api/
META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.xml
META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.properties
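A quick check (a hedged sketch, reusing the 3.3.3 path from the listing above) to
tell a properly shaded hadoop-client-api jar from a -DskipShade build is to count
the bundled classes; a shaded jar carries thousands of .class entries, a skipShade
one carries none:

    # count class entries in the client-api jar; 0 means the shade step was skipped
    jar tf 3.3.3/hadoop-client-api-3.3.3.jar | grep -c '\.class$'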

On Fri, May 6, 2022 at 4:24 PM Stack  wrote:
>
> +1 (binding)
>
>   * Signature: ok
>   * Checksum : passed
>   * Rat check (1.8.0_191): passed
>- mvn clean apache-rat:check
>   * Built from source (1.8.0_191): failed
>- mvn clean install  -DskipTests
>- mvn -fae --no-transfer-progress -DskipTests -Dmaven.javadoc.skip=true
> -Pnative -Drequire.openssl -Drequire.snappy -Drequire.valgrind
> -Drequire.zstd -Drequire.test.libhadoop clean install
>   * Unit tests pass (1.8.0_191):
> - HDFS Tests passed (Didn't run more than this).
>
> Deployed a ten node ha hdfs cluster with three namenodes and five
> journalnodes. Ran a ten node hbase (older version of 2.5 branch built
> against 3.3.2) against it. Tried a small verification job. Good. Ran a
> bigger job with mild chaos. All seems to be working properly (recoveries,
> logs look fine). Killed a namenode. Failover worked promptly. UIs look
> good. Poked at the hdfs cli. Seems good.
>
> S
>
> On Tue, May 3, 2022 at 4:24 AM Steve Loughran 
> wrote:
>
> > I have put together a release candidate (rc0) for Hadoop 3.3.3
> >
> > The RC is available at:
> > https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/
> >
> > The git tag is release-3.3.3-RC0, commit d37586cbda3
> >
> > The maven artifacts are staged at
> > https://repository.apache.org/content/repositories/orgapachehadoop-1348/
> >
> > You can find my public key at:
> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >
> > Change log
> > https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/CHANGELOG.md
> >
> > Release notes
> > https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/RELEASENOTES.md
> >
> > There's a very small number of changes, primarily critical code/packaging
> > issues and security fixes.
> >
> >
> >- The critical fixes which shipped in the 3.2.3 release.
> >-  CVEs in our code and dependencies
> >- Shaded client packaging issues.
> >- A switch from log4j to reload4j
> >
> >
> > reload4j is an active fork of the log4j 1.2.17 library with the classes which
> > contain CVEs removed. Even though hadoop never used those classes, they
> > regularly raised alerts on security scans and concern from users. Switching
> > to the forked project allows us to ship a secure logging framework. It will
> > complicate the builds of downstream maven/ivy/gradle projects which exclude
> > our log4j artifacts, as they need to cut the new dependency instead/as
> > well.
> >
> > See the release notes for details.
> >
> > This is my first release through the new docker build process, do please
> > validate artifact signing  to make sure it is good. I'll be trying builds
> > of downstream projects.
> >
> > We know there are some outstanding issues with at least one library we are
> > shipping (okhttp), but I don't want to hold this release up for it. If the
> > docker based release process works smoothly enough we can do a followup
> > security release in a few weeks.
> >
> > Please try the release and vote. The vote will run for 5 days.
> >
> > -Steve
> >




[jira] [Resolved] (HDFS-16570) RBF: The router using MultipleDestinationMountTableResolver remove Multiple subcluster data under the mount point failed

2022-05-06 Thread Xiping Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiping Zhang resolved HDFS-16570.
-
Resolution: Implemented

> RBF: The router using MultipleDestinationMountTableResolver remove Multiple 
> subcluster data under the mount point failed
> 
>
> Key: HDFS-16570
> URL: https://issues.apache.org/jira/browse/HDFS-16570
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Xiping Zhang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Please look at the following example :
> hadoop>hdfs dfsrouteradmin -add /home/data ns0,ns1 /home/data -order RANDOM
> Successfully removed mount point /home/data
> hadoop>hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source                    Destinations              Owner                     Group                     Mode       Quota/Usage
> /home/data                ns0->/home/data,ns1->/home/data  zhangxiping               Administrators            rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> hadoop>hdfs dfs -touch hdfs://ns0/home/data/test/fileNs0.txt
> hadoop>hdfs dfs -touch hdfs://ns1/home/data/test/fileNs1.txt
> hadoop>hdfs dfs -ls hdfs://ns0/home/data/test/fileNs0.txt
> -rw-r--r--   3 zhangxiping supergroup          0 2022-05-06 18:01 hdfs://ns0/home/data/test/fileNs0.txt
> hadoop>hdfs dfs -ls hdfs://ns1/home/data/test/fileNs1.txt
> -rw-r--r--   3 zhangxiping supergroup          0 2022-05-06 18:01 hdfs://ns1/home/data/test/fileNs1.txt
> hadoop>hdfs dfs -ls hdfs://127.0.0.1:40250/home/data/test
> Found 2 items
> -rw-r--r--   3 zhangxiping supergroup          0 2022-05-06 18:01 hdfs://127.0.0.1:40250/home/data/test/fileNs0.txt
> -rw-r--r--   3 zhangxiping supergroup          0 2022-05-06 18:01 hdfs://127.0.0.1:40250/home/data/test/fileNs1.txt
> hadoop>hdfs dfs -rm -r hdfs://127.0.0.1:40250/home/data/test
> rm: Failed to move to trash: hdfs://127.0.0.1:40250/home/data/test: rename destination parent /user/zhangxiping/.Trash/Current/home/data/test not found.






Re: [VOTE] Release Apache Hadoop 3.3.3

2022-05-06 Thread Stack
+1 (binding)

  * Signature: ok
  * Checksum : passed
  * Rat check (1.8.0_191): passed
   - mvn clean apache-rat:check
  * Built from source (1.8.0_191): failed
   - mvn clean install  -DskipTests
   - mvn -fae --no-transfer-progress -DskipTests -Dmaven.javadoc.skip=true
-Pnative -Drequire.openssl -Drequire.snappy -Drequire.valgrind
-Drequire.zstd -Drequire.test.libhadoop clean install
  * Unit tests pass (1.8.0_191):
- HDFS Tests passed (Didn't run more than this).

Deployed a ten node ha hdfs cluster with three namenodes and five
journalnodes. Ran a ten node hbase (older version of 2.5 branch built
against 3.3.2) against it. Tried a small verification job. Good. Ran a
bigger job with mild chaos. All seems to be working properly (recoveries,
logs look fine). Killed a namenode. Failover worked promptly. UIs look
good. Poked at the hdfs cli. Seems good.

S

On Tue, May 3, 2022 at 4:24 AM Steve Loughran 
wrote:

> I have put together a release candidate (rc0) for Hadoop 3.3.3
>
> The RC is available at:
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/
>
> The git tag is release-3.3.3-RC0, commit d37586cbda3
>
> The maven artifacts are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1348/
>
> You can find my public key at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Change log
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/CHANGELOG.md
>
> Release notes
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/RELEASENOTES.md
>
> There's a very small number of changes, primarily critical code/packaging
> issues and security fixes.
>
>
>- The critical fixes which shipped in the 3.2.3 release.
>-  CVEs in our code and dependencies
>- Shaded client packaging issues.
>- A switch from log4j to reload4j
>
>
> reload4j is an active fork of the log4j 1.2.17 library with the classes which
> contain CVEs removed. Even though hadoop never used those classes, they
> regularly raised alerts on security scans and concern from users. Switching
> to the forked project allows us to ship a secure logging framework. It will
> complicate the builds of downstream maven/ivy/gradle projects which exclude
> our log4j artifacts, as they need to cut the new dependency instead/as
> well.
>
> See the release notes for details.
>
> This is my first release through the new docker build process, do please
> validate artifact signing  to make sure it is good. I'll be trying builds
> of downstream projects.
>
> We know there are some outstanding issues with at least one library we are
> shipping (okhttp), but I don't want to hold this release up for it. If the
> docker based release process works smoothly enough we can do a followup
> security release in a few weeks.
>
> Please try the release and vote. The vote will run for 5 days.
>
> -Steve
>


Re: [VOTE] Release Apache Hadoop 3.3.3

2022-05-06 Thread Ayush Saxena
+1,
* Built from source
* Successful native build on Ubuntu 18.04
* Verified Checksums
(CHANGELOG.md,RELEASENOTES.md,hadoop-3.3.3-rat.txt,hadoop-3.3.3-site.tar.gz,hadoop-3.3.3-src.tar.gz,hadoop-3.3.3.tar.gz)
* Successful RAT check
* Ran some basic HDFS shell commands
* Ran some basic YARN shell commands
* Browsed through UI (NN ,DN, RM, NM & JHS)
* Tried some commands on Hive using Hive built on hive-master
* Verified Signature: Says Good Signature but "This key is not certified
with a trusted signature!" (a verification sketch follows this list)
* Ran some MR example jobs(TeraGen, TeraSort, TeraValidate, WordCount,
WordMean & Pi)
* Version & commit hash seems correct in UI as well as in hadoop version
output.
* Browsed through the ChangeLog & Release Notes (One place mentions hadoop
3.4.0 though, but we can survive I suppose)
* Browsed through the documentation.
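
A hedged sketch of the signature verification mentioned above, assuming the KEYS
file from the thread and the .asc signature shipped next to the RC tarball:

    # import the Hadoop committers' public keys, then verify the RC tarball
    curl -O https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
    gpg --import KEYS
    gpg --verify hadoop-3.3.3.tar.gz.asc hadoop-3.3.3.tar.gz
    # "not certified with a trusted signature" only means the signing key has not
    # been signed/trusted locally; the signature itself is still reported as good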

Thanx Steve for driving the release, Good Luck!!!

-Ayush



On Tue, 3 May 2022 at 16:54, Steve Loughran 
wrote:

> I have put together a release candidate (rc0) for Hadoop 3.3.3
>
> The RC is available at:
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/
>
> The git tag is release-3.3.3-RC0, commit d37586cbda3
>
> The maven artifacts are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1348/
>
> You can find my public key at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Change log
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/CHANGELOG.md
>
> Release notes
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/RELEASENOTES.md
>
> There's a very small number of changes, primarily critical code/packaging
> issues and security fixes.
>
>
>- The critical fixes which shipped in the 3.2.3 release.
>-  CVEs in our code and dependencies
>- Shaded client packaging issues.
>- A switch from log4j to reload4j
>
>
> reload4j is an active fork of the log4j 1.2.17 library with the classes which
> contain CVEs removed. Even though hadoop never used those classes, they
> regularly raised alerts on security scans and concern from users. Switching
> to the forked project allows us to ship a secure logging framework. It will
> complicate the builds of downstream maven/ivy/gradle projects which exclude
> our log4j artifacts, as they need to cut the new dependency instead/as
> well.
>
> See the release notes for details.
>
> This is my first release through the new docker build process, do please
> validate artifact signing  to make sure it is good. I'll be trying builds
> of downstream projects.
>
> We know there are some outstanding issues with at least one library we are
> shipping (okhttp), but I don't want to hold this release up for it. If the
> docker based release process works smoothly enough we can do a followup
> security release in a few weeks.
>
> Please try the release and vote. The vote will run for 5 days.
>
> -Steve
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2022-05-06 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/861/

No changes




-1 overall


The following subsystems voted -1:
blanks pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.crypto.key.kms.server.TestKMSWithZK 
   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.cli.TestHDFSCLI 
   
hadoop.yarn.server.timeline.security.TestTimelineAuthenticationFilterForV1 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServicesWithSSL 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   
hadoop.yarn.server.resourcemanager.metrics.TestCombinedSystemMetricsPublisher 
   
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesDelegationTokenAuthentication
 
   hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher 
   hadoop.yarn.server.resourcemanager.webapp.TestRMWebappAuthentication 
   hadoop.yarn.webapp.TestRMWithXFSFilter 
   hadoop.yarn.server.resourcemanager.TestRMHA 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.yarn.client.TestResourceManagerAdministrationProtocolPBClientImpl 
   hadoop.yarn.client.TestGetGroups 
   hadoop.mapred.TestLocalDistributedCacheManager 
   hadoop.hdfs.server.federation.security.TestRouterSecurityManager 
   hadoop.yarn.server.router.webapp.TestRouterWebServicesREST 
   hadoop.yarn.sls.nodemanager.TestNMSimulator 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.yarn.sls.TestSLSGenericSynth 
   hadoop.yarn.sls.TestSLSDagAMSimulator 
   hadoop.yarn.sls.TestSLSStreamAMSynth 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
   hadoop.yarn.sls.TestReservationSystemInvariants 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/861/artifact/out/results-compile-cc-root.txt
 [96K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/861/artifact/out/results-compile-javac-root.txt
 [340K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/861/artifact/out/blanks-eol.txt
 [13M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/861/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/861/artifact/out/results-checkstyle-root.txt
 [14M]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/861/artifact/out/results-pathlen.txt
 [16K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/861/artifact/out/results-pylint.txt
 [20K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/861/artifact/out/results-shellcheck.txt
 [28K]

   xml:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/861/artifact/out/xml.txt
 [24K]

   javadoc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/861/artifact/out/results-javadoc-javadoc-root.txt
 [400K]

   unit:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/861/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt
 [428K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/861/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 [576K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/861/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
 [96K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/861/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 [936K]
  

Re: [VOTE] Release Apache Hadoop 3.3.3

2022-05-06 Thread Viraj Jasani
+1 (non-binding),

With a minor change in hadoop-vote,

* Signature: ok
* Checksum : ok
* Rat check (1.8.0_301): ok
 - mvn clean apache-rat:check
* Built from source (1.8.0_301): ok
 - mvn clean install  -DskipTests
* Built tar from source (1.8.0_301): ok
 - mvn clean package  -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true

HDFS and MapReduce functional testing looks good.

As per PR #4268, except for a few flakes, TestDistributedShell and TestCsiClient
are consistently failing.


On Tue, May 3, 2022 at 4:24 AM Steve Loughran 
wrote:

> I have put together a release candidate (rc0) for Hadoop 3.3.3
>
> The RC is available at:
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/
>
> The git tag is release-3.3.3-RC0, commit d37586cbda3
>
> The maven artifacts are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1348/
>
> You can find my public key at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Change log
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/CHANGELOG.md
>
> Release notes
> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/RELEASENOTES.md
>
> There's a very small number of changes, primarily critical code/packaging
> issues and security fixes.
>
>
>- The critical fixes which shipped in the 3.2.3 release.
>-  CVEs in our code and dependencies
>- Shaded client packaging issues.
>- A switch from log4j to reload4j
>
>
> reload4j is an active fork of the log4j 1.2.17 library with the classes which
> contain CVEs removed. Even though hadoop never used those classes, they
> regularly raised alerts on security scans and concern from users. Switching
> to the forked project allows us to ship a secure logging framework. It will
> complicate the builds of downstream maven/ivy/gradle projects which exclude
> our log4j artifacts, as they need to cut the new dependency instead/as
> well.
>
> See the release notes for details.
>
> This is my first release through the new docker build process, do please
> validate artifact signing  to make sure it is good. I'll be trying builds
> of downstream projects.
>
> We know there are some outstanding issues with at least one library we are
> shipping (okhttp), but I don't want to hold this release up for it. If the
> docker based release process works smoothly enough we can do a followup
> security release in a few weeks.
>
> Please try the release and vote. The vote will run for 5 days.
>
> -Steve
>


[jira] [Resolved] (HDFS-16520) Improve EC pread: avoid potential reading whole block

2022-05-06 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-16520.

Resolution: Fixed

Merged the PR and cherrypicked into branch-3.3.

Thanks!

> Improve EC pread: avoid potential reading whole block
> -
>
> Key: HDFS-16520
> URL: https://issues.apache.org/jira/browse/HDFS-16520
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient, ec, erasure-coding
>Affects Versions: 3.3.1, 3.3.2
>Reporter: daimin
>Assignee: daimin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> HDFS client 'pread' means 'positioned read'; this kind of read only needs a
> range of data instead of reading the whole file/block. By using
> BlockReaderFactory#setLength, the client tells the datanode the block length to
> be read from disk and sent to the client.
> For an EC file, the block length to read is not set well; by default
> 'block.getBlockSize() - offsetInBlock' is used for both pread and sread. Thus the
> datanode reads much more data and sends it to the client, and aborts when the
> client closes the connection. This wastes a lot of resources.






[jira] [Created] (HDFS-16571) ABFS: Two BlobCreated get triggered for writing one ABFS file

2022-05-06 Thread Bryan Chen (Jira)
Bryan Chen created HDFS-16571:
-

 Summary: ABFS: Two BlobCreated get triggered for writing one ABFS 
file
 Key: HDFS-16571
 URL: https://issues.apache.org/jira/browse/HDFS-16571
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs/azure
Affects Versions: 3.1.1
Reporter: Bryan Chen


Using the new ABFS driver to write a file on an ADLS gen storage account triggers
2 BlobCreated events in the Azure backend while we are expecting one.

Here is an example code snippet for creating a file (in Scala):

import java.io._
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path, RemoteIterator}

val conf = new Configuration()
val path = new Path("abfss://contai...@some-adls-account.dfs.core.windows.net/test.txt")
val fs = path.getFileSystem(conf)
val bs = new BufferedOutputStream(fs.create(path, true))
bs.write("test".getBytes("UTF-8"))
bs.close()






Re: [DISCUSS] Enabling all platform builds in CI for all Hadoop PRs

2022-05-06 Thread Ayush Saxena
From a functional point of view it does make sense to validate all the
platforms as part of the builds, but the pre-commit build time is now no
longer a small thing. In the past year or maybe two, we have already more
than doubled it compared to what it was before: if someone has a change in
HDFS which includes both hdfs-client & hadoop-hdfs, it takes more than 5
hours, where long back it was around 2 hours.
With the current state, I don't think we should just go and add these extra
overheads. Having them as part of the nightly builds does make sense for
now.

In future, if we feel there is a strong need for this and we start to see
very frequent failures on some other platforms, and we are left with no
other option but to integrate it into our pre-commit jobs, we can explore
running these build phases in parallel, along with trying to run other
phases in parallel as well (for example, the JDK-8 & JDK-11
compilation/javadoc builds), and maybe explore other opportunities to
compensate for this time.

For now let's just integrate it into our nightly builds only and circle
back here in the future if need be.

-Ayush

On Fri, 6 May 2022 at 20:44, Wei-Chiu Chuang  wrote:

> Running builds for all platforms for each and every PR seems too excessive.
>
> How about doing all platform builds in the nightly jobs?
>
> On Fri, May 6, 2022 at 8:02 AM Steve Loughran  >
> wrote:
>
> > I'm not enthusiastic here, as it not only makes the builds slower, it
> > reduces the # of builds we can get through a day.
> >
> > one thing I am wondering is: could we remove java8 support on some branches?
> >
> > make branch 3.3.2.x (i.e. the 3.3.3 release) the last java 8 build, and this
> > summer's branch-3.3 release (which I'd rebadge 3.4) would ship as java 11
> > only.
> > that would cut build and test time for those trunk PRs in half...after which
> > the prospect of building on more than one platform becomes more viable.
> >
> > On Thu, 5 May 2022 at 15:34, Gautham Banasandra 
> > wrote:
> >
> > > Hi Hadoop devs,
> > >
> > > Last week, there was a Hadoop build failure on Debian 10 caused by
> > > https://github.com/apache/hadoop/pull/3988. In dev-support/jenkins.sh,
> > > there's the capability to build and test Hadoop across the supported
> > > platforms. Currently, we're limiting this only for those PRs having
> only
> > > C/C++ changes[1], since C/C++ changes are more likely to cause
> > > cross-platform build issues and bypassing the full platform build for
> non
> > > C/C++ PRs would save a great deal of CI time. However, the build
> failure
> > > caused by PR #3988 motivates me to enable the capability to build and
> > > test Hadoop for all the supported platforms for ALL the PRs.
> > >
> > > While this may cause longer CI run duration for each PR, it would
> > > immensely minimize the risk of breaking Hadoop across platforms and
> > > saves us a lot of debugging time. Kindly post your opinion regarding
> this
> > > and I'll move to enable this capability for all PRs if the response is
> > > sufficiently positive.
> > >
> > > [1] =
> > >
> > >
> >
> https://github.com/apache/hadoop/blob/bccf2f3ef4c8f09f010656f9061a4e323daf132b/dev-support/jenkins.sh#L97-L103
> > >
> > >
> > > Thanks,
> > > --Gautham
> > >
> >
>


Re: [DISCUSS] Enabling all platform builds in CI for all Hadoop PRs

2022-05-06 Thread Wei-Chiu Chuang
Running builds for all platforms for each and every PR seems too excessive.

How about doing all platform builds in the nightly jobs?

On Fri, May 6, 2022 at 8:02 AM Steve Loughran 
wrote:

> I'm not enthusiastic here, as it not only makes the builds slower, it
> reduces the # of builds we can get through a day.
>
> one thing I am wondering is: could we remove java8 support on some branches?
>
> make branch 3.3.2.x (i.e. the 3.3.3 release) the last java 8 build, and this
> summer's branch-3.3 release (which I'd rebadge 3.4) would ship as java 11
> only.
> that would cut build and test time for those trunk PRs in half...after which
> the prospect of building on more than one platform becomes more viable.
>
> On Thu, 5 May 2022 at 15:34, Gautham Banasandra 
> wrote:
>
> > Hi Hadoop devs,
> >
> > Last week, there was a Hadoop build failure on Debian 10 caused by
> > https://github.com/apache/hadoop/pull/3988. In dev-support/jenkins.sh,
> > there's the capability to build and test Hadoop across the supported
> > platforms. Currently, we're limiting this only for those PRs having only
> > C/C++ changes[1], since C/C++ changes are more likely to cause
> > cross-platform build issues and bypassing the full platform build for non
> > C/C++ PRs would save a great deal of CI time. However, the build failure
> > caused by PR #3988 motivates me to enable the capability to build and
> > test Hadoop for all the supported platforms for ALL the PRs.
> >
> > While this may cause longer CI run duration for each PR, it would
> > immensely minimize the risk of breaking Hadoop across platforms and
> > saves us a lot of debugging time. Kindly post your opinion regarding this
> > and I'll move to enable this capability for all PRs if the response is
> > sufficiently positive.
> >
> > [1] =
> >
> >
> https://github.com/apache/hadoop/blob/bccf2f3ef4c8f09f010656f9061a4e323daf132b/dev-support/jenkins.sh#L97-L103
> >
> >
> > Thanks,
> > --Gautham
> >
>


Re: [DISCUSS] Enabling all platform builds in CI for all Hadoop PRs

2022-05-06 Thread Steve Loughran
I'm not enthusiastic here, as it not only makes the builds slower, it
reduces the # of builds we can get through a day.

one thing I am wondering is: could we remove java8 support on some branches?

make branch 3.3.2.x (i.e. the 3.3.3 release) the last java 8 build, and this
summer's branch-3.3 release (which I'd rebadge 3.4) would ship as java 11
only.
that would cut build and test time for those trunk PRs in half...after which
the prospect of building on more than one platform becomes more viable.

On Thu, 5 May 2022 at 15:34, Gautham Banasandra  wrote:

> Hi Hadoop devs,
>
> Last week, there was a Hadoop build failure on Debian 10 caused by
> https://github.com/apache/hadoop/pull/3988. In dev-support/jenkins.sh,
> there's the capability to build and test Hadoop across the supported
> platforms. Currently, we're limiting this only for those PRs having only
> C/C++ changes[1], since C/C++ changes are more likely to cause
> cross-platform build issues and bypassing the full platform build for non
> C/C++ PRs would save a great deal of CI time. However, the build failure
> caused by PR #3988 motivates me to enable the capability to build and
> test Hadoop for all the supported platforms for ALL the PRs.
>
> While this may cause longer CI run duration for each PR, it would
> immensely minimize the risk of breaking Hadoop across platforms and
> saves us a lot of debugging time. Kindly post your opinion regarding this
> and I'll move to enable this capability for all PRs if the response is
> sufficiently positive.
>
> [1] =
>
> https://github.com/apache/hadoop/blob/bccf2f3ef4c8f09f010656f9061a4e323daf132b/dev-support/jenkins.sh#L97-L103
>
>
> Thanks,
> --Gautham
>
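
A rough, hedged sketch of the kind of gate described in [1]; the actual logic lives
in dev-support/jenkins.sh and may differ:

    # decide whether a PR touches native code and so needs the full platform matrix
    if git diff --name-only origin/trunk...HEAD | grep -qE '\.(c|cc|cpp|h|hh)$|CMakeLists\.txt'; then
      echo "C/C++ changes detected: build on all supported platforms"
    else
      echo "no native changes: a single-platform build is enough"
    fi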


Re: [VOTE] Release Apache Hadoop 3.3.3

2022-05-06 Thread Masatake Iwasaki

+1 (binding)

* verified signature and checksum of the source tarball.
* built the source code on Rocky Linux 8 (x86_64) and OpenJDK 8
  by `mvn install -DskipTests -Pnative -Pdist`.
* launched pseudo distributed cluster with Kerberos security enabled and ran 
sample MR jobs.
* launched HA enabled 3-nodes docker cluster and ran sample MR jobs.
* launched pseudo distributed cluster and `spark-shell --master yarn`
  with spark-3.2.1-bin-without-hadoop and ran some tutorial code
  (a sketch of this wiring follows this list).
* built site documentation by `mvn site site:stage -Preleasedocs` and skimmed 
the contents.
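
For the "without-hadoop" Spark build mentioned above, a hedged sketch of the usual
wiring (the install paths here are assumptions, not the actual setup):

    # point Spark's hadoop-free build at the locally built Hadoop 3.3.3
    export HADOOP_HOME=/opt/hadoop-3.3.3
    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
    export SPARK_DIST_CLASSPATH=$("$HADOOP_HOME/bin/hadoop" classpath)
    ./bin/spark-shell --master yarn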

Thanks,
Masatake Iwasaki

On 2022/05/03 20:18, Steve Loughran wrote:

I have put together a release candidate (rc0) for Hadoop 3.3.3

The RC is available at:
https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/

The git tag is release-3.3.3-RC0, commit d37586cbda3

The maven artifacts are staged at
https://repository.apache.org/content/repositories/orgapachehadoop-1348/

You can find my public key at:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

Change log
https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/CHANGELOG.md

Release notes
https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/RELEASENOTES.md

There's a very small number of changes, primarily critical code/packaging
issues and security fixes.


- The critical fixes which shipped in the 3.2.3 release.
-  CVEs in our code and dependencies
- Shaded client packaging issues.
- A switch from log4j to reload4j


reload4j is an active fork of the log4j 1.2.17 library with the classes which
contain CVEs removed. Even though hadoop never used those classes, they
regularly raised alerts on security scans and concern from users. Switching
to the forked project allows us to ship a secure logging framework. It will
complicate the builds of downstream maven/ivy/gradle projects which exclude
our log4j artifacts, as they need to cut the new dependency instead/as well.

See the release notes for details.

This is my first release through the new docker build process, do please
validate artifact signing  to make sure it is good. I'll be trying builds
of downstream projects.

We know there are some outstanding issues with at least one library we are
shipping (okhttp), but I don't want to hold this release up for it. If the
docker based release process works smoothly enough we can do a followup
security release in a few weeks.

Please try the release and vote. The vote will run for 5 days.

-Steve






[jira] [Created] (HDFS-16570) RBF: The router using MultipleDestinationMountTableResolver remove Multiple subcluster data under the mount point failed

2022-05-06 Thread Xiping Zhang (Jira)
Xiping Zhang created HDFS-16570:
---

 Summary: RBF: The router using 
MultipleDestinationMountTableResolver remove Multiple subcluster data under the 
mount point failed
 Key: HDFS-16570
 URL: https://issues.apache.org/jira/browse/HDFS-16570
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: rbf
Reporter: Xiping Zhang


hadoop>hdfs dfsrouteradmin -add /home/data ns0,ns1 /home/data -order RANDOM
Successfully removed mount point /home/data

hadoop>hdfs dfsrouteradmin -ls
Mount Table Entries:
Source                    Destinations              Owner                     
Group                     Mode       Quota/Usage
/home/data                ns0->/home/data,ns1->/home/data  zhangxiping          
     Administrators            rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]

hadoop>hdfs dfs -touch hdfs://ns0/home/data/test/fileNs0.txt

hadoop>hdfs dfs -touch hdfs://ns1/home/data/test/fileNs1.txt

hadoop>hdfs dfs -ls hdfs://ns0/home/data/test/fileNs0.txt
-rw-r--r--   3 zhangxiping supergroup          0 2022-05-06 18:01 
hdfs://ns0/home/data/test/fileNs0.txt

hadoop>hdfs dfs -ls hdfs://ns1/home/data/test/fileNs1.txt
-rw-r--r--   3 zhangxiping supergroup          0 2022-05-06 18:01 
hdfs://ns1/home/data/test/fileNs1.txt

hadoop>hdfs dfs -ls hdfs://127.0.0.1:40250/home/data/test
Found 2 items
-rw-r--r--   3 zhangxiping supergroup          0 2022-05-06 18:01 
hdfs://127.0.0.1:40250/home/data/test/fileNs0.txt
-rw-r--r--   3 zhangxiping supergroup          0 2022-05-06 18:01 
hdfs://127.0.0.1:40250/home/data/test/fileNs1.txt

hadoop>hdfs dfs -rm -r hdfs://127.0.0.1:40250/home/data/test
rm: Failed to move to trash: hdfs://127.0.0.1:40250/home/data/test: rename 
destination parent /user/zhangxiping/.Trash/Current/home/data/test not found.






Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2022-05-06 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/653/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.namenode.ha.TestHAAppend 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   
hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.mapreduce.lib.input.TestLineRecordReader 
   hadoop.mapred.TestLineRecordReader 
   hadoop.mapreduce.v2.app.TestRuntimeEstimators 
   hadoop.tools.TestDistCpSystem 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/653/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/653/artifact/out/diff-compile-javac-root.txt
  [472K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/653/artifact/out/diff-checkstyle-root.txt
  [14M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/653/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/653/artifact/out/patch-mvnsite-root.txt
  [560K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/653/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/653/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/653/artifact/out/diff-patch-shellcheck.txt
  [72K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/653/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/653/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/653/artifact/out/patch-javadoc-root.txt
  [40K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/653/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [216K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/653/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [428K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/653/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/653/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/653/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/653/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [112K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/653/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/653/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt
  [44K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/653/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt
  [24K]