[jira] [Resolved] (HDFS-15590) namenode fails to start when ordered snapshot deletion feature is disabled

2020-09-24 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee resolved HDFS-15590.

Resolution: Fixed

> namenode fails to start when ordered snapshot deletion feature is disabled
> --
>
> Key: HDFS-15590
> URL: https://issues.apache.org/jira/browse/HDFS-15590
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {code:java}
> 1. Enabled ordered deletion snapshot feature.
> 2. Created snapshottable directory - /user/hrt_6/atrr_dir1
> 3. Created snapshots s0, s1, s2.
> 4. Deleted snapshot s2
> 5. Delete snapshots s0, s1, s2 again
> 6. Disable ordered deletion snapshot feature
> 7. Restart Namenode
> Failed to start namenode.
> org.apache.hadoop.hdfs.protocol.SnapshotException: Cannot delete snapshot s2 from path /user/hrt_6/atrr_dir2: the snapshot does not exist.
>   at org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.removeSnapshot(DirectorySnapshottableFeature.java:237)
>   at org.apache.hadoop.hdfs.server.namenode.INodeDirectory.removeSnapshot(INodeDirectory.java:293)
>   at org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:510)
>   at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:819)
>   at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:287)
>   at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:182)
>   at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:912)
>   at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:760)
>   at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1164)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:755)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:646)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:717)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:960)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:933)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1670)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1737)
> {code}
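
For context, here is a minimal repro sketch against MiniDFSCluster. It is an
assumption-laden illustration, not taken from the Jira: it assumes the feature
toggle is dfs.namenode.snapshot.deletion.ordered (check DFSConfigKeys in your
tree), and the stack trace above suggests the out-of-order delete of s2 ends
up in the edit log twice, so replay with the feature disabled cannot find the
snapshot the second time.

{code:java}
// Hedged repro sketch; the config key, paths and restart mechanics are
// assumptions for illustration only.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class OrderedSnapshotDeletionRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setBoolean("dfs.namenode.snapshot.deletion.ordered", true);
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
    try {
      cluster.waitActive();
      DistributedFileSystem dfs = cluster.getFileSystem();
      Path dir = new Path("/user/hrt_6/atrr_dir1");
      dfs.mkdirs(dir);
      dfs.allowSnapshot(dir);
      for (String s : new String[] {"s0", "s1", "s2"}) {
        dfs.createSnapshot(dir, s);
      }
      // Out-of-order delete: with ordered deletion enabled, s2 is only
      // marked as deleted; it is actually removed once it becomes oldest.
      dfs.deleteSnapshot(dir, "s2");
      dfs.deleteSnapshot(dir, "s0");
      dfs.deleteSnapshot(dir, "s1");
      dfs.deleteSnapshot(dir, "s2");

      // Disable the feature and restart: replaying the edit log should now
      // fail with the SnapshotException shown in the stack trace above.
      cluster.getConfiguration(0)
          .setBoolean("dfs.namenode.snapshot.deletion.ordered", false);
      cluster.restartNameNode();
    } finally {
      cluster.shutdown();
    }
  }
}
{code}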



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15595) TestSnapshotCommands.testMaxSnapshotLimit fails in trunk

2020-09-24 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee resolved HDFS-15595.

Fix Version/s: 3.4.0
   Resolution: Fixed

> TestSnapshotCommands.testMaxSnapshotLimit fails in trunk
> 
>
> Key: HDFS-15595
> URL: https://issues.apache.org/jira/browse/HDFS-15595
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, snapshots, test
>Reporter: Mingliang Liu
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.4.0
>
>
> See [this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/] for a sample error.
> Sample error stack:
> {quote}
> Error Message
> The real output is: createSnapshot: Failed to create snapshot: there are already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) and the max snapshot limit is 3
> Stacktrace
> java.lang.AssertionError: 
> The real output is: createSnapshot: Failed to create snapshot: there are already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) and the max snapshot limit is 3
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934)
>   at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942)
>   at org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {quote}
> I can also reproduce this locally.
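
For reference, the test drives FsShell through DFSTestUtil and asserts that
the command output contains an exact substring, which is why a change in the
NameNode's message wording breaks it. Below is a hedged sketch of that
pattern; the snapshot-limit key, paths and expected messages are illustrative
assumptions, not the test source.

{code:java}
// Hedged sketch of the output-substring assertion style used by
// TestSnapshotCommands; the key name and messages are assumptions.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSTestUtil;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class MaxSnapshotLimitSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setInt("dfs.namenode.snapshot.max.limit", 3); // assumed key
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
    try {
      cluster.waitActive();
      DistributedFileSystem dfs = cluster.getFileSystem();
      Path dir = new Path("/sub1");
      dfs.mkdirs(dir);
      dfs.allowSnapshot(dir);
      for (int i = 0; i < 3; i++) {
        // Runs `-createSnapshot` via FsShell, asserting return code 0 and
        // that the shell output contains the given substring.
        DFSTestUtil.FsShellRun("-createSnapshot /sub1 sn" + i, 0,
            "Created snapshot", conf);
      }
      // The 4th snapshot must be rejected. Coupling the assertion to the
      // server's exact wording is what made this test brittle.
      DFSTestUtil.FsShellRun("-createSnapshot /sub1 sn3", 1,
          "snapshot limit", conf);
    } finally {
      cluster.shutdown();
    }
  }
}
{code}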






Re: [ANNOUNCE] Hui Fei is a new Apache Hadoop Committer

2020-09-24 Thread Surendra Singh Lilhore
Congratulations!


-Surendra

On Thu, 24 Sep, 2020, 12:22 pm Xiaoqiao He,  wrote:

> Congrats!
>
> Best Regards,
> He Xiaoqiao
>
> On Thu, Sep 24, 2020 at 10:03 AM Sammi Chen  wrote:
>
> > Congratulations to Hui !
> >
> > On Thu, Sep 24, 2020 at 2:07 AM Wei-Chiu Chuang wrote:
> >
> > > I am pleased to announce that Hui Fei has accepted the invitation to
> > > become a Hadoop committer.
> > >
> > > He started contributing to the project in October 2016. Over the past 4
> > > years he has contributed a lot in HDFS, especially in Erasure Coding,
> > > Hadoop 3 upgrade, RBF and Standby Serving reads.
> > >
> > > One of the biggest contributions is Hadoop 2->3 rolling upgrade support.
> > > This was a major blocker for any existing Hadoop users to adopt Hadoop 3.
> > > The adoption of Hadoop 3 has gone up after this. In the past the
> > > community discussed a lot about Hadoop 3 rolling upgrade being a
> > > must-have, but no one took the initiative to make it happen. I am
> > > personally very grateful for this.
> > >
> > > The work on EC is impressive as well. He managed to onboard EC in
> > > production at scale, fixing tricky problems. Again, I am impressed and
> > > grateful for the contribution in EC.
> > >
> > > In addition to code contributions, he invested a lot in the community:
> > >
> > > >    - Apache Hadoop Community 2019 Beijing Meetup
> > > >      https://blogs.apache.org/hadoop/entry/hadoop-community-meetup-beijing-aug
> > > >      where he discussed the operational experience of RBF in production
> > > >
> > > >    - Apache Hadoop Storage Community Sync Online
> > > >      https://docs.google.com/document/d/1jXM5Ujvf-zhcyw_5kiQVx6g-HeKe-YGnFS_1-qFXomI/edit#heading=h.irqxw1iy16zo
> > > >      where he discussed the Hadoop 3 rolling upgrade support
> > >
> > > Let's congratulate Hui for this new role!
> > >
> > > Cheers,
> > > Wei-Chiu Chuang (on behalf of the Apache Hadoop PMC)


Re: [ANNOUNCE] Hui Fei is a new Apache Hadoop Committer

2020-09-24 Thread Adam Antal
Congratulations!

Regards,
Adam

On Thu, Sep 24, 2020 at 3:22 PM Surendra Singh Lilhore <
surendralilh...@apache.org> wrote:

> Congratulations!
>
>
> -Surendra
>


Re: [ANNOUNCE] Lisheng Sun is a new Apache Hadoop Committer

2020-09-24 Thread Adam Antal
Congratulations!

Regards,
Adam

On Thu, Sep 24, 2020 at 1:32 PM Rudolf Reti 
wrote:

> Congrats! :)
>
> On Thu, Sep 24, 2020 at 9:48 AM runlin zhang  wrote:
>
> > Congratulations  Lisheng !
> >
> > > On Sep 24, 2020, at 2:00 AM, Wei-Chiu Chuang wrote:
> > >
> > > I am pleased to announce that Lisheng Sun has accepted the invitation
> > > to become a Hadoop committer.
> > >
> > > Lisheng has actively contributed to the project since July 2019. He
> > > contributed two new features: a dead datanode detector (HDFS-13571)
> > > and a new du implementation (HDFS-14313), plus lots of improvements,
> > > including short circuit read optimizations (HDFS-15161) and speeding
> > > up NN fsimage loading time (HDFS-13694 and HDFS-13693). Code wise, he
> > > resolved 57 Hadoop jiras.
> > >
> > > Let's congratulate Lisheng for this new role!
> > >
> > > Cheers,
> > > Wei-Chiu Chuang (on behalf of the Apache Hadoop PMC)
> >
>
> --
> Rudolf Reti
> Engineering Manager, RM Team
> 
>


Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2020-09-24 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/66/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
 
   
hadoop.yarn.server.nodemanager.containermanager.TestContainerManagerRecovery 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   
hadoop.yarn.server.resourcemanager.rmapp.attempt.TestRMAppAttemptTransitions 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.mapred.TestMRTimelineEventHandling 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
  

   jshint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/66/artifact/out/diff-patch-jshint.txt
  [208K]

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/66/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/66/artifact/out/diff-compile-javac-root.txt
  [456K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/66/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/66/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/66/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/66/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/66/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/66/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/66/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/66/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/66/artifact/out/xml.txt
  [4.0K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/66/artifact/out/diff-javadoc-javadoc-root.txt
  [20K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/66/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [216K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/66/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [276K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/66/artifact/out/patch-unit-hadoop-hdfs-project_hadoo

[jira] [Resolved] (HDFS-15596) ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, progress, checksumOpt) should not be restricted to DFS only.

2020-09-24 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDFS-15596.

   Fix Version/s: 3.4.0
Hadoop Flags: Reviewed
Target Version/s: 3.3.1
  Resolution: Fixed

Thanks [~ayushtkn] for the review! Committed.

> ViewHDFS#create(f, permission, cflags, bufferSize, replication, blockSize, 
> progress, checksumOpt) should not be restricted to DFS only.
> ---
>
> Key: HDFS-15596
> URL: https://issues.apache.org/jira/browse/HDFS-15596
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The ViewHDFS#create(f, permission, cflags, bufferSize, replication,
> blockSize, progress, checksumOpt) API is already available in FileSystem. It
> delegates to another overloaded API and can ultimately resolve to
> ViewFileSystem; this case works in regular ViewFileSystem as well. With
> ViewHDFS we restricted it to DFS only, which causes distcp to fail when the
> target is non-HDFS, since distcp uses this API.
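
For illustration, here is the overload in question as a client such as distcp
reaches it through the generic FileSystem API: on ViewFileSystem the call
resolves through the mount table to the right child file system, while the
pre-fix ViewHDFS rejected it whenever the resolved target was not HDFS. The
path and parameter values below are illustrative.

{code:java}
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Options.ChecksumOpt;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.util.Progressable;

public class CreateOverloadExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // A mount point whose target may be a non-HDFS file system.
    Path f = new Path("/mnt/non-hdfs-target/file");
    FSDataOutputStream out = fs.create(f, FsPermission.getFileDefault(),
        EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE),
        4096,                // bufferSize
        (short) 3,           // replication
        128L * 1024 * 1024,  // blockSize
        (Progressable) null, // progress
        (ChecksumOpt) null); // checksumOpt: fall back to the FS default
    out.close();
  }
}
{code}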






Re: [ANNOUNCE] Hui Fei is a new Apache Hadoop Committer

2020-09-24 Thread epa...@apache.org
Congratulations Hui Fei!

On Wednesday, September 23, 2020, 1:07:11 PM CDT, Wei-Chiu Chuang wrote:
I am pleased to announce that Hui Fei has accepted the invitation to become
a Hadoop committer.



Re: [ANNOUNCE] Lisheng Sun is a new Apache Hadoop Committer

2020-09-24 Thread Eric Payne
Congratulations, Lisheng Sun!






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-09-24 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/275/

[Sep 23, 2020 5:50:28 AM] (noreply) YARN-6754. Fair scheduler docs should 
explain meaning of weight=0 for a queue. (#2300)
[Sep 23, 2020 3:42:56 PM] (Adam Antal) YARN-10443. Document options of logs 
CLI. Contributed by Ankit Kumar.
[Sep 23, 2020 3:59:00 PM] (tmarq) HADOOP-17279: ABFS: 
testNegativeScenariosForCreateOverwriteDisabled fails for non-HNS account.




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.metrics2.lib.TestMutableMetrics 
   hadoop.metrics2.source.TestJvmMetrics 
   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.hdfs.TestFileChecksum 
   hadoop.hdfs.TestFileChecksumCompositeCrc 
   hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithInProgressTailing 
   hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier 
   hadoop.hdfs.server.namenode.ha.TestHAAppend 
   hadoop.hdfs.TestSnapshotCommands 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapreduce.v2.hs.TestJobHistoryParsing 
   hadoop.mapred.TestMRTimelineEventHandling 
   hadoop.mapred.gridmix.TestGridMixClasses 
   hadoop.yarn.sls.TestSLSRunner 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/275/artifact/out/diff-compile-cc-root.txt
  [48K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/275/artifact/out/diff-compile-javac-root.txt
  [568K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/275/artifact/out/diff-checkstyle-root.txt
  [16M]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/275/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/275/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/275/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/275/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/275/artifact/out/whitespace-eol.txt
  [13M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/275/artifact/out/whitespace-tabs.txt
  [1.9M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/275/artifact/out/xml.txt
  [24K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/275/artifact/out/diff-javadoc-javadoc-root.txt
  [1.3M]

   findbugs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/275/artifact/out/branch-findbugs-root.txt
  [404K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/275/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-server_hadoop-yarn-server-timelineservice-hbase-server-2.txt
  [0]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/275/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [232K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/275/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/275/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [544K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/275

[jira] [Created] (HDFS-15598) ViewHDFS#canonicalizeUri should not be restricted to DFS only API.

2020-09-24 Thread Uma Maheswara Rao G (Jira)
Uma Maheswara Rao G created HDFS-15598:
--

 Summary: ViewHDFS#canonicalizeUri should not be restricted to DFS 
only API.
 Key: HDFS-15598
 URL: https://issues.apache.org/jira/browse/HDFS-15598
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 3.4.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G


As part of Hive partition verification, an insert failed because
canonicalizeUri is restricted to DFS only. This can be relaxed by delegating
to vfs#canonicalizeUri.






[jira] [Created] (HDFS-15599) RBF: Add API to expose resolved destinations (namespace) in Router

2020-09-24 Thread Fengnan Li (Jira)
Fengnan Li created HDFS-15599:
-

 Summary: RBF: Add API to expose resolved destinations (namespace) 
in Router
 Key: HDFS-15599
 URL: https://issues.apache.org/jira/browse/HDFS-15599
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Fengnan Li
Assignee: Fengnan Li


We quite often see requests asking where a path in the Router actually
points. Two main use cases are:

1) Calculating the HDFS capacity usage allocation of all Hive tables that
have been onboarded to the Router.

2) Preventing failures in cross-cluster renames: first check the source HDFS
location and the destination HDFS location, then issue a distcp command if
possible, to avoid the exception.

Inside the Router, the function getLocationsForPath does the work, but it is
internal only and not visible to clients.

RouterAdmin has getMountTableEntries, but that returns the raw mount table
entries without any resolution.

We are proposing to add such an API, and there are two ways:

1) Adding this API in RouterRpcServer, which requires a change in
ClientNameNodeProtocol to include this new API.

2) Adding this API in RouterAdminServer, which requires a new protocol
between the client and the admin server.

There is an existing resolvePath in FileSystem which can be used to
implement this call from the client side, as sketched below.
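
A hedged sketch of that client-side approach follows; the router URI and the
federated path are illustrative.

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ResolvePathExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(new URI("hdfs://router-fed"), conf);
    // Ask the client API where this federated path actually points.
    Path resolved = fs.resolvePath(new Path("/data/hive/warehouse/tbl"));
    System.out.println(resolved); // e.g. hdfs://ns1/real/location/tbl
  }
}
{code}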






Re: [ANNOUNCE] Lisheng Sun is a new Apache Hadoop Committer

2020-09-24 Thread Xun Liu
Lisheng,
Congratulations!

On Thu, Sep 24, 2020 at 9:59 PM Adam Antal 
wrote:

> Congratulations!
>
> Regards,
> Adam
>
> On Thu, Sep 24, 2020 at 1:32 PM Rudolf Reti 
> wrote:
>
> > Congrats! :)
> >
> > On Thu, Sep 24, 2020 at 9:48 AM runlin zhang 
> wrote:
> >
> > > Congratulations  Lisheng !


[jira] [Resolved] (HDFS-14813) RBF: Make Global quota and Remote quota consistent.

2020-09-24 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun resolved HDFS-14813.

Resolution: Fixed

Resolve this issue.

> RBF: Make Global quota and Remote quota consistent.
> ---
>
> Key: HDFS-14813
> URL: https://issues.apache.org/jira/browse/HDFS-14813
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
>
> Make the global quota and the remote quota consistent.
> (Global quota: the quota on the mount table; remote quota: the quota on the
> namespace.)
> HDFS administrators can use the global quota to simplify quota management
> for federation paths, but there is no consistency constraint between the
> global quota and the remote quota. For an HDFS administrator the
> inconsistency has 3 disadvantages:
>      1. The quota part of getQuotaUsage() on a federation path is not
> helpful. It's neither the global quota nor one of the remote quotas.
>      2. The global quota can differ from the remote quota. When a
> QuotaExceedException happens, the administrator has to find out whether it's
> a violation of the global quota or the remote quota.
>      3. For management simplicity, it's always a good idea to keep the global
> quota and the remote quota the same. Currently the administrator has to keep
> them consistent manually.
>  My proposal is to add a constraint for the global quota:
>      1. For federation paths, the global quota can be inherited from the
> parent federation path.
>      2. For all remote paths in mount tables, the remote quotas must be
> consistent with the global quotas.
>  To implement this, my idea is:
>      1. Global quota can be inherited. Add a method getGlobalQuota(String
> path) to Quota.java returning the global quota (see the sketch after the
> issue body).
>      2. Each time RouterQuotaUpdateService updates the quota usage for mount
> table entries, it also checks and updates the remote quota.
>      3. When getQuotaUsage() is called on a federation path, return the
> global quota.
>      4. When setQuota() is called on a federation path, first update the
> global quota in the mount table, then recompute the global quota for the
> current path and its children paths, and finally update all the federation
> paths.
>
> Implement 1+2 in HDFS-14814
> Implement 4 in HDFS-14815
> Implement 3 in HDFS-14955
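
A minimal, self-contained sketch of the inheritance rule in implementation
step 1; all names here are illustrative stand-ins, not the Router's actual
Quota.java.

{code:java}
import java.util.HashMap;
import java.util.Map;

/** Sketch: a federation path inherits the nearest ancestor's global quota. */
public class GlobalQuotaSketch {
  static final long QUOTA_RESET = -1; // mirrors HdfsConstants.QUOTA_RESET
  final Map<String, Long> mountQuota = new HashMap<>();

  long getGlobalQuota(String path) {
    // Walk up the mount table: a path without its own quota inherits the
    // nearest explicitly-set ancestor quota.
    for (String p = path; p != null; p = parent(p)) {
      Long q = mountQuota.get(p);
      if (q != null && q != QUOTA_RESET) {
        return q;
      }
    }
    return QUOTA_RESET; // no quota set anywhere on the chain
  }

  static String parent(String p) {
    if (p.equals("/")) {
      return null;
    }
    int i = p.lastIndexOf('/');
    return i <= 0 ? "/" : p.substring(0, i);
  }

  public static void main(String[] args) {
    GlobalQuotaSketch q = new GlobalQuotaSketch();
    q.mountQuota.put("/user", 100L);
    System.out.println(q.getGlobalQuota("/user/a/b")); // 100: inherited
    System.out.println(q.getGlobalQuota("/tmp"));      // -1: none set
  }
}
{code}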






[VOTE] Moving Ozone to a separated Apache project

2020-09-24 Thread Elek, Marton

Hi all,

Thank you for all the feedback and requests,

As we discussed in the previous thread(s) [1], Ozone is proposed to become a
separate Apache Top Level Project (TLP).


The proposal with all the details, motivation and history is here:

https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Hadoop+subproject+to+Apache+TLP+proposal

This vote runs for 7 days and will be concluded on the 2nd of October, 6 AM
GMT.


Thanks,
Marton Elek

[1]: 
https://lists.apache.org/thread.html/rc6c79463330b3e993e24a564c6817aca1d290f186a1206c43ff0436a%40%3Chdfs-dev.hadoop.apache.org%3E

