[jira] [Work logged] (HDFS-15568) namenode start failed to start when dfs.namenode.snapshot.max.limit set

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15568?focusedWorklogId=484946&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484946
 ]

ASF GitHub Bot logged work on HDFS-15568:
-

Author: ASF GitHub Bot
Created on: 16/Sep/20 05:51
Start Date: 16/Sep/20 05:51
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #2296:
URL: https://github.com/apache/hadoop/pull/2296#issuecomment-693187529


   /retest



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 484946)
Time Spent: 3h  (was: 2h 50m)

> namenode start failed to start when dfs.namenode.snapshot.max.limit set
> ---
>
> Key: HDFS-15568
> URL: https://issues.apache.org/jira/browse/HDFS-15568
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> {code:java}
> 11:35:05.872 AM   ERROR   NameNode
> Failed to start namenode.
> org.apache.hadoop.hdfs.protocol.SnapshotException: Failed to add snapshot: 
> there are already 20 snapshot(s) and the max snapshot limit is 20
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.addSnapshot(DirectorySnapshottableFeature.java:181)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.addSnapshot(INodeDirectory.java:285)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.createSnapshot(SnapshotManager.java:447)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:802)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:182)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:912)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:760)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1164)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:755)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:646)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:717)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:960)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:933)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1670)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1737)
> {code}
> Steps to reproduce:
> --
> directory-level snapshot limit set to 100
> Created 100 snapshots
> Deleted all 100 snapshots (in order)
> No snapshots exist
> Then, directory-level snapshot limit set to 20
> HDFS restarted
> NameNode failed to start.
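The failure mode suggested by the stack trace is that edit-log replay re-applies
the original createSnapshot operations against the new, lower limit before the
matching deleteSnapshot operations are replayed. A minimal reproduction sketch
along the lines of the steps above (hypothetical MiniDFSCluster-style test code,
not the reporter's; the config key follows the issue summary):
{code:java}
// Illustrative sketch only.
Configuration conf = new HdfsConfiguration();
conf.setInt("dfs.namenode.snapshot.max.limit", 100);
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
DistributedFileSystem fs = cluster.getFileSystem();

Path dir = new Path("/snapDir");
fs.mkdirs(dir);
fs.allowSnapshot(dir);
for (int i = 0; i < 100; i++) {
  fs.createSnapshot(dir, "s" + i);   // 100 creates recorded in the edit log
}
for (int i = 0; i < 100; i++) {
  fs.deleteSnapshot(dir, "s" + i);   // deleted in order, none remain
}

// Lower the limit and restart: replaying the edit log applies the creates
// before the deletes, which trips the new limit and aborts NameNode startup.
conf.setInt("dfs.namenode.snapshot.max.limit", 20);
cluster.restartNameNode(true);
{code}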



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15568) namenode start failed to start when dfs.namenode.snapshot.max.limit set

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15568?focusedWorklogId=484944&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484944
 ]

ASF GitHub Bot logged work on HDFS-15568:
-

Author: ASF GitHub Bot
Created on: 16/Sep/20 05:50
Start Date: 16/Sep/20 05:50
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #2296:
URL: https://github.com/apache/hadoop/pull/2296#issuecomment-693187393


   retest



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 484944)
Time Spent: 2h 50m  (was: 2h 40m)

> namenode start failed to start when dfs.namenode.snapshot.max.limit set
> ---
>
> Key: HDFS-15568
> URL: https://issues.apache.org/jira/browse/HDFS-15568
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> {code:java}
> 11:35:05.872 AM   ERROR   NameNode
> Failed to start namenode.
> org.apache.hadoop.hdfs.protocol.SnapshotException: Failed to add snapshot: 
> there are already 20 snapshot(s) and the max snapshot limit is 20
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature.addSnapshot(DirectorySnapshottableFeature.java:181)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.addSnapshot(INodeDirectory.java:285)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.createSnapshot(SnapshotManager.java:447)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:802)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:182)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:912)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:760)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1164)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:755)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:646)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:717)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:960)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:933)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1670)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1737)
> {code}
> Steps to reproduce:
> --
> directory-level snapshot limit set to 100
> Created 100 snapshots
> Deleted all 100 snapshots (in order)
> No snapshots exist
> Then, directory-level snapshot limit set to 20
> HDFS restarted
> NameNode failed to start.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15578) Fix the rename issues with fallback fs enabled

2020-09-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196662#comment-17196662
 ] 

Hadoop QA commented on HDFS-15578:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
4s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
54s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
46s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
22m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
59s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
17s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
10s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
42s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
49s{color} | {color:green} root: The patch generated 0 new + 192 unchanged - 8 
fixed = 192 total (was 200) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
44s{color} | {color:green} the patch passe

[jira] [Work logged] (HDFS-15578) Fix the rename issues with fallback fs enabled

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15578?focusedWorklogId=484910&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484910
 ]

ASF GitHub Bot logged work on HDFS-15578:
-

Author: ASF GitHub Bot
Created on: 16/Sep/20 04:09
Start Date: 16/Sep/20 04:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2305:
URL: https://github.com/apache/hadoop/pull/2305#issuecomment-693157702


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  4s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 37s |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 54s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m 46s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m 27s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 48s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 43s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 59s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 17s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 26s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 59s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 10s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  20m 10s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 42s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 42s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 49s |  root: The patch generated 0 
new + 192 unchanged - 8 fixed = 192 total (was 200)  |
   | +1 :green_heart: |  mvnsite  |   2m 44s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 49s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m  1s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m 44s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 52s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  | 108m  2s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 299m 33s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2305/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2305 |
   | JIRA Issue | HDFS-15578 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux cdc9bca802c7 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5c5b2ed7c7f |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu11

[jira] [Work logged] (HDFS-15578) Fix the rename issues with fallback fs enabled

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15578?focusedWorklogId=484898&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484898
 ]

ASF GitHub Bot logged work on HDFS-15578:
-

Author: ASF GitHub Bot
Created on: 16/Sep/20 03:20
Start Date: 16/Sep/20 03:20
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on a change in pull request #2305:
URL: https://github.com/apache/hadoop/pull/2305#discussion_r489137266



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestViewDistributedFileSystemWithMountLinks.java
##
@@ -61,4 +64,55 @@ public void testCreateOnRoot() throws Exception {
   public void testMountLinkWithNonExistentLink() throws Exception {
     testMountLinkWithNonExistentLink(false);
   }
+
+  @Test
+  public void testRenameOnInternalDirWithFallback() throws Exception {
+    Configuration conf = getConf();
+    URI defaultFSURI =
+        URI.create(conf.get(CommonConfigurationKeys.FS_DEFAULT_NAME_KEY));
+    final Path hdfsTargetPath1 = new Path(defaultFSURI + "/HDFSUser");
+    final Path hdfsTargetPath2 = new Path(defaultFSURI + "/NewHDFSUser/next");
+    ViewFsTestSetup.addMountLinksToConf(defaultFSURI.getAuthority(),
+        new String[] {"/HDFSUser", "/NewHDFSUser/next"},
+        new String[] {hdfsTargetPath1.toUri().toString(),
+            hdfsTargetPath2.toUri().toString()}, conf);
+    // Making sure parent dir structure as mount points available in fallback.
+    try (DistributedFileSystem dfs = new DistributedFileSystem()) {
+      dfs.initialize(defaultFSURI, conf);
+      dfs.mkdirs(hdfsTargetPath1);
+      dfs.mkdirs(hdfsTargetPath2);
+    }
+
+    try (FileSystem fs = FileSystem.get(conf)) {
+      Path src = new Path("/newFileOnRoot");
+      Path dst = new Path("/newFileOnRoot1");
+      fs.create(src).close();
+      verifyRename(fs, src, dst);
+
+      src = new Path("/newFileOnRoot1");
+      dst = new Path("/NewHDFSUser/newFileOnRoot");
+      fs.mkdirs(dst.getParent());
+      verifyRename(fs, src, dst);
+
+      src = new Path("/NewHDFSUser/newFileOnRoot");
+      dst = new Path("/NewHDFSUser/newFileOnRoot1");
+      verifyRename(fs, src, dst);
+
+      src = new Path("/NewHDFSUser/newFileOnRoot1");
+      dst = new Path("/newFileOnRoot");
+      verifyRename(fs, src, dst);
+
+      src = new Path("/HDFSUser/newFileOnRoot1");
+      dst = new Path("/HDFSUser/newFileOnRoot");
+      fs.create(src).close();
+      verifyRename(fs, src, dst);
+    }
+  }
+
+  private void verifyRename(FileSystem fs, Path src, Path dst)
+      throws IOException {
+    fs.rename(src, dst);
+    Assert.assertFalse(fs.exists(src));
+    Assert.assertTrue(fs.exists(dst));
+  }

Review comment:
   I tried a couple of rename cases. Can you please check them once?
   ```java
   @Test
   public void testRenameOnInternalDirWithFallback() throws Exception {
     Configuration conf = getConf();
     URI defaultFSURI =
         URI.create(conf.get(CommonConfigurationKeys.FS_DEFAULT_NAME_KEY));
     final Path hdfsTargetPath1 = new Path(defaultFSURI + "/HDFSUser");
     final Path hdfsTargetPath2 = new Path(defaultFSURI + "/dstNewHDFSUser"
         + "/next");
     ViewFsTestSetup.addMountLinksToConf(defaultFSURI.getAuthority(),
         new String[] {"/HDFSUser", "/NewHDFSUser/next/next1"},
         new String[] {hdfsTargetPath1.toUri().toString(),
             hdfsTargetPath2.toUri().toString()}, conf);
     // Making sure parent dir structure as mount points available in fallback.
     try (DistributedFileSystem dfs = new DistributedFileSystem()) {
       dfs.initialize(defaultFSURI, conf);
       dfs.mkdirs(hdfsTargetPath1);
       dfs.mkdirs(hdfsTargetPath2);
     }

     try (FileSystem fs = FileSystem.get(conf)) {
       // Case : 1
       Path src = new Path("/newFileOnRoot");
       Path dst = new Path("/NewHDFSUser/next");
       fs.create(src).close();
       verifyRename(fs, src, dst); // Fails. Shouldn't it move to
                                   // /NewHDFSUser/next/newFileOnRoot ?

       src = new Path("/newFileOnRoot");
       dst = new Path("/NewHDFSUser/next/file");
       verifyRename(fs, src, dst); // Fails. Guess since the parent structure
                                   // isn't there at fallback?
   ```
   
   I couldn't check more, but:
   CASE 1: if the destination is a directory, shouldn't it move src inside it?
   CASE 2: this one seems to fail because the parent structure isn't there in the fallback?
   
   I didn't try with ViewFs; maybe something similar applies there as well.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking

[jira] [Work logged] (HDFS-15237) Get checksum of EC file failed, when some block is missing or corrupt

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15237?focusedWorklogId=484897&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484897
 ]

ASF GitHub Bot logged work on HDFS-15237:
-

Author: ASF GitHub Bot
Created on: 16/Sep/20 03:19
Start Date: 16/Sep/20 03:19
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2306:
URL: https://github.com/apache/hadoop/pull/2306#issuecomment-693145756


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   4m  3s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  28m 52s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 11s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 28s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 28s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 26s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 19s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m 19s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m 12s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 43s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 17s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | -1 :x: |  shadedclient  |   3m  9s |  patch has errors when building and 
testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   0m 23s |  hadoop-hdfs in the patch failed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  findbugs  |   0m 23s |  hadoop-hdfs in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 23s |  hadoop-hdfs in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 23s |  ASF License check generated no 
output?  |
   |  |   |  75m 59s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2306/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2306 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 90b8299ad956 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5c5b2ed7c7f |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | javadoc | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2306/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt
 |
   | findbugs | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2306/2/artifact/out/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2306/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2306/2/testReport/ |
   | Ma

[jira] [Work logged] (HDFS-15237) Get checksum of EC file failed, when some block is missing or corrupt

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15237?focusedWorklogId=484881&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484881
 ]

ASF GitHub Bot logged work on HDFS-15237:
-

Author: ASF GitHub Bot
Created on: 16/Sep/20 02:34
Start Date: 16/Sep/20 02:34
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on pull request #2306:
URL: https://github.com/apache/hadoop/pull/2306#issuecomment-693133341


   Looks really good. Thanks for the patch! If Zhao Yi can come back with the 
test result, that would be great, but I am happy to merge the change in the next 
few days if there is no update.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 484881)
Time Spent: 0.5h  (was: 20m)

> Get checksum of EC file failed, when some block is missing or corrupt
> -
>
> Key: HDFS-15237
> URL: https://issues.apache.org/jira/browse/HDFS-15237
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, hdfs
>Affects Versions: 3.2.1
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.2
>
> Attachments: HDFS-15237.001.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When we ran distcp from one EC directory to another, we found errors like this.
> {code}
> 2020-03-20 20:18:21,366 WARN [main] 
> org.apache.hadoop.hdfs.FileChecksumHelper: src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]2020-03-20
>  20:18:21,366 WARN [main] org.apache.hadoop.hdfs.FileChecksumHelper: 
> src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]java.io.EOFException:
>  Unexpected EOF while trying to read response from server at 
> org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:550)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.tryDatanode(FileChecksumHelper.java:709)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlockGroup(FileChecksumHelper.java:664)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlocks(FileChecksumHelper.java:638)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$FileChecksumComputer.compute(FileChecksumHelper.java:252)
>  at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumInternal(DFSClient.java:1790) 
> at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumWithCombineMode(DFSClient.java:1810)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1691)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1688)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1700)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:138)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:115)
>  at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>  at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:259)
>  at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:220) at 
> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:48) at 
> org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at 
> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) at 
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> {code}
> And then I found errors in the datanode log like this
> {code}
> 2020-03-20 20:54:16,573 INFO 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: 
> SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = 
> false
> 2020-03-20 20:54:16,577 ERROR 
> org.apache.hadoop.hdfs.se

[jira] [Work logged] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15516?focusedWorklogId=484874&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484874
 ]

ASF GitHub Bot logged work on HDFS-15516:
-

Author: ASF GitHub Bot
Created on: 16/Sep/20 02:15
Start Date: 16/Sep/20 02:15
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on a change in pull request #2281:
URL: https://github.com/apache/hadoop/pull/2281#discussion_r489120534



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
##
@@ -2639,10 +2639,11 @@ HdfsFileStatus startFile(String src, PermissionStatus permissions,
           createParent, replication, blockSize, supportedVersions, ecPolicyName,
           storagePolicy, logRetryCache);
     } catch (AccessControlException e) {
-      logAuditEvent(false, "create", src);
+      logAuditEvent(false, "create(options=" + flag.toString() + ")", src);

Review comment:
   Thanks for your suggestion.
   I will modify it.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 484874)
Time Spent: 1h 20m  (was: 1h 10m)

> Add info for create flags in NameNode audit logs
> 
>
> Key: HDFS-15516
> URL: https://issues.apache.org/jira/browse/HDFS-15516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Shashikant Banerjee
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15516.001.patch, HDFS-15516.002.patch, 
> HDFS-15516.003.patch, HDFS-15516.004.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently, if a file create happens with flags like overwrite, the audit logs 
> don't seem to contain any info regarding those flags. It would be useful to 
> add info regarding the create options in the audit logs, similar to the 
> Rename ops.
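As a rough illustration (hypothetical format, along the lines of the patch
snippet discussed above, not a final decision on the log layout), the create
audit entry could carry the CreateFlag set next to the command name:
{code:java}
// Sketch only: "flag" is the EnumSet<CreateFlag> passed into startFile().
EnumSet<CreateFlag> flag = EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE);
logAuditEvent(false, "create(options=" + flag + ")", src);
// The resulting command field would look like: create(options=[CREATE, OVERWRITE])
{code}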



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15516?focusedWorklogId=484875&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484875
 ]

ASF GitHub Bot logged work on HDFS-15516:
-

Author: ASF GitHub Bot
Created on: 16/Sep/20 02:15
Start Date: 16/Sep/20 02:15
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on a change in pull request #2281:
URL: https://github.com/apache/hadoop/pull/2281#discussion_r489120569



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogger.java
##
@@ -231,6 +232,30 @@ public void testAuditLoggerWithSetPermission() throws IOException {
     }
   }
 
+  @Test
+  public void testAuditLoggerWithCreateFlag() throws IOException {
+    final Configuration conf = new HdfsConfiguration();
+    final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
+    LogCapturer auditlog = LogCapturer.captureLogs(FSNamesystem.auditLog);
+    try {
+      cluster.waitClusterUp();
+      auditlog.clearOutput();
+      final FileSystem fs = cluster.getFileSystem();
+      final Path p = new Path("/debug.log");
+      FSDataOutputStream stm = fs.create(p);
+      assertTrue(auditlog.getOutput().contains("create(options"));

Review comment:
   Thanks for your suggestion.
   I will modify it.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 484875)
Time Spent: 1.5h  (was: 1h 20m)

> Add info for create flags in NameNode audit logs
> 
>
> Key: HDFS-15516
> URL: https://issues.apache.org/jira/browse/HDFS-15516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Shashikant Banerjee
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15516.001.patch, HDFS-15516.002.patch, 
> HDFS-15516.003.patch, HDFS-15516.004.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Currently, if a file create happens with flags like overwrite, the audit logs 
> don't seem to contain any info regarding those flags. It would be useful to 
> add info regarding the create options in the audit logs, similar to the 
> Rename ops.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15329) Provide FileContext based ViewFSOverloadScheme implementation

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15329?focusedWorklogId=484857&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484857
 ]

ASF GitHub Bot logged work on HDFS-15329:
-

Author: ASF GitHub Bot
Created on: 16/Sep/20 01:50
Start Date: 16/Sep/20 01:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2225:
URL: https://github.com/apache/hadoop/pull/2225#issuecomment-693121106


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 22s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m  8s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m  1s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 55s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  7s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 32s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 16s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 20s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 43s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 32s |  hadoop-common in the patch failed.  |
   | -1 :x: |  mvninstall  |   1m  4s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   1m  4s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javac  |   1m  4s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   0m 56s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  javac  |   0m 56s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | +1 :green_heart: |  checkstyle  |   2m 25s |  root: The patch generated 0 
new + 52 unchanged - 1 fixed = 52 total (was 53)  |
   | -1 :x: |  mvnsite  |   0m 37s |  hadoop-common in the patch failed.  |
   | -1 :x: |  mvnsite  |   1m  7s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | -1 :x: |  shadedclient  |   0m 41s |  patch has errors when building and 
testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 46s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   0m 32s |  hadoop-common in the patch failed.  |
   | -1 :x: |  findbugs  |   1m  6s |  hadoop-hdfs in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 33s |  hadoop-common in the patch failed.  |
   | -1 :x: |  unit  |   1m  6s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 26s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 121m 31s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 611b243695f6 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5c5b2ed7c7f |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64

[jira] [Commented] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding

2020-09-15 Thread Janus Chow (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196614#comment-17196614
 ] 

Janus Chow commented on HDFS-15579:
---

There is already some documentation on this constructor.
{code:java}
/**
 * Create a path location from another path with the destinations sorted.
 *
 * @param other Other path location to copy from.
 * @param firstNsId Identifier of the namespace to place first.
 */
{code}
The problem for me is that when I was reading the code in 
MultipleDestinationMountTableResolver that calls this constructor, my first 
impression was that it creates a PathLocation with a specified nsId; reading on 
from there, I would not have gone into the class to check the details of this 
constructor.

So I wonder if the name could be more specific, for example a function named 
"sortDestination" that returns a new PathLocation instead of a constructor. Just 
wondering.
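One possible rendering of that idea, purely as a naming sketch (not the actual
RBF API), is a static factory that delegates to the existing constructor so
call sites state the intent explicitly:
{code:java}
// Hypothetical sketch only; reuses the PathLocation(PathLocation, String)
// constructor quoted in the description below.
public static PathLocation sortDestinations(PathLocation other,
    String firstNsId) {
  return new PathLocation(other, firstNsId);
}
{code}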

> RBF: The constructor of PathLocation may got some misunderstanding
> --
>
> Key: HDFS-15579
> URL: https://issues.apache.org/jira/browse/HDFS-15579
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Janus Chow
>Priority: Minor
>
> There is a constructor of PathLocation as follows; it creates a new 
> PathLocation with a prioritised nsId.
>  
> {code:java}
> public PathLocation(PathLocation other, String firstNsId) {
>   this.sourcePath = other.sourcePath;
>   this.destOrder = other.destOrder;
>   this.destinations = orderedNamespaces(other.destinations, firstNsId);
> }
> {code}
> When I was reading the code of MultipleDestinationMountTableResolver, I 
> thought this constructor was meant to create a PathLocation with an overridden 
> destination. It took me a while before I realized that this constructor sorts 
> the destinations inside.
> Maybe this constructor can be made clearer about its usage?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15329) Provide FileContext based ViewFSOverloadScheme implementation

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15329?focusedWorklogId=484821&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484821
 ]

ASF GitHub Bot logged work on HDFS-15329:
-

Author: ASF GitHub Bot
Created on: 15/Sep/20 23:47
Start Date: 15/Sep/20 23:47
Worklog Time Spent: 10m 
  Work Description: abhishekdas99 commented on pull request #2225:
URL: https://github.com/apache/hadoop/pull/2225#issuecomment-693078970


   > The latest patch looks good to me. There is one findbugs warning that popped up. Could you please take a look at it? Otherwise I am +1.
   
   The findbugs issue is not part of the new change. I have fixed it.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 484821)
Time Spent: 1h 40m  (was: 1.5h)

> Provide FileContext based ViewFSOverloadScheme implementation
> -
>
> Key: HDFS-15329
> URL: https://issues.apache.org/jira/browse/HDFS-15329
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, hdfs, viewfs, viewfsOverloadScheme
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Abhishek Das
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> This Jira tracks the FileContext-based ViewFSOverloadScheme implementation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15289) Allow viewfs mounts with HDFS/HCFS scheme and centralized mount table

2020-09-15 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-15289:
---
Target Version/s: 3.3.1, 3.4.0, 3.1.5, 3.2.3  (was: 3.2.2, 3.3.1, 3.4.0, 
3.1.5)

> Allow viewfs mounts with HDFS/HCFS scheme and centralized mount table
> -
>
> Key: HDFS-15289
> URL: https://issues.apache.org/jira/browse/HDFS-15289
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Attachments: ViewFSOverloadScheme - V1.0.pdf, ViewFSOverloadScheme.png
>
>
> ViewFS provides the flexibility to mount different filesystem types through a 
> mount-points configuration table. This approach solves the scalability 
> problems, but users need to reconfigure the filesystem to ViewFS and to its 
> scheme. This is problematic when paths are persisted in meta stores, e.g. 
> Hive. Systems like Hive store URIs in the meta store, so changing the file 
> system scheme creates a burden to upgrade/recreate meta stores. In our 
> experience many users are not ready to make that change.
> Router-based federation is another implementation that provides coordinated 
> mount points for HDFS federation clusters. Even though it makes mount points 
> easy to manage, it does not allow other (non-HDFS) file systems to be 
> mounted. So it does not serve the purpose when users want to mount 
> external (non-HDFS) filesystems.
> So, the problem here is: even though many users want to adopt the scalable fs 
> options available, the technical challenge of changing schemes (e.g. in meta 
> stores) in existing deployments is obstructing them.
> So, we propose to allow the hdfs scheme in a ViewFS-like client-side mount 
> system and let users create mount links without changing URI paths.
> I will upload a detailed design doc shortly.
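For context, a minimal sketch of the kind of client-side configuration this
proposal enables (the property names follow the existing ViewFS mount-table
convention; treat the exact keys, mount-table name, and paths as illustrative
assumptions rather than part of this proposal):
{code:java}
// Illustrative only: keep hdfs:// URIs while resolving paths through a
// centralized mount table by overloading the hdfs scheme with ViewFS.
Configuration conf = new Configuration();
conf.set("fs.hdfs.impl",
    "org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme");
// Mount links for the mount table named after the cluster authority "ns1".
conf.set("fs.viewfs.mounttable.ns1.link./data", "hdfs://ns1/data");
conf.set("fs.viewfs.mounttable.ns1.link./external", "s3a://bucket/external");
// Existing hdfs://ns1/... paths keep their scheme for clients and meta stores.
FileSystem fs = FileSystem.get(URI.create("hdfs://ns1/"), conf);
{code}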



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15289) Allow viewfs mounts with HDFS/HCFS scheme and centralized mount table

2020-09-15 Thread Uma Maheswara Rao G (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196589#comment-17196589
 ] 

Uma Maheswara Rao G commented on HDFS-15289:


[~hexiaoqiao], thanks for the ping. Sure, we can set it to 3.2.3.

> Allow viewfs mounts with HDFS/HCFS scheme and centralized mount table
> -
>
> Key: HDFS-15289
> URL: https://issues.apache.org/jira/browse/HDFS-15289
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Attachments: ViewFSOverloadScheme - V1.0.pdf, ViewFSOverloadScheme.png
>
>
> ViewFS provides the flexibility to mount different filesystem types through a 
> mount-points configuration table. This approach solves the scalability 
> problems, but users need to reconfigure the filesystem to ViewFS and to its 
> scheme. This is problematic when paths are persisted in meta stores, e.g. 
> Hive. Systems like Hive store URIs in the meta store, so changing the file 
> system scheme creates a burden to upgrade/recreate meta stores. In our 
> experience many users are not ready to make that change.
> Router-based federation is another implementation that provides coordinated 
> mount points for HDFS federation clusters. Even though it makes mount points 
> easy to manage, it does not allow other (non-HDFS) file systems to be 
> mounted. So it does not serve the purpose when users want to mount 
> external (non-HDFS) filesystems.
> So, the problem here is: even though many users want to adopt the scalable fs 
> options available, the technical challenge of changing schemes (e.g. in meta 
> stores) in existing deployments is obstructing them.
> So, we propose to allow the hdfs scheme in a ViewFS-like client-side mount 
> system and let users create mount links without changing URI paths.
> I will upload a detailed design doc shortly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15329) Provide FileContext based ViewFSOverloadScheme implementation

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15329?focusedWorklogId=484816&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484816
 ]

ASF GitHub Bot logged work on HDFS-15329:
-

Author: ASF GitHub Bot
Created on: 15/Sep/20 23:18
Start Date: 15/Sep/20 23:18
Worklog Time Spent: 10m 
  Work Description: umamaheswararao commented on pull request #2225:
URL: https://github.com/apache/hadoop/pull/2225#issuecomment-693028476


   The latest patch looks good to me. There is one findbugs warning that popped up. 
Could you please take a look at it? Otherwise I am +1.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 484816)
Time Spent: 1.5h  (was: 1h 20m)

> Provide FileContext based ViewFSOverloadScheme implementation
> -
>
> Key: HDFS-15329
> URL: https://issues.apache.org/jira/browse/HDFS-15329
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, hdfs, viewfs, viewfsOverloadScheme
>Affects Versions: 3.2.1
>Reporter: Uma Maheswara Rao G
>Assignee: Abhishek Das
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> This Jira tracks the FileContext-based ViewFSOverloadScheme implementation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15578) Fix the rename issues with fallback fs enabled

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15578?focusedWorklogId=484812&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484812
 ]

ASF GitHub Bot logged work on HDFS-15578:
-

Author: ASF GitHub Bot
Created on: 15/Sep/20 22:55
Start Date: 15/Sep/20 22:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2305:
URL: https://github.com/apache/hadoop/pull/2305#issuecomment-693021319


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  32m 54s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 22s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m 56s |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 32s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  19m 41s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m  1s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  3s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 47s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 21s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 42s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m  9s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 16s |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 55s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  22m 55s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 32s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  20m 32s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 56s |  root: The patch generated 5 new 
+ 192 unchanged - 8 fixed = 197 total (was 200)  |
   | +1 :green_heart: |  mvnsite  |   2m 59s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  17m  3s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 27s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   7m  0s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  11m  4s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  | 113m  7s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  5s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 355m 56s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
   |   | hadoop.hdfs.server.mover.TestMover |
   |   | hadoop.hdfs.TestErasureCodingExerciseAPIs |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestReadStripedFileWithDNFailure |
   |   | hadoop.hdfs.TestEncryptionZones |
   |   | hadoop.hdfs.TestDecommission |
   |   | hadoop.hdfs.TestDecommissionWithStriped |
   |   | hadoop.hdfs.server.mover.TestStorageMover |
   |   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
   |   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
   |   | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2305/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://g

[jira] [Commented] (HDFS-15578) Fix the rename issues with fallback fs enabled

2020-09-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196583#comment-17196583
 ] 

Hadoop QA commented on HDFS-15578:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 32m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 
32s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
41s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
22m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
21s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
42s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
55s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
32s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 56s{color} | {color:orange} root: The patch generated 5 new + 192 unchanged 
- 8 fixed = 197 total (was 200) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
27s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m  
0s{color} | {color:green} the patch p

[jira] [Commented] (HDFS-15574) Remove unnecessary sort of block list in DirectoryScanner

2020-09-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196562#comment-17196562
 ] 

Hadoop QA commented on HDFS-15574:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
58s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} branch-3.2 passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
15s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
13s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}133m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}222m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestQuota |
|   | hadoop.hdfs.TestModTime |
|   | hadoop.hdfs.server.namenode.TestReencryption |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.server.namenode.TestINodeAttributeProvider |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.TestRestartDFS |
|   | hadoop.hdfs.TestDFSRollback |
|   | hadoop.hdfs.TestMultipleNNPortQOP |
|   | hadoop.hdfs.server.namenode.TestEditLogRace |
|   | hadoop.hdfs.TestWriteReadStripedFile |
|   | hadoop.hdfs.server.namenode.TestProtectedDirectories |
|   | hadoop.hdfs.TestLease |
|   | hadoop.hdfs.TestSetrepDecreasing |
|   | hadoop.hdfs.TestDFSUpgrade |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.namenode.TestQuotaByStorageType |
|   | hadoop.hdfs.server.namenode.TestDeadDatanode |
|   | hadoop.hdfs.TestDistributedFileSystemWithECFile |
|   | hadoop.hdfs.server.namenode.TestCheckpoi

[jira] [Commented] (HDFS-15569) Speed up the Storage#doRecover during datanode rolling upgrade

2020-09-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196554#comment-17196554
 ] 

Hadoop QA commented on HDFS-15569:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  4m 
13s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 64 unchanged - 3 fixed = 65 total (was 67) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 34s{co

[jira] [Commented] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-09-15 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196515#comment-17196515
 ] 

Íñigo Goiri commented on HDFS-15516:


Regarding [^HDFS-15516.004.patch], if we use + we don't need to call 
{{.toString()}}; concatenation already does that (a small sketch follows below).
Regarding compatibility of the logs, [~ayushtkn], you have gone through the 
compatibility guidelines before; do you remember what the call is here?
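A minimal illustration of the {{+}} point (editor sketch, not part of any patch): string concatenation already invokes toString(), so the explicit call is redundant.

{code:java}
import java.util.EnumSet;
import org.apache.hadoop.fs.CreateFlag;

public class ToStringConcatDemo {
  public static void main(String[] args) {
    EnumSet<CreateFlag> flag = EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE);
    // '+' concatenation calls toString() implicitly, so both strings are identical.
    String explicit = "create(options=" + flag.toString() + ")";
    String implicit = "create(options=" + flag + ")";
    System.out.println(explicit.equals(implicit)); // prints: true
  }
}
{code}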

> Add info for create flags in NameNode audit logs
> 
>
> Key: HDFS-15516
> URL: https://issues.apache.org/jira/browse/HDFS-15516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Shashikant Banerjee
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15516.001.patch, HDFS-15516.002.patch, 
> HDFS-15516.003.patch, HDFS-15516.004.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Currently, if a file create happens with flags like overwrite, the audit logs 
> don't seem to contain the info regarding the flags. It would be useful to add 
> info regarding the create options in the audit logs, similar to rename ops. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15415) Reduce locking in Datanode DirectoryScanner

2020-09-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196477#comment-17196477
 ] 

Hadoop QA commented on HDFS-15415:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
58s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 19 unchanged - 0 fixed = 20 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}115m 46s{co

[jira] [Work logged] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15516?focusedWorklogId=484667&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484667
 ]

ASF GitHub Bot logged work on HDFS-15516:
-

Author: ASF GitHub Bot
Created on: 15/Sep/20 17:29
Start Date: 15/Sep/20 17:29
Worklog Time Spent: 10m 
  Work Description: sunchao commented on a change in pull request #2281:
URL: https://github.com/apache/hadoop/pull/2281#discussion_r488839078



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
##
@@ -2639,10 +2639,11 @@ HdfsFileStatus startFile(String src, PermissionStatus 
permissions,
   createParent, replication, blockSize, supportedVersions, 
ecPolicyName,
   storagePolicy, logRetryCache);
 } catch (AccessControlException e) {
-  logAuditEvent(false, "create", src);
+  logAuditEvent(false, "create(options=" + flag.toString() + ")", src);

Review comment:
   can we leave a space after "create" so it is consistent with rename? 
e.g., "create (options=..."

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogger.java
##
@@ -231,6 +232,30 @@ public void testAuditLoggerWithSetPermission() throws 
IOException {
 }
   }
 
+  @Test
+  public void testAuditLoggerWithCreateFlag() throws IOException {
+final Configuration conf = new HdfsConfiguration();
+final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
+LogCapturer auditlog = LogCapturer.captureLogs(FSNamesystem.auditLog);
+try {
+  cluster.waitClusterUp();
+  auditlog.clearOutput();
+  final FileSystem fs = cluster.getFileSystem();
+  final Path p = new Path("/debug.log");
+  FSDataOutputStream stm = fs.create(p);
+  assertTrue(auditlog.getOutput().contains("create(options"));

Review comment:
   can we test a few more cases? e.g., with or without flag, and check 
whether the content of the flag is actually logged correctly.
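   A rough sketch of those extra cases (editor illustration reusing the cluster and 
auditlog setup from the test quoted above; whether the success path logs the 
options and whether OVERWRITE is spelled out in the message are assumptions, not 
guarantees of the patch):

```java
// Hypothetical additions to testAuditLoggerWithCreateFlag (sketch only).
final Path p2 = new Path("/overwrite.log");

// Create with overwrite=true: the OVERWRITE flag should show up in the options.
auditlog.clearOutput();
try (FSDataOutputStream out = fs.create(p2, true)) {
  // The create call itself is what gets audited; nothing needs to be written.
}
assertTrue(auditlog.getOutput().contains("create(options"));
assertTrue(auditlog.getOutput().contains("OVERWRITE"));

// Append is a different op, so no create options should be logged for it.
auditlog.clearOutput();
try (FSDataOutputStream out = fs.append(p2)) {
  out.writeBytes("more");
}
assertFalse(auditlog.getOutput().contains("create(options"));
```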





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 484667)
Time Spent: 1h 10m  (was: 1h)

> Add info for create flags in NameNode audit logs
> 
>
> Key: HDFS-15516
> URL: https://issues.apache.org/jira/browse/HDFS-15516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Shashikant Banerjee
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15516.001.patch, HDFS-15516.002.patch, 
> HDFS-15516.003.patch, HDFS-15516.004.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Currently, if a file create happens with flags like overwrite, the audit logs 
> don't seem to contain the info regarding the flags. It would be useful to add 
> info regarding the create options in the audit logs, similar to rename ops. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15543) RBF: Write Should allow, when a subcluster is unavailable for RANDOM mount points with fault Tolerance enabled.

2020-09-15 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196412#comment-17196412
 ] 

Íñigo Goiri commented on HDFS-15543:


I'm surprised that it still applies.
Anyway, to make it a little more readable, let's extract {{new IOException("No 
namespace available.")}} and add a comment (a rough sketch follows below).
We should make the javadoc for invokeOnNs() a little more descriptive about the 
exception; actually, the param is not even documented there.
For the unit test, let's add some more comments.
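A rough sketch of the extraction and the javadoc (editor illustration only; the 
method shape, parameter list and class name are hypothetical placeholders, not the 
actual RBF signature — they only show where the extracted exception and the 
@throws documentation would live):

{code:java}
import java.io.IOException;
import java.util.Set;

class InvokeOnNsSketch {
  /** Raised when a fault-tolerant mount point has no namespace left to try. */
  private static IOException noNamespaceAvailable() {
    return new IOException("No namespace available.");
  }

  /**
   * Picks the namespace the call will be routed to.
   *
   * @param nss candidate namespaces for this mount point.
   * @return the namespace to invoke against.
   * @throws IOException the "No namespace available." exception when
   *         {@code nss} is empty.
   */
  static String invokeOnNs(Set<String> nss) throws IOException {
    if (nss.isEmpty()) {
      // Fail fast with a dedicated, documented exception.
      throw noNamespaceAvailable();
    }
    return nss.iterator().next();
  }
}
{code}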

> RBF: Write Should allow, when a subcluster is unavailable for RANDOM mount 
> points with fault Tolerance enabled. 
> 
>
> Key: HDFS-15543
> URL: https://issues.apache.org/jira/browse/HDFS-15543
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Hemanth Boyina
>Priority: Major
> Attachments: HDFS-15543.001.patch, HDFS-15543.002.patch, 
> HDFS-15543.003.patch, HDFS-15543_testrepro.patch
>
>
> A RANDOM mount point should allow creating new files if one subcluster is 
> down when fault tolerance is enabled, but here it fails.
> MultiDestination_client]# hdfs dfsrouteradmin -ls /test_ec
> *Mount Table Entries:*
> Source Destinations Owner Group Mode Quota/Usage
> /test_ec *hacluster->/tes_ec,hacluster1->/tes_ec* test ficommon rwxr-xr-x 
> [NsQuota: -/-, SsQuota: -/-]
> *File write throws the exception:*
> 2020-08-26 19:13:21,839 WARN hdfs.DataStreamer: Abandoning blk_1073743375_2551
>  2020-08-26 19:13:21,877 WARN hdfs.DataStreamer: Excluding datanode 
> DatanodeInfoWithStorage[DISK]
>  2020-08-26 19:13:21,878 WARN hdfs.DataStreamer: DataStreamer Exception
>  java.io.IOException: Unable to create new block.
>  at 
> org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1758)
>  at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:718)
>  2020-08-26 19:13:21,879 WARN hdfs.DataStreamer: Could not get block 
> locations. Source file "/test_ec/f1._COPYING_" - Aborting...block==null
>  put: Could not get block locations. Source file "/test_ec/f1._COPYING_" - 
> Aborting...block==null



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-09-15 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196410#comment-17196410
 ] 

Chao Sun commented on HDFS-15516:
-

I think this makes sense given that we already record flags for rename. This 
may break existing parsers that assume there is no flag for the create/append op, 
but I'm not sure that should be a reason to block this.

> Add info for create flags in NameNode audit logs
> 
>
> Key: HDFS-15516
> URL: https://issues.apache.org/jira/browse/HDFS-15516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Shashikant Banerjee
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15516.001.patch, HDFS-15516.002.patch, 
> HDFS-15516.003.patch, HDFS-15516.004.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Currently, if a file create happens with flags like overwrite, the audit logs 
> don't seem to contain the info regarding the flags. It would be useful to add 
> info regarding the create options in the audit logs, similar to rename ops. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15579) RBF: The constructor of PathLocation may got some misunderstanding

2020-09-15 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196406#comment-17196406
 ] 

Íñigo Goiri commented on HDFS-15579:


Yes, let's add a javadoc.
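A possible javadoc along those lines (editor sketch; the wording is illustrative 
and the constructor body is the one quoted in the description below):

{code:java}
/**
 * Creates a new PathLocation that keeps the source path, destination order
 * and destination set of {@code other}, but re-orders the destinations so
 * that {@code firstNsId} is resolved first. It does not add or override any
 * destination.
 *
 * @param other PathLocation to copy from.
 * @param firstNsId namespace identifier to prioritize when resolving.
 */
public PathLocation(PathLocation other, String firstNsId) {
  this.sourcePath = other.sourcePath;
  this.destOrder = other.destOrder;
  this.destinations = orderedNamespaces(other.destinations, firstNsId);
}
{code}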

> RBF: The constructor of PathLocation may got some misunderstanding
> --
>
> Key: HDFS-15579
> URL: https://issues.apache.org/jira/browse/HDFS-15579
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Janus Chow
>Priority: Minor
>
> There is a constructor of PathLocation as follows, it's for creating a new 
> PathLocation with a prioritised nsId. 
>  
> {code:java}
> public PathLocation(PathLocation other, String firstNsId) {
>   this.sourcePath = other.sourcePath;
>   this.destOrder = other.destOrder;
>   this.destinations = orderedNamespaces(other.destinations, firstNsId);
> }
> {code}
> When I was reading the code of MultipleDestinationMountTableResolver, I 
> thought this constructor was to create a PathLocation with an override 
> destination. It took me a while before I realized this is a constructor to 
> sort the destinations inside.
> Maybe this constructor could be made clearer about its usage?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15574) Remove unnecessary sort of block list in DirectoryScanner

2020-09-15 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15574:
-
Attachment: HDFS-15574.branch-3.2.002.patch

> Remove unnecessary sort of block list in DirectoryScanner
> -
>
> Key: HDFS-15574
> URL: https://issues.apache.org/jira/browse/HDFS-15574
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15574.001.patch, HDFS-15574.002.patch, 
> HDFS-15574.003.patch, HDFS-15574.branch-3.2.001.patch, 
> HDFS-15574.branch-3.2.002.patch, HDFS-15574.branch-3.3.001.patch
>
>
> These lines of code in DirectoryScanner#scan() obtain a snapshot of the 
> finalized blocks from memory and then sort them under the DN lock. However, 
> the blocks are stored in a sorted structure (FoldedTreeSet), and hence the 
> sort should be unnecessary.
> {code}
>   final List<ReplicaInfo> bl = dataset.getFinalizedBlocks(bpid);
>   Collections.sort(bl); // Sort based on blockId
> {code}
> This Jira removes the sort, and renames the getFinalizedBlocks to 
> getSortedFinalizedBlocks to make the intent of the method more clear.
> Also added a test, just in case the underlying block structure is ever 
> changed to something unsorted.
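For clarity, a before/after sketch of the change described above (editor 
illustration; the list's element type is assumed to be ReplicaInfo, and the new 
method name is taken from the description):

{code:java}
// Before: the snapshot of finalized blocks is re-sorted under the DN lock.
final List<ReplicaInfo> bl = dataset.getFinalizedBlocks(bpid);
Collections.sort(bl); // redundant if the backing structure is already sorted

// After: the renamed accessor makes the sorted guarantee explicit.
final List<ReplicaInfo> sorted = dataset.getSortedFinalizedBlocks(bpid);
{code}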



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15574) Remove unnecessary sort of block list in DirectoryScanner

2020-09-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196286#comment-17196286
 ] 

Hadoop QA commented on HDFS-15574:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
49s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} branch-3.2 passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
49s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
47s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 255 unchanged - 0 fixed = 256 total (was 255) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}200m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/162/artifact/out/Dockerfile
 |
| JIRA Issue | HDFS-15574 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13011537/HDFS-15574.branch-3.2.001.patch
 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux ae94109b593a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | branch-3.2 / 5b14af6 |
| Default Java | Private Build-

[jira] [Updated] (HDFS-15576) Erasure Coding: Add rs and rs-legacy codec test for addPolicies

2020-09-15 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-15576:

Fix Version/s: 3.4.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks for your contribution, [~ferhui]!

> Erasure Coding: Add rs and rs-legacy codec test for addPolicies
> ---
>
> Key: HDFS-15576
> URL: https://issues.apache.org/jira/browse/HDFS-15576
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Fix For: 3.4.0
>
> Attachments: HDFS-15576.001.patch, HDFS-15576.002.patch
>
>
> * Add rs and rs-legacy codec test for  TestErasureCodingCLI
> * Add comments for failed test RS
> * Modify UT, change "RS" to "rs", because "RS" is not supported 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15576) Erasure Coding: Add rs and rs-legacy codec test for addPolicies

2020-09-15 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196282#comment-17196282
 ] 

Takanobu Asanuma commented on HDFS-15576:
-

The failed tests are not related to the patch.

> Erasure Coding: Add rs and rs-legacy codec test for addPolicies
> ---
>
> Key: HDFS-15576
> URL: https://issues.apache.org/jira/browse/HDFS-15576
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-15576.001.patch, HDFS-15576.002.patch
>
>
> * Add rs and rs-legacy codec test for  TestErasureCodingCLI
> * Add comments for failed test RS
> * Modify UT, change "RS" to "rs", because "RS" is not supported 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15578) Fix the rename issues with fallback fs enabled

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15578?focusedWorklogId=484568&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484568
 ]

ASF GitHub Bot logged work on HDFS-15578:
-

Author: ASF GitHub Bot
Created on: 15/Sep/20 15:45
Start Date: 15/Sep/20 15:45
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on pull request #2305:
URL: https://github.com/apache/hadoop/pull/2305#issuecomment-692803460


   >Currently the InodeTree interface audience is Private. I don't believe any 
application holds this class in a separate jar or serializes this class's result to 
a remote service where they would use a different jar. Let me know in case it 
raises any potential compatibility breakage.
   
   Should be ok then, thanx for confirming



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 484568)
Time Spent: 40m  (was: 0.5h)

> Fix the rename issues with fallback fs enabled
> --
>
> Key: HDFS-15578
> URL: https://issues.apache.org/jira/browse/HDFS-15578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs, viewfsOverloadScheme
>Affects Versions: 3.4.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When we enable fallback, rename should succeed if the src.parent or 
> dst.parent is on internalDir.
> {noformat}
> org.apache.hadoop.security.AccessControlException: InternalDir of 
> ViewFileSystem is readonly, operation rename not permitted on path 
> /newFileOnRoot.org.apache.hadoop.security.AccessControlException: InternalDir 
> of ViewFileSystem is readonly, operation rename not permitted on path 
> /newFileOnRoot.
>  at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.readOnlyMountTable(ViewFileSystem.java:95)
>  at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.readOnlyMountTable(ViewFileSystem.java:101)
>  at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.rename(ViewFileSystem.java:683) at 
> org.apache.hadoop.hdfs.ViewDistributedFileSystem.rename(ViewDistributedFileSystem.java:533)
>  at 
> org.apache.hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks.verifyRename(TestViewDistributedFileSystemWithMountLinks.java:114)
>  at 
> org.apache.hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks.testRenameOnInternalDirWithFallback(TestViewDistributedFileSystemWithMountLinks.java:90)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137) at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>  at 
> com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
>  at 
> com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:230)
>  at com.in

[jira] [Commented] (HDFS-15415) Reduce locking in Datanode DirectoryScanner

2020-09-15 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196232#comment-17196232
 ] 

Stephen O'Donnell commented on HDFS-15415:
--

[~weichiu] Thanks for the review. You are correct - getSortedFinalizedBlocks 
already has a readlock, so I can just drop that from the directoryScanner 
completely. Uploaded patch 004 with that change.

> Reduce locking in Datanode DirectoryScanner
> ---
>
> Key: HDFS-15415
> URL: https://issues.apache.org/jira/browse/HDFS-15415
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-15415.001.patch, HDFS-15415.002.patch, 
> HDFS-15415.003.patch, HDFS-15415.004.patch
>
>
> In HDFS-15406, we have a small change to greatly reduce the runtime and 
> locking time of the datanode DirectoryScanner. There may be room for further 
> improvement.
> From the scan step, we have captured a snapshot of what is on disk. After 
> calling `dataset.getFinalizedBlocks(bpid);` we have taken a snapshot of what 
> is in memory. The two snapshots are never 100% in sync as things are always 
> changing as the disk is scanned.
> We are only comparing finalized blocks, so they should not really change:
> * If a block is deleted after our snapshot, our snapshot will not see it and 
> that is OK.
> * A finalized block could be appended. If that happens both the genstamp and 
> length will change, but that should be handled by reconcile when it calls 
> `FSDatasetImpl.checkAndUpdate()`, and there is nothing stopping blocks being 
> appended after they have been scanned from disk, but before they have been 
> compared with memory.
> My suspicion is that we can do all the comparison work outside of the lock 
> and have checkAndUpdate() re-check any differences later under the lock on a 
> block-by-block basis.
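A rough sketch of that idea (editor illustration; the helper compareWithDisk and 
the exact checkAndUpdate signature are hypothetical, not the actual patch):

{code:java}
// Hypothetical outline only.
final List<ReplicaInfo> memBlocks = dataset.getFinalizedBlocks(bpid); // locks internally

// Expensive disk-vs-memory comparison done without holding the dataset lock.
final List<ScanInfo> differences = compareWithDisk(diskReport, memBlocks);

// Each difference is then re-validated block by block under the lock.
for (ScanInfo diff : differences) {
  dataset.checkAndUpdate(bpid, diff); // re-checks the live state before acting
}
{code}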



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15415) Reduce locking in Datanode DirectoryScanner

2020-09-15 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15415:
-
Attachment: HDFS-15415.004.patch

> Reduce locking in Datanode DirectoryScanner
> ---
>
> Key: HDFS-15415
> URL: https://issues.apache.org/jira/browse/HDFS-15415
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-15415.001.patch, HDFS-15415.002.patch, 
> HDFS-15415.003.patch, HDFS-15415.004.patch
>
>
> In HDFS-15406, we have a small change to greatly reduce the runtime and 
> locking time of the datanode DirectoryScanner. There may be room for further 
> improvement.
> From the scan step, we have captured a snapshot of what is on disk. After 
> calling `dataset.getFinalizedBlocks(bpid);` we have taken a snapshot of what 
> is in memory. The two snapshots are never 100% in sync as things are always 
> changing as the disk is scanned.
> We are only comparing finalized blocks, so they should not really change:
> * If a block is deleted after our snapshot, our snapshot will not see it and 
> that is OK.
> * A finalized block could be appended. If that happens both the genstamp and 
> length will change, but that should be handled by reconcile when it calls 
> `FSDatasetImpl.checkAndUpdate()`, and there is nothing stopping blocks being 
> appended after they have been scanned from disk, but before they have been 
> compared with memory.
> My suspicion is that we can do all the comparison work outside of the lock 
> and have checkAndUpdate() re-check any differences later under the lock on a 
> block-by-block basis.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15578) Fix the rename issues with fallback fs enabled

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15578?focusedWorklogId=484557&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484557
 ]

ASF GitHub Bot logged work on HDFS-15578:
-

Author: ASF GitHub Bot
Created on: 15/Sep/20 15:20
Start Date: 15/Sep/20 15:20
Worklog Time Spent: 10m 
  Work Description: umamaheswararao edited a comment on pull request #2305:
URL: https://github.com/apache/hadoop/pull/2305#issuecomment-692781129


   @ayushtkn Thanks a lot for the review!
   
   >Just want to confirm, In rename, do we need to avoid any scenario like 
HDFS-15444? Or that is guarded already.
   
   I actually considered that case: the first resolve is without the last component. 
If that resolve result indicates an InternalDir, then we resolve again, this time 
including the lastComponent (i.e., with children). If it again indicates 
internalDir or lastDirAsLink, then we throw an exception, as we cannot rename to 
any internal dir/link path.
   
   
```java
InodeTree.ResolveResult resDst =
    fsState.resolve(getUriPath(dst), false);

if (resDst.isInternalDir() && fsState.getRootFallbackLink() != null) {
  InodeTree.ResolveResult resDstWithLastComp =
      fsState.resolve(getUriPath(dst), true);
  // resolveLastComponent with true is to check if the target already
  // exists in internalDir/InternalDirLink itself.
  if (resDstWithLastComp.isInternalDir() || resDstWithLastComp
      .isLastInternalDirLink()) {
    throw readOnlyMountTable("rename", dst);
  }
}
```
   
   There is one condition without fallback where we may attempt to rename. We 
probably need an else branch, as we are combining the if check with 
fsState.getRootFallbackLink() != null as well. I will update the patch. Thanks 
   
   >Secondly, In resolve result, can we change the constructor directly?, or do 
we need to have an overloaded constructor to avoid compatibility issues, that 
might have saved couple of changes as well.
   
   Currently the InodeTree interface audience is Private. I don't believe any 
application holds this class in a separate jar or serializes this class's result to 
a remote service where they would use a different jar. Let me know in case it 
raises any potential compatibility breakage.
   @InterfaceAudience.Private
   @InterfaceStability.Unstable
   abstract class InodeTree {



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 484557)
Time Spent: 0.5h  (was: 20m)

> Fix the rename issues with fallback fs enabled
> --
>
> Key: HDFS-15578
> URL: https://issues.apache.org/jira/browse/HDFS-15578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs, viewfsOverloadScheme
>Affects Versions: 3.4.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When we enable fallback, rename should succeed if the src.parent or 
> dst.parent is on internalDir.
> {noformat}
> org.apache.hadoop.security.AccessControlException: InternalDir of 
> ViewFileSystem is readonly, operation rename not permitted on path 
> /newFileOnRoot.org.apache.hadoop.security.AccessControlException: InternalDir 
> of ViewFileSystem is readonly, operation rename not permitted on path 
> /newFileOnRoot.
>  at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.readOnlyMountTable(ViewFileSystem.java:95)
>  at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.readOnlyMountTable(ViewFileSystem.java:101)
>  at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.rename(ViewFileSystem.java:683) at 
> org.apache.hadoop.hdfs.ViewDistributedFileSystem.rename(ViewDistributedFileSystem.java:533)
>  at 
> org.apache.hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks.verifyRename(TestViewDistributedFileSystemWithMountLinks.java:114)
>  at 
> org.apache.hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks.testRenameOnInternalDirWithFallback(TestViewDistributedFileSystemWithMountLinks.java:90)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.i

[jira] [Work logged] (HDFS-15578) Fix the rename issues with fallback fs enabled

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15578?focusedWorklogId=484555&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484555
 ]

ASF GitHub Bot logged work on HDFS-15578:
-

Author: ASF GitHub Bot
Created on: 15/Sep/20 15:17
Start Date: 15/Sep/20 15:17
Worklog Time Spent: 10m 
  Work Description: umamaheswararao edited a comment on pull request #2305:
URL: https://github.com/apache/hadoop/pull/2305#issuecomment-692781129


   @ayushtkn Thanks a lot for the review!
   
   >Just want to confirm, In rename, do we need to avoid any scenario like 
HDFS-15444? Or that is guarded already.
   
   I actually considered that case: the first resolve is without the last component. 
If that resolve result indicates an InternalDir, then we resolve again, this time 
including the lastComponent (i.e., with children). If it again indicates 
internalDir or lastDirAsLink, then we throw an exception, as we cannot rename to 
any internal dir/link path.
   
   
```java
InodeTree.ResolveResult resDst =
    fsState.resolve(getUriPath(dst), false);

if (resDst.isInternalDir() && fsState.getRootFallbackLink() != null) {
  InodeTree.ResolveResult resDstWithLastComp =
      fsState.resolve(getUriPath(dst), true);
  // resolveLastComponent with true is to check if the target already
  // exists in internalDir/InternalDirLink itself.
  if (resDstWithLastComp.isInternalDir() || resDstWithLastComp
      .isLastInternalDirLink()) {
    throw readOnlyMountTable("rename", dst);
  }
}
```
   
   There is one condition without fallback where we may attempt to rename. We 
probably need an else branch, as we are combining the if check with 
fsState.getRootFallbackLink() != null as well. I will update the patch. Thanks 
   
   >Secondly, In resolve result, can we change the constructor directly?, or do 
we need to have an overloaded constructor to avoid compatibility issues, that 
might have saved couple of changes as well.
   Currently the InodeTree interface audience is Private. I don't believe any 
application holds this class in a separate jar or serializes this class's result to 
a remote service where they would use a different jar. Let me know in case it 
raises any potential compatibility breakage.
   @InterfaceAudience.Private
   @InterfaceStability.Unstable
   abstract class InodeTree {



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 484555)
Time Spent: 20m  (was: 10m)

> Fix the rename issues with fallback fs enabled
> --
>
> Key: HDFS-15578
> URL: https://issues.apache.org/jira/browse/HDFS-15578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs, viewfsOverloadScheme
>Affects Versions: 3.4.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When we enable fallback, rename should succeed if the src.parent or 
> dst.parent is on an internalDir.
> {noformat}
> org.apache.hadoop.security.AccessControlException: InternalDir of 
> ViewFileSystem is readonly, operation rename not permitted on path 
> /newFileOnRoot.org.apache.hadoop.security.AccessControlException: InternalDir 
> of ViewFileSystem is readonly, operation rename not permitted on path 
> /newFileOnRoot.
>  at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.readOnlyMountTable(ViewFileSystem.java:95)
>  at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.readOnlyMountTable(ViewFileSystem.java:101)
>  at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.rename(ViewFileSystem.java:683) at 
> org.apache.hadoop.hdfs.ViewDistributedFileSystem.rename(ViewDistributedFileSystem.java:533)
>  at 
> org.apache.hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks.verifyRename(TestViewDistributedFileSystemWithMountLinks.java:114)
>  at 
> org.apache.hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks.testRenameOnInternalDirWithFallback(TestViewDistributedFileSystemWithMountLinks.java:90)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.interna

[jira] [Updated] (HDFS-15539) When disallowing snapshot on a dir, throw exception if its trash root is not empty

2020-09-15 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-15539:
--
Fix Version/s: 3.4.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> When disallowing snapshot on a dir, throw exception if its trash root is not 
> empty
> --
>
> Key: HDFS-15539
> URL: https://issues.apache.org/jira/browse/HDFS-15539
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> When snapshots are disallowed on a dir, {{getTrashRoots()}} won't return the 
> trash root in that dir anymore (if any). The risk is that the trash root will 
> be left there forever.
> We need to throw an exception there and prompt the user to clean up or rename 
> the trash root if it is not empty.
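For context, a minimal client-side sketch of the guard being added (hedged: it 
uses only the public FileSystem/DistributedFileSystem calls getTrashRoot, 
exists, listStatus and disallowSnapshot, and is an illustration, not the 
NameNode-side change itself):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class DisallowSnapshotIfTrashEmpty {
  public static void main(String[] args) throws Exception {
    Path dir = new Path(args[0]);
    Configuration conf = new Configuration();
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(dir.toUri(), conf);
    // Trash root for this directory, e.g. <dir>/.Trash/<user> when
    // snapshot trash is provisioned.
    Path trashRoot = dfs.getTrashRoot(dir);
    if (dfs.exists(trashRoot) && dfs.listStatus(trashRoot).length > 0) {
      // Mirror of the server-side behaviour this JIRA adds: refuse to
      // disallow snapshots while the trash root is not empty.
      throw new IllegalStateException(
          "Trash root " + trashRoot + " is not empty; clean it up first.");
    }
    dfs.disallowSnapshot(dir);
  }
}
{code}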



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15578) Fix the rename issues with fallback fs enabled

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15578?focusedWorklogId=484551&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484551
 ]

ASF GitHub Bot logged work on HDFS-15578:
-

Author: ASF GitHub Bot
Created on: 15/Sep/20 15:12
Start Date: 15/Sep/20 15:12
Worklog Time Spent: 10m 
  Work Description: umamaheswararao commented on pull request #2305:
URL: https://github.com/apache/hadoop/pull/2305#issuecomment-692781129


   @ayushtkn Thanks a lot for the review!
   
   >Just want to confirm: in rename, do we need to avoid any scenario like 
HDFS-15444? Or is that guarded already?
   
   I actually considered that case. The first resolve is done without the last 
component. If that resolve result indicates an InternalDir, we resolve again, 
this time including the last component (i.e. with children). If it again 
indicates an internalDir or lastDirAsLink, we throw an exception, because we 
cannot rename to any internal dir/link path.
   
   
```
InodeTree.ResolveResult resDst =
    fsState.resolve(getUriPath(dst), false);

if (resDst.isInternalDir() && fsState.getRootFallbackLink() != null) {
  InodeTree.ResolveResult resDstWithLastComp =
      fsState.resolve(getUriPath(dst), true);
  // resolveLastComponent with true is to check if the target already
  // exists in internalDir/InternalDirLink itself.
  if (resDstWithLastComp.isInternalDir() || resDstWithLastComp
      .isLastInternalDirLink()) {
    throw readOnlyMountTable("rename", dst);
  }
}
```
   
   There is one condition where, without a fallback, we may still attempt the 
rename. We probably need an else branch, since that check is combined with 
fsState.getRootFallbackLink() != null as well. I will update the patch. Thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 484551)
Remaining Estimate: 0h
Time Spent: 10m

> Fix the rename issues with fallback fs enabled
> --
>
> Key: HDFS-15578
> URL: https://issues.apache.org/jira/browse/HDFS-15578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs, viewfsOverloadScheme
>Affects Versions: 3.4.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When we enable fallback, rename should succeed if the src.parent or 
> dst.parent is on an internalDir.
> {noformat}
> org.apache.hadoop.security.AccessControlException: InternalDir of 
> ViewFileSystem is readonly, operation rename not permitted on path 
> /newFileOnRoot.org.apache.hadoop.security.AccessControlException: InternalDir 
> of ViewFileSystem is readonly, operation rename not permitted on path 
> /newFileOnRoot.
>  at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.readOnlyMountTable(ViewFileSystem.java:95)
>  at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.readOnlyMountTable(ViewFileSystem.java:101)
>  at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.rename(ViewFileSystem.java:683) at 
> org.apache.hadoop.hdfs.ViewDistributedFileSystem.rename(ViewDistributedFileSystem.java:533)
>  at 
> org.apache.hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks.verifyRename(TestViewDistributedFileSystemWithMountLinks.java:114)
>  at 
> org.apache.hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks.testRenameOnInternalDirWithFallback(TestViewDistributedFileSystemWithMountLinks.java:90)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runner

[jira] [Updated] (HDFS-15578) Fix the rename issues with fallback fs enabled

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15578:
--
Labels: pull-request-available  (was: )

> Fix the rename issues with fallback fs enabled
> --
>
> Key: HDFS-15578
> URL: https://issues.apache.org/jira/browse/HDFS-15578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs, viewfsOverloadScheme
>Affects Versions: 3.4.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When we enable fallback, rename should succeed if the src.parent or 
> dst.parent is on an internalDir.
> {noformat}
> org.apache.hadoop.security.AccessControlException: InternalDir of 
> ViewFileSystem is readonly, operation rename not permitted on path 
> /newFileOnRoot.org.apache.hadoop.security.AccessControlException: InternalDir 
> of ViewFileSystem is readonly, operation rename not permitted on path 
> /newFileOnRoot.
>  at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.readOnlyMountTable(ViewFileSystem.java:95)
>  at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.readOnlyMountTable(ViewFileSystem.java:101)
>  at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.rename(ViewFileSystem.java:683) at 
> org.apache.hadoop.hdfs.ViewDistributedFileSystem.rename(ViewDistributedFileSystem.java:533)
>  at 
> org.apache.hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks.verifyRename(TestViewDistributedFileSystemWithMountLinks.java:114)
>  at 
> org.apache.hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks.testRenameOnInternalDirWithFallback(TestViewDistributedFileSystemWithMountLinks.java:90)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137) at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>  at 
> com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
>  at 
> com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:230)
>  at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:58){noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15237) Get checksum of EC file failed, when some block is missing or corrupt

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15237?focusedWorklogId=484552&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484552
 ]

ASF GitHub Bot logged work on HDFS-15237:
-

Author: ASF GitHub Bot
Created on: 15/Sep/20 15:12
Start Date: 15/Sep/20 15:12
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2306:
URL: https://github.com/apache/hadoop/pull/2306#issuecomment-692781731


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  8s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 15s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m  8s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 54s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  9s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  6s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m 15s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  1s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 39s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 8 new + 0 unchanged - 0 fixed = 8 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 55s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 15s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 116m 22s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  asflicense  |   0m 36s |  The patch generated 1 ASF License 
warnings.  |
   |  |   | 205m 16s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2306/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2306 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ffde9fdef168 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f4ed9f3f911 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2306/1/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2306/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://ci-had

[jira] [Work logged] (HDFS-1558) Optimize FSNamesystem.startFileInternal

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-1558?focusedWorklogId=484537&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484537
 ]

ASF GitHub Bot logged work on HDFS-1558:


Author: ASF GitHub Bot
Created on: 15/Sep/20 14:42
Start Date: 15/Sep/20 14:42
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on pull request #2305:
URL: https://github.com/apache/hadoop/pull/2305#issuecomment-692761784


   Thanx @umamaheswararao for the fix, I just had a very quick look.
   Just want to confirm: in rename, do we need to avoid any scenario like 
HDFS-15444? Or is that guarded already?
   Secondly, in ResolveResult, can we change the constructor directly, or do we 
need an overloaded constructor to avoid compatibility issues? That might have 
saved a couple of changes as well.
   
   The PR title has the wrong JIRA id; the last digit got missed. Please correct 
it before pushing.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 484537)
Time Spent: 20m  (was: 10m)

> Optimize FSNamesystem.startFileInternal
> ---
>
> Key: HDFS-1558
> URL: https://issues.apache.org/jira/browse/HDFS-1558
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, performance
>Reporter: Dmytro Molkov
>Assignee: Dmytro Molkov
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently on file creation inside of FSNamesystem.startFileInternal there are 
> three calls to FSDirectory that are essentially the same:
> dir.exists(src)
> dir.isDir(src)
> dir.getFileInode(src)
> All of them have to fetch the inode and then do some processing on it.
> If instead we were to fetch the inode once and then do all of the processing 
> on this INode object, it would save us two trips through the namespace plus 
> two calls to normalizePath, all of which are relatively expensive.
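A hedged sketch of the single-lookup shape being suggested (illustrative only; 
getINode, isDirectory and the INodeFile cast are assumptions about the 
FSDirectory/INode API, not the actual patch):

{code}
// Before (from the description): three separate lookups for the same path.
//   dir.exists(src); dir.isDir(src); dir.getFileInode(src);

// After (illustrative): resolve the path once and reuse the INode.
INode inode = dir.getINode(src);          // one trip through the namespace
boolean exists = (inode != null);
boolean isDir = exists && inode.isDirectory();
INodeFile fileInode = (exists && !isDir) ? (INodeFile) inode : null;
{code}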



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15415) Reduce locking in Datanode DirectoryScanner

2020-09-15 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196185#comment-17196185
 ] 

Wei-Chiu Chuang commented on HDFS-15415:



{code}
  try (AutoCloseableLock lock = dataset.acquireDatasetReadLock()) {
    bl = dataset.getSortedFinalizedBlocks(bpid);
  }
{code}

This lock may not be necessary; FsDatasetImpl#getSortedFinalizedBlocks() already 
takes the same lock.

Other than that, the rest LGTM.
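For illustration, a sketch of the simplification being suggested (assuming, as 
noted above, that getSortedFinalizedBlocks() acquires the dataset lock 
internally):

{code}
// The outer read lock can likely be dropped, since
// FsDatasetImpl#getSortedFinalizedBlocks() takes the same lock itself:
bl = dataset.getSortedFinalizedBlocks(bpid);
{code}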

> Reduce locking in Datanode DirectoryScanner
> ---
>
> Key: HDFS-15415
> URL: https://issues.apache.org/jira/browse/HDFS-15415
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-15415.001.patch, HDFS-15415.002.patch, 
> HDFS-15415.003.patch
>
>
> In HDFS-15406, we have a small change that greatly reduces the runtime and 
> locking time of the datanode DirectoryScanner. There may be room for further 
> improvement.
> From the scan step, we have captured a snapshot of what is on disk. After 
> calling `dataset.getFinalizedBlocks(bpid);` we have taken a snapshot of what 
> is in memory. The two snapshots are never 100% in sync, as things are always 
> changing while the disk is scanned.
> We are only comparing finalized blocks, so they should not really change:
> * If a block is deleted after our snapshot, our snapshot will not see it and 
> that is OK.
> * A finalized block could be appended. If that happens both the genstamp and 
> length will change, but that should be handled by reconcile when it calls 
> `FSDatasetImpl.checkAndUpdate()`, and there is nothing stopping blocks being 
> appended after they have been scanned from disk, but before they have been 
> compared with memory.
> My suspicion is that we can do all the comparison work outside of the lock, 
> with checkAndUpdate() re-checking any differences later under the lock on a 
> block-by-block basis.
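A generic sketch of that "snapshot under the lock, compare outside it" pattern 
(illustrative only; the element types and the compareEntries/checkAndUpdate 
helpers shown here are assumptions, not the actual DirectoryScanner code):

{code}
// 1. Take the in-memory snapshot under a short-lived lock.
final List<ReplicaInfo> memReplicas;
try (AutoCloseableLock lock = dataset.acquireDatasetReadLock()) {
  memReplicas = dataset.getSortedFinalizedBlocks(bpid);
}

// 2. Do the expensive disk-vs-memory comparison outside the lock
//    (compareEntries is a hypothetical helper producing the differences).
List<ScanInfo> diffRecords = compareEntries(diskReport, memReplicas);

// 3. checkAndUpdate() later re-checks each difference under the lock,
//    block by block, so races during the unlocked comparison stay benign.
for (ScanInfo diff : diffRecords) {
  dataset.checkAndUpdate(bpid, diff);
}
{code}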



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15415) Reduce locking in Datanode DirectoryScanner

2020-09-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196162#comment-17196162
 ] 

Hadoop QA commented on HDFS-15415:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 32m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m  
5s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}117m  4s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {c

[jira] [Comment Edited] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-09-15 Thread jianghua zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196150#comment-17196150
 ] 

jianghua zhu edited comment on HDFS-15516 at 9/15/20, 1:47 PM:
---

Hi [~elgoiri], [~hexiaoqiao], I have submitted a new patch file 
(HDFS-15516.004.patch); could you review it?

When creating a file, the added audit log looks like this:
 allowed=true ugi=zhujianghua (auth:SIMPLE) ip=/127.0.0.1 
cmd=create(options=[CREATE, OVERWRITE]) src=/debug.log dst=null 
perm=zhujianghua:supergroup:rw-r--r-- proto=rpc

When appending a file, the added audit log looks like this:
 allowed=true ugi=zhujianghua (auth:SIMPLE) ip=/127.0.0.1 
cmd=append(options=[APPEND]) src=/debug.log dst=null perm=null proto=rpc

When renaming a file, the audit log looks like this:
allowed=true ugi=zhujianghua (auth:SIMPLE) ip=null cmd=rename (options=[NONE]) 
src=/testNamenodeRetryCache/testRename2/src 
dst=/testNamenodeRetryCache/testRename2/target 
perm=zhujianghua:supergroup:rwxrwxrwx proto=null

When creating or appending a file, the format of the added audit log is 
consistent with that of rename.

If you have different ideas, you can share them here.
Thank you very much.

 


was (Author: jianghuazhu):
Hi [~elgoiri] , [~hexiaoqiao] , here I have submitted a new patch file 
(HDFS-15516.004.patch), can you review it?

When creating a file, the added audit log looks like this:
allowed=true ugi=zhujianghua (auth:SIMPLE) ip=/127.0.0.1 
cmd=create(options=[CREATE, OVERWRITE]) src=/debug.log dst=null 
perm=zhujianghua:supergroup:rw-r--r-- proto=rpc

When appending a file, the added audit log looks like this:
allowed=true ugi=zhujianghua (auth:SIMPLE) ip=/127.0.0.1 
cmd=append(options=[APPEND]) src=/debug.log dst=null perm=null proto=rpc

If you have different ideas, you can share them here.
thank you very much.

 

> Add info for create flags in NameNode audit logs
> 
>
> Key: HDFS-15516
> URL: https://issues.apache.org/jira/browse/HDFS-15516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Shashikant Banerjee
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15516.001.patch, HDFS-15516.002.patch, 
> HDFS-15516.003.patch, HDFS-15516.004.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Currently, if a file create happens with flags like overwrite, the audit logs 
> don't seem to contain any info regarding those flags. It would be useful to 
> add info regarding the create options in the audit logs, similar to rename 
> ops.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15576) Erasure Coding: Add rs and rs-legacy codec test for addPolicies

2020-09-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196158#comment-17196158
 ] 

Hadoop QA commented on HDFS-15576:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 29m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
49s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
25s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
45s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
49s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:gr

[jira] [Commented] (HDFS-15574) Remove unnecessary sort of block list in DirectoryScanner

2020-09-15 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196144#comment-17196144
 ] 

Stephen O'Donnell commented on HDFS-15574:
--

I'm not sure - I think I got Jenkins to run with that convention before. However, 
I forgot to click the "submit patch" button. I've just clicked it now, so let's 
give it a bit of time and see if it runs.

> Remove unnecessary sort of block list in DirectoryScanner
> -
>
> Key: HDFS-15574
> URL: https://issues.apache.org/jira/browse/HDFS-15574
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15574.001.patch, HDFS-15574.002.patch, 
> HDFS-15574.003.patch, HDFS-15574.branch-3.2.001.patch, 
> HDFS-15574.branch-3.3.001.patch
>
>
> These lines of code in DirectoryScanner#scan() obtain a snapshot of the 
> finalized blocks from memory and then sort them, under the DN lock. However, 
> the blocks are stored in a sorted structure (FoldedTreeSet), and hence the 
> sort should be unnecessary.
> {code}
>   final List bl = dataset.getFinalizedBlocks(bpid);
>   Collections.sort(bl); // Sort based on blockId
> {code}
> This Jira removes the sort and renames getFinalizedBlocks to 
> getSortedFinalizedBlocks to make the intent of the method clearer.
> Also added a test, just in case the underlying block structure is ever 
> changed to something unsorted.
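A minimal sketch of such a sortedness check (hedged: it only assumes the 
returned replicas expose getBlockId(); it is not the actual test added by the 
patch):

{code}
// Verify the list returned by getSortedFinalizedBlocks() is ordered by blockId.
List<ReplicaInfo> blocks = dataset.getSortedFinalizedBlocks(bpid);
for (int i = 1; i < blocks.size(); i++) {
  assertTrue("Expected blocks sorted by blockId",
      blocks.get(i - 1).getBlockId() <= blocks.get(i).getBlockId());
}
{code}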



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15574) Remove unnecessary sort of block list in DirectoryScanner

2020-09-15 Thread Anonymous (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated HDFS-15574:
-
Status: Patch Available  (was: Reopened)

> Remove unnecessary sort of block list in DirectoryScanner
> -
>
> Key: HDFS-15574
> URL: https://issues.apache.org/jira/browse/HDFS-15574
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15574.001.patch, HDFS-15574.002.patch, 
> HDFS-15574.003.patch, HDFS-15574.branch-3.2.001.patch, 
> HDFS-15574.branch-3.3.001.patch
>
>
> These lines of code in DirectoryScanner#scan() obtain a snapshot of the 
> finalized blocks from memory and then sort them, under the DN lock. However, 
> the blocks are stored in a sorted structure (FoldedTreeSet), and hence the 
> sort should be unnecessary.
> {code}
>   final List bl = dataset.getFinalizedBlocks(bpid);
>   Collections.sort(bl); // Sort based on blockId
> {code}
> This Jira removes the sort and renames getFinalizedBlocks to 
> getSortedFinalizedBlocks to make the intent of the method clearer.
> Also added a test, just in case the underlying block structure is ever 
> changed to something unsorted.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-09-15 Thread jianghua zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196150#comment-17196150
 ] 

jianghua zhu commented on HDFS-15516:
-

Hi [~elgoiri], [~hexiaoqiao], I have submitted a new patch file 
(HDFS-15516.004.patch); could you review it?

When creating a file, the added audit log looks like this:
allowed=true ugi=zhujianghua (auth:SIMPLE) ip=/127.0.0.1 
cmd=create(options=[CREATE, OVERWRITE]) src=/debug.log dst=null 
perm=zhujianghua:supergroup:rw-r--r-- proto=rpc

When appending a file, the added audit log looks like this:
allowed=true ugi=zhujianghua (auth:SIMPLE) ip=/127.0.0.1 
cmd=append(options=[APPEND]) src=/debug.log dst=null perm=null proto=rpc

If you have different ideas, you can share them here.
Thank you very much.

 

> Add info for create flags in NameNode audit logs
> 
>
> Key: HDFS-15516
> URL: https://issues.apache.org/jira/browse/HDFS-15516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Shashikant Banerjee
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15516.001.patch, HDFS-15516.002.patch, 
> HDFS-15516.003.patch, HDFS-15516.004.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Currently, if a file create happens with flags like overwrite, the audit logs 
> don't seem to contain any info regarding those flags. It would be useful to 
> add info regarding the create options in the audit logs, similar to rename 
> ops.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15574) Remove unnecessary sort of block list in DirectoryScanner

2020-09-15 Thread Hemanth Boyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196140#comment-17196140
 ] 

Hemanth Boyina commented on HDFS-15574:
---

I tried to cherry-pick to branch-3.3, but due to conflicts it couldn't be 
backported.

Thanks for submitting patches for branch-3.3 and branch-3.2. I think Jenkins 
didn't trigger the build. [~sodonnell], shouldn't the naming convention be 
something like HDFS-15574-branch-3.3-01.patch?

> Remove unnecessary sort of block list in DirectoryScanner
> -
>
> Key: HDFS-15574
> URL: https://issues.apache.org/jira/browse/HDFS-15574
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15574.001.patch, HDFS-15574.002.patch, 
> HDFS-15574.003.patch, HDFS-15574.branch-3.2.001.patch, 
> HDFS-15574.branch-3.3.001.patch
>
>
> These lines of code in DirectoryScanner#scan() obtain a snapshot of the 
> finalized blocks from memory and then sort them, under the DN lock. However, 
> the blocks are stored in a sorted structure (FoldedTreeSet), and hence the 
> sort should be unnecessary.
> {code}
>   final List bl = dataset.getFinalizedBlocks(bpid);
>   Collections.sort(bl); // Sort based on blockId
> {code}
> This Jira removes the sort and renames getFinalizedBlocks to 
> getSortedFinalizedBlocks to make the intent of the method clearer.
> Also added a test, just in case the underlying block structure is ever 
> changed to something unsorted.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12090) Handling writes from HDFS to Provided storages

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-12090:
--
Labels: pull-request-available  (was: )

> Handling writes from HDFS to Provided storages
> --
>
> Key: HDFS-12090
> URL: https://issues.apache.org/jira/browse/HDFS-12090
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Virajith Jalaparti
>Priority: Major
>  Labels: pull-request-available
> Attachments: External-SyncService-CreateFile.001.png, 
> HDFS-12090-Functional-Specification.001.pdf, 
> HDFS-12090-Functional-Specification.002.pdf, 
> HDFS-12090-Functional-Specification.003.pdf, HDFS-12090-design.001.pdf, 
> HDFS-12090..patch, HDFS-12090.0001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDFS-9806 introduces the concept of {{PROVIDED}} storage, which makes data in 
> external storage systems accessible through HDFS. However, HDFS-9806 is 
> limited to data being read through HDFS. This JIRA will deal with how data 
> can be written to such {{PROVIDED}} storages from HDFS.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-12090) Handling writes from HDFS to Provided storages

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-12090?focusedWorklogId=484466&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484466
 ]

ASF GitHub Bot logged work on HDFS-12090:
-

Author: ASF GitHub Bot
Created on: 15/Sep/20 13:00
Start Date: 15/Sep/20 13:00
Worklog Time Spent: 10m 
  Work Description: qboileau commented on pull request #1726:
URL: https://github.com/apache/hadoop/pull/1726#issuecomment-692697722


   Hi @ehiggs @virajith,  any news on the state of sync service and HDFS-12090?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 484466)
Remaining Estimate: 0h
Time Spent: 10m

> Handling writes from HDFS to Provided storages
> --
>
> Key: HDFS-12090
> URL: https://issues.apache.org/jira/browse/HDFS-12090
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Virajith Jalaparti
>Priority: Major
> Attachments: External-SyncService-CreateFile.001.png, 
> HDFS-12090-Functional-Specification.001.pdf, 
> HDFS-12090-Functional-Specification.002.pdf, 
> HDFS-12090-Functional-Specification.003.pdf, HDFS-12090-design.001.pdf, 
> HDFS-12090..patch, HDFS-12090.0001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDFS-9806 introduces the concept of {{PROVIDED}} storage, which makes data in 
> external storage systems accessible through HDFS. However, HDFS-9806 is 
> limited to data being read through HDFS. This JIRA will deal with how data 
> can be written to such {{PROVIDED}} storages from HDFS.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15516?focusedWorklogId=484461&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484461
 ]

ASF GitHub Bot logged work on HDFS-15516:
-

Author: ASF GitHub Bot
Created on: 15/Sep/20 12:37
Start Date: 15/Sep/20 12:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2281:
URL: https://github.com/apache/hadoop/pull/2281#issuecomment-692686150


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 17s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 19s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 12s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m  7s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 23s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 21s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m 16s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m 12s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m  3s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   4m 20s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 130m 12s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 224m 47s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
   |   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.TestStripedFileAppend |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2281/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2281 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux fdc060869e78 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f4ed9f3f911 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | 

[jira] [Commented] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-09-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196097#comment-17196097
 ] 

Hadoop QA commented on HDFS-15516:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 27m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
58s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings.

[jira] [Commented] (HDFS-15576) Erasure Coding: Add rs and rs-legacy codec test for addPolicies

2020-09-15 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196095#comment-17196095
 ] 

Takanobu Asanuma commented on HDFS-15576:
-

Thanks for updating the patch. +1 on [^HDFS-15576.002.patch], pending Jenkins.

> Erasure Coding: Add rs and rs-legacy codec test for addPolicies
> ---
>
> Key: HDFS-15576
> URL: https://issues.apache.org/jira/browse/HDFS-15576
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-15576.001.patch, HDFS-15576.002.patch
>
>
> * Add rs and rs-legacy codec test for  TestErasureCodingCLI
> * Add comments for failed test RS
> * Modify UT, change "RS" to "rs", because "RS" is not supported 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15237) Get checksum of EC file failed, when some block is missing or corrupt

2020-09-15 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196090#comment-17196090
 ] 

Zhao Yi Ming commented on HDFS-15237:
-

[~zhengchenyu] Thanks! We will port those 2 issues into our env.

> Get checksum of EC file failed, when some block is missing or corrupt
> -
>
> Key: HDFS-15237
> URL: https://issues.apache.org/jira/browse/HDFS-15237
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, hdfs
>Affects Versions: 3.2.1
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.2
>
> Attachments: HDFS-15237.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When we distcp from an ec directory to another one, I found some error like 
> this.
> {code}
> 2020-03-20 20:18:21,366 WARN [main] 
> org.apache.hadoop.hdfs.FileChecksumHelper: src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]2020-03-20
>  20:18:21,366 WARN [main] org.apache.hadoop.hdfs.FileChecksumHelper: 
> src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]java.io.EOFException:
>  Unexpected EOF while trying to read response from server at 
> org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:550)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.tryDatanode(FileChecksumHelper.java:709)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlockGroup(FileChecksumHelper.java:664)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlocks(FileChecksumHelper.java:638)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$FileChecksumComputer.compute(FileChecksumHelper.java:252)
>  at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumInternal(DFSClient.java:1790) 
> at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumWithCombineMode(DFSClient.java:1810)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1691)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1688)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1700)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:138)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:115)
>  at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>  at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:259)
>  at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:220) at 
> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:48) at 
> org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at 
> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) at 
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> {code}
> And Then I found some error in datanode like this
> {code}
> 2020-03-20 20:54:16,573 INFO 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: 
> SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = 
> false
> 2020-03-20 20:54:16,577 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: 
> bd-hadoop-128050.zeus.lianjia.com:9866:DataXceiver error processing 
> BLOCK_GROUP_CHECKSUM operation src: /10.201.1.38:33264 dst: 
> /10.200.128.50:9866
> java.lang.UnsupportedOperationException
>  at java.nio.ByteBuffer.array(ByteBuffer.java:994)
>  at 
> org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedBlockChecksumReconstructor.reconstruct(StripedBlockChecksumReconstructor.java:90)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockChecksumHelper$BlockGroupNonStripedChecksumComputer.recalculateChecksum(BlockChecksumHelper.java:711)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockChecksumHelper$BlockGroupNonStripedChecksumComputer.compute(BlockChecksumHelper.java:489)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.blockGroupChecksum(DataXceiver.java:1047)
>  at 
> org.apa

[jira] [Comment Edited] (HDFS-15237) Get checksum of EC file failed, when some block is missing or corrupt

2020-09-15 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196087#comment-17196087
 ] 

Zhao Yi Ming edited comment on HDFS-15237 at 9/15/20, 11:59 AM:


[~zhengchenyu] Thanks for your patch! It can fix this issue. I also hit this 
problem on a 3.1.1 env.

[~zhengchenyu] [~ayushtkn]   I combined my small changes with zhengchenyu's fix 
and extended the UT; please review it. Thanks!

The root cause is that when ISA-L is enabled, the native rs codec is used for 
decoding, which introduces direct buffers; the Java rs codec does not, and a 
direct buffer does not support _buffer.array()._  I extended the UT to simulate 
both the direct-buffer and non-direct-buffer scenarios.
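
For illustration only (a minimal standalone sketch, not part of the patch or the 
comment above): a heap ByteBuffer is backed by an accessible array, while a 
direct ByteBuffer is not, so calling array() on it throws the same 
UnsupportedOperationException seen in the datanode log above.
{code:java}
import java.nio.ByteBuffer;

// Minimal sketch: why code that assumes buffer.array() works breaks once the
// decode path starts handing out direct buffers (as the native ISA-L coder does).
public class DirectBufferArrayDemo {
  public static void main(String[] args) {
    ByteBuffer heap = ByteBuffer.allocate(16);
    System.out.println(heap.hasArray());    // true: heap buffers are array-backed

    ByteBuffer direct = ByteBuffer.allocateDirect(16);
    System.out.println(direct.hasArray());  // false: no accessible backing array
    try {
      direct.array();                       // throws UnsupportedOperationException
    } catch (UnsupportedOperationException e) {
      System.out.println("direct buffer has no accessible backing array: " + e);
    }
  }
}
{code}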


was (Author: zhaoyim):
[~zhengchenyu] Thanks for your patch! It can fix this issue. I also hit this 
problem on 3.1.1 env.

[~zhengchenyu] [~ayushtkn]   I combined my little changes and extend the UT 
with zhengchenyu fix, please review it. Thanks!

The root cause is when we enabled the ISA-L, it will use the rs native codec 
and decode which will use the direct buffer, but for the rs java , it will not 
use the direct buffer, and the direct buffer is not support the 
_buffer.array()._  I extend the UT and simulate the direct buffer and non 
direct buffer.

> Get checksum of EC file failed, when some block is missing or corrupt
> -
>
> Key: HDFS-15237
> URL: https://issues.apache.org/jira/browse/HDFS-15237
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, hdfs
>Affects Versions: 3.2.1
>Reporter: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.2
>
> Attachments: HDFS-15237.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When we distcp from an ec directory to another one, I found some error like 
> this.
> {code}
> 2020-03-20 20:18:21,366 WARN [main] 
> org.apache.hadoop.hdfs.FileChecksumHelper: src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]2020-03-20
>  20:18:21,366 WARN [main] org.apache.hadoop.hdfs.FileChecksumHelper: 
> src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]java.io.EOFException:
>  Unexpected EOF while trying to read response from server at 
> org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:550)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.tryDatanode(FileChecksumHelper.java:709)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlockGroup(FileChecksumHelper.java:664)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlocks(FileChecksumHelper.java:638)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$FileChecksumComputer.compute(FileChecksumHelper.java:252)
>  at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumInternal(DFSClient.java:1790) 
> at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumWithCombineMode(DFSClient.java:1810)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1691)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1688)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1700)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:138)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:115)
>  at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>  at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:259)
>  at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:220) at 
> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:48) at 
> org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at 
> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) at 
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> {code}
> And Then I found some error in datanode like this
> {code}
> 2020-03-20 2

[jira] [Assigned] (HDFS-15237) Get checksum of EC file failed, when some block is missing or corrupt

2020-09-15 Thread zhengchenyu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengchenyu reassigned HDFS-15237:
--

Assignee: zhengchenyu

> Get checksum of EC file failed, when some block is missing or corrupt
> -
>
> Key: HDFS-15237
> URL: https://issues.apache.org/jira/browse/HDFS-15237
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, hdfs
>Affects Versions: 3.2.1
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.2
>
> Attachments: HDFS-15237.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When we distcp from an ec directory to another one, I found some error like 
> this.
> {code}
> 2020-03-20 20:18:21,366 WARN [main] 
> org.apache.hadoop.hdfs.FileChecksumHelper: src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]2020-03-20
>  20:18:21,366 WARN [main] org.apache.hadoop.hdfs.FileChecksumHelper: 
> src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]java.io.EOFException:
>  Unexpected EOF while trying to read response from server at 
> org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:550)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.tryDatanode(FileChecksumHelper.java:709)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlockGroup(FileChecksumHelper.java:664)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlocks(FileChecksumHelper.java:638)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$FileChecksumComputer.compute(FileChecksumHelper.java:252)
>  at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumInternal(DFSClient.java:1790) 
> at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumWithCombineMode(DFSClient.java:1810)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1691)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1688)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1700)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:138)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:115)
>  at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>  at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:259)
>  at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:220) at 
> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:48) at 
> org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at 
> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) at 
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> {code}
> And Then I found some error in datanode like this
> {code}
> 2020-03-20 20:54:16,573 INFO 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: 
> SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = 
> false
> 2020-03-20 20:54:16,577 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: 
> bd-hadoop-128050.zeus.lianjia.com:9866:DataXceiver error processing 
> BLOCK_GROUP_CHECKSUM operation src: /10.201.1.38:33264 dst: 
> /10.200.128.50:9866
> java.lang.UnsupportedOperationException
>  at java.nio.ByteBuffer.array(ByteBuffer.java:994)
>  at 
> org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedBlockChecksumReconstructor.reconstruct(StripedBlockChecksumReconstructor.java:90)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockChecksumHelper$BlockGroupNonStripedChecksumComputer.recalculateChecksum(BlockChecksumHelper.java:711)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockChecksumHelper$BlockGroupNonStripedChecksumComputer.compute(BlockChecksumHelper.java:489)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.blockGroupChecksum(DataXceiver.java:1047)
>  at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opStripedBlockChecksum(Receiver.java:327)
>  

[jira] [Comment Edited] (HDFS-15237) Get checksum of EC file failed, when some block is missing or corrupt

2020-09-15 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196089#comment-17196089
 ] 

zhengchenyu edited comment on HDFS-15237 at 9/15/20, 11:59 AM:
---

[~zhaoyim] Yeah! This issue solves this problem. I also advise you to use 
HDFS-14754; it will lower the likelihood of hitting this problem.


was (Author: zhengchenyu):
 Yeah! This issue slove this problem. Also I advise you use HDFS-14754 . It 
will lower the possibility of this problem.

> Get checksum of EC file failed, when some block is missing or corrupt
> -
>
> Key: HDFS-15237
> URL: https://issues.apache.org/jira/browse/HDFS-15237
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, hdfs
>Affects Versions: 3.2.1
>Reporter: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.2
>
> Attachments: HDFS-15237.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When we distcp from an ec directory to another one, I found some error like 
> this.
> {code}
> 2020-03-20 20:18:21,366 WARN [main] 
> org.apache.hadoop.hdfs.FileChecksumHelper: src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]2020-03-20
>  20:18:21,366 WARN [main] org.apache.hadoop.hdfs.FileChecksumHelper: 
> src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]java.io.EOFException:
>  Unexpected EOF while trying to read response from server at 
> org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:550)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.tryDatanode(FileChecksumHelper.java:709)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlockGroup(FileChecksumHelper.java:664)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlocks(FileChecksumHelper.java:638)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$FileChecksumComputer.compute(FileChecksumHelper.java:252)
>  at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumInternal(DFSClient.java:1790) 
> at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumWithCombineMode(DFSClient.java:1810)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1691)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1688)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1700)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:138)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:115)
>  at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>  at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:259)
>  at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:220) at 
> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:48) at 
> org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at 
> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) at 
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> {code}
> And Then I found some error in datanode like this
> {code}
> 2020-03-20 20:54:16,573 INFO 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: 
> SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = 
> false
> 2020-03-20 20:54:16,577 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: 
> bd-hadoop-128050.zeus.lianjia.com:9866:DataXceiver error processing 
> BLOCK_GROUP_CHECKSUM operation src: /10.201.1.38:33264 dst: 
> /10.200.128.50:9866
> java.lang.UnsupportedOperationException
>  at java.nio.ByteBuffer.array(ByteBuffer.java:994)
>  at 
> org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedBlockChecksumReconstructor.reconstruct(StripedBlockChecksumReconstructor.java:90)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockChecksumHelper$BlockGroupNonStripedChecksumComputer.recalculateChecksum(BlockChecksumHelper.java:711)
>  at 
> org.apache.hadoop.hd

[jira] [Commented] (HDFS-15237) Get checksum of EC file failed, when some block is missing or corrupt

2020-09-15 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196089#comment-17196089
 ] 

zhengchenyu commented on HDFS-15237:


 Yeah! This issue solves this problem. I also advise you to use HDFS-14754; it 
will lower the likelihood of hitting this problem.

> Get checksum of EC file failed, when some block is missing or corrupt
> -
>
> Key: HDFS-15237
> URL: https://issues.apache.org/jira/browse/HDFS-15237
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, hdfs
>Affects Versions: 3.2.1
>Reporter: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.2
>
> Attachments: HDFS-15237.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When we distcp from an ec directory to another one, I found some error like 
> this.
> {code}
> 2020-03-20 20:18:21,366 WARN [main] 
> org.apache.hadoop.hdfs.FileChecksumHelper: src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]2020-03-20
>  20:18:21,366 WARN [main] org.apache.hadoop.hdfs.FileChecksumHelper: 
> src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]java.io.EOFException:
>  Unexpected EOF while trying to read response from server at 
> org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:550)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.tryDatanode(FileChecksumHelper.java:709)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlockGroup(FileChecksumHelper.java:664)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlocks(FileChecksumHelper.java:638)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$FileChecksumComputer.compute(FileChecksumHelper.java:252)
>  at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumInternal(DFSClient.java:1790) 
> at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumWithCombineMode(DFSClient.java:1810)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1691)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1688)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1700)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:138)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:115)
>  at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>  at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:259)
>  at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:220) at 
> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:48) at 
> org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at 
> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) at 
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> {code}
> And Then I found some error in datanode like this
> {code}
> 2020-03-20 20:54:16,573 INFO 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: 
> SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = 
> false
> 2020-03-20 20:54:16,577 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: 
> bd-hadoop-128050.zeus.lianjia.com:9866:DataXceiver error processing 
> BLOCK_GROUP_CHECKSUM operation src: /10.201.1.38:33264 dst: 
> /10.200.128.50:9866
> java.lang.UnsupportedOperationException
>  at java.nio.ByteBuffer.array(ByteBuffer.java:994)
>  at 
> org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedBlockChecksumReconstructor.reconstruct(StripedBlockChecksumReconstructor.java:90)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockChecksumHelper$BlockGroupNonStripedChecksumComputer.recalculateChecksum(BlockChecksumHelper.java:711)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockChecksumHelper$BlockGroupNonStripedChecksumComputer.compute(BlockChecksumHelper.java:489)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.blockGroupChecksum(DataXceiver.java:104

[jira] [Comment Edited] (HDFS-15237) Get checksum of EC file failed, when some block is missing or corrupt

2020-09-15 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196087#comment-17196087
 ] 

Zhao Yi Ming edited comment on HDFS-15237 at 9/15/20, 11:53 AM:


[~zhengchenyu] Thanks for your patch! It can fix this issue. I also hit this 
problem on a 3.1.1 env.

[~zhengchenyu] [~ayushtkn]   I combined my small changes with zhengchenyu's fix 
and extended the UT; please review it. Thanks!

The root cause is that when ISA-L is enabled, the native rs codec is used for 
decoding, which uses direct buffers; the Java rs codec does not, and a direct 
buffer does not support _buffer.array()._  I extended the UT to simulate the 
direct-buffer and non-direct-buffer cases.


was (Author: zhaoyim):
zhengchenyu  Thanks for your patch! It can fix this issue. I also hit this 
problem on 3.1.1 env.

zhengchenyu Ayush Saxena  I combined my little changes and extend the UT with 
zhengchenyu fix, please review it. Thanks!

The root cause is when we enabled the ISA-L, it will use the rs native codec 
and decode which will use the direct buffer, but for the rs java , it will not 
use the direct buffer, and the direct buffer is not support the 
_buffer.array()._  I extend the UT and simulate the direct buffer and non 
direct buffer.

> Get checksum of EC file failed, when some block is missing or corrupt
> -
>
> Key: HDFS-15237
> URL: https://issues.apache.org/jira/browse/HDFS-15237
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, hdfs
>Affects Versions: 3.2.1
>Reporter: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.2
>
> Attachments: HDFS-15237.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When we distcp from an ec directory to another one, I found some error like 
> this.
> {code}
> 2020-03-20 20:18:21,366 WARN [main] 
> org.apache.hadoop.hdfs.FileChecksumHelper: src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]2020-03-20
>  20:18:21,366 WARN [main] org.apache.hadoop.hdfs.FileChecksumHelper: 
> src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]java.io.EOFException:
>  Unexpected EOF while trying to read response from server at 
> org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:550)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.tryDatanode(FileChecksumHelper.java:709)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlockGroup(FileChecksumHelper.java:664)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlocks(FileChecksumHelper.java:638)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$FileChecksumComputer.compute(FileChecksumHelper.java:252)
>  at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumInternal(DFSClient.java:1790) 
> at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumWithCombineMode(DFSClient.java:1810)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1691)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1688)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1700)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:138)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:115)
>  at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>  at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:259)
>  at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:220) at 
> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:48) at 
> org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at 
> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) at 
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> {code}
> And Then I found some error in datanode like this
> {code}
> 2020-03-20 20:54:16,573 INFO 
> org.apach

[jira] [Commented] (HDFS-15237) Get checksum of EC file failed, when some block is missing or corrupt

2020-09-15 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196087#comment-17196087
 ] 

Zhao Yi Ming commented on HDFS-15237:
-

zhengchenyu, thanks for your patch! It can fix this issue.

zhengchenyu, Ayush Saxena: I combined my small changes with zhengchenyu's fix 
and extended the UT; please review it. Thanks!

The root cause is that when ISA-L is enabled, the native rs codec is used for 
decoding, which uses direct buffers; the Java rs codec does not, and a direct 
buffer does not support _buffer.array()._  I extended the UT to simulate the 
direct-buffer and non-direct-buffer cases.

> Get checksum of EC file failed, when some block is missing or corrupt
> -
>
> Key: HDFS-15237
> URL: https://issues.apache.org/jira/browse/HDFS-15237
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, hdfs
>Affects Versions: 3.2.1
>Reporter: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.2
>
> Attachments: HDFS-15237.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When we distcp from an ec directory to another one, I found some error like 
> this.
> {code}
> 2020-03-20 20:18:21,366 WARN [main] 
> org.apache.hadoop.hdfs.FileChecksumHelper: src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]2020-03-20
>  20:18:21,366 WARN [main] org.apache.hadoop.hdfs.FileChecksumHelper: 
> src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]java.io.EOFException:
>  Unexpected EOF while trying to read response from server at 
> org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:550)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.tryDatanode(FileChecksumHelper.java:709)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlockGroup(FileChecksumHelper.java:664)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlocks(FileChecksumHelper.java:638)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$FileChecksumComputer.compute(FileChecksumHelper.java:252)
>  at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumInternal(DFSClient.java:1790) 
> at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumWithCombineMode(DFSClient.java:1810)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1691)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1688)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1700)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:138)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:115)
>  at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>  at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:259)
>  at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:220) at 
> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:48) at 
> org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at 
> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) at 
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> {code}
> And Then I found some error in datanode like this
> {code}
> 2020-03-20 20:54:16,573 INFO 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: 
> SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = 
> false
> 2020-03-20 20:54:16,577 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: 
> bd-hadoop-128050.zeus.lianjia.com:9866:DataXceiver error processing 
> BLOCK_GROUP_CHECKSUM operation src: /10.201.1.38:33264 dst: 
> /10.200.128.50:9866
> java.lang.UnsupportedOperationException
>  at java.nio.ByteBuffer.array(ByteBuffer.java:994)
>  at 
> org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedBlockChecksumReconstructor.reconstruct(StripedBlockChecksumReconstructor.java:90)
>  at 
> org.apache.

[jira] [Comment Edited] (HDFS-15237) Get checksum of EC file failed, when some block is missing or corrupt

2020-09-15 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196087#comment-17196087
 ] 

Zhao Yi Ming edited comment on HDFS-15237 at 9/15/20, 11:52 AM:


zhengchenyu, thanks for your patch! It can fix this issue. I also hit this 
problem on a 3.1.1 env.

zhengchenyu, Ayush Saxena: I combined my small changes with zhengchenyu's fix 
and extended the UT; please review it. Thanks!

The root cause is that when ISA-L is enabled, the native rs codec is used for 
decoding, which uses direct buffers; the Java rs codec does not, and a direct 
buffer does not support _buffer.array()._  I extended the UT to simulate the 
direct-buffer and non-direct-buffer cases.


was (Author: zhaoyim):
zhengchenyu  Thanks for your patch! It can fix this issue.

zhengchenyu Ayush Saxena  I combined my little changes and extend the UT with 
zhengchenyu fix, please review it. Thanks!

The root cause is when we enabled the ISA-L, it will use the rs native codec 
and decode which will use the direct buffer, but for the rs java , it will not 
use the direct buffer, and the direct buffer is not support the 
_buffer.array()._  I extend the UT and simulate the direct buffer and non 
direct buffer.

> Get checksum of EC file failed, when some block is missing or corrupt
> -
>
> Key: HDFS-15237
> URL: https://issues.apache.org/jira/browse/HDFS-15237
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, hdfs
>Affects Versions: 3.2.1
>Reporter: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.2
>
> Attachments: HDFS-15237.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When we distcp from an ec directory to another one, I found some error like 
> this.
> {code}
> 2020-03-20 20:18:21,366 WARN [main] 
> org.apache.hadoop.hdfs.FileChecksumHelper: src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]2020-03-20
>  20:18:21,366 WARN [main] org.apache.hadoop.hdfs.FileChecksumHelper: 
> src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]java.io.EOFException:
>  Unexpected EOF while trying to read response from server at 
> org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:550)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.tryDatanode(FileChecksumHelper.java:709)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlockGroup(FileChecksumHelper.java:664)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlocks(FileChecksumHelper.java:638)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$FileChecksumComputer.compute(FileChecksumHelper.java:252)
>  at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumInternal(DFSClient.java:1790) 
> at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumWithCombineMode(DFSClient.java:1810)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1691)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1688)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1700)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:138)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:115)
>  at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>  at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:259)
>  at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:220) at 
> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:48) at 
> org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at 
> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) at 
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> {code}
> And Then I found some error in datanode like this
> {code}
> 2020-03-20 20:54:16,573 INFO 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.Sas

[jira] [Updated] (HDFS-15237) Get checksum of EC file failed, when some block is missing or corrupt

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15237:
--
Labels: pull-request-available  (was: )

> Get checksum of EC file failed, when some block is missing or corrupt
> -
>
> Key: HDFS-15237
> URL: https://issues.apache.org/jira/browse/HDFS-15237
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, hdfs
>Affects Versions: 3.2.1
>Reporter: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.2
>
> Attachments: HDFS-15237.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When we distcp from an ec directory to another one, I found some error like 
> this.
> {code}
> 2020-03-20 20:18:21,366 WARN [main] 
> org.apache.hadoop.hdfs.FileChecksumHelper: src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]2020-03-20
>  20:18:21,366 WARN [main] org.apache.hadoop.hdfs.FileChecksumHelper: 
> src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]java.io.EOFException:
>  Unexpected EOF while trying to read response from server at 
> org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:550)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.tryDatanode(FileChecksumHelper.java:709)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlockGroup(FileChecksumHelper.java:664)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlocks(FileChecksumHelper.java:638)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$FileChecksumComputer.compute(FileChecksumHelper.java:252)
>  at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumInternal(DFSClient.java:1790) 
> at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumWithCombineMode(DFSClient.java:1810)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1691)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1688)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1700)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:138)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:115)
>  at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>  at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:259)
>  at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:220) at 
> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:48) at 
> org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at 
> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) at 
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> {code}
> And Then I found some error in datanode like this
> {code}
> 2020-03-20 20:54:16,573 INFO 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: 
> SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = 
> false
> 2020-03-20 20:54:16,577 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: 
> bd-hadoop-128050.zeus.lianjia.com:9866:DataXceiver error processing 
> BLOCK_GROUP_CHECKSUM operation src: /10.201.1.38:33264 dst: 
> /10.200.128.50:9866
> java.lang.UnsupportedOperationException
>  at java.nio.ByteBuffer.array(ByteBuffer.java:994)
>  at 
> org.apache.hadoop.hdfs.server.datanode.erasurecode.StripedBlockChecksumReconstructor.reconstruct(StripedBlockChecksumReconstructor.java:90)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockChecksumHelper$BlockGroupNonStripedChecksumComputer.recalculateChecksum(BlockChecksumHelper.java:711)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockChecksumHelper$BlockGroupNonStripedChecksumComputer.compute(BlockChecksumHelper.java:489)
>  at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.blockGroupChecksum(DataXceiver.java:1047)
>  at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opStripedBlockChecksum(Receiver.java:327)
>  at 
> org.apache.h

[jira] [Work logged] (HDFS-15237) Get checksum of EC file failed, when some block is missing or corrupt

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15237?focusedWorklogId=48&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-48
 ]

ASF GitHub Bot logged work on HDFS-15237:
-

Author: ASF GitHub Bot
Created on: 15/Sep/20 11:46
Start Date: 15/Sep/20 11:46
Worklog Time Spent: 10m 
  Work Description: zhaoyim opened a new pull request #2306:
URL: https://github.com/apache/hadoop/pull/2306


   … or corrupt
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 48)
Remaining Estimate: 0h
Time Spent: 10m

> Get checksum of EC file failed, when some block is missing or corrupt
> -
>
> Key: HDFS-15237
> URL: https://issues.apache.org/jira/browse/HDFS-15237
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, hdfs
>Affects Versions: 3.2.1
>Reporter: zhengchenyu
>Priority: Major
> Fix For: 3.2.2
>
> Attachments: HDFS-15237.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When we distcp from an ec directory to another one, I found some error like 
> this.
> {code}
> 2020-03-20 20:18:21,366 WARN [main] 
> org.apache.hadoop.hdfs.FileChecksumHelper: src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]2020-03-20
>  20:18:21,366 WARN [main] org.apache.hadoop.hdfs.FileChecksumHelper: 
> src=/EC/6-3//000325_0, 
> datanodes[6]=DatanodeInfoWithStorage[10.200.128.40:9866,DS-65ac4407-9d33-4c59-8f72-dd1d80d26d9f,DISK]java.io.EOFException:
>  Unexpected EOF while trying to read response from server at 
> org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:550)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.tryDatanode(FileChecksumHelper.java:709)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlockGroup(FileChecksumHelper.java:664)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlocks(FileChecksumHelper.java:638)
>  at 
> org.apache.hadoop.hdfs.FileChecksumHelper$FileChecksumComputer.compute(FileChecksumHelper.java:252)
>  at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumInternal(DFSClient.java:1790) 
> at 
> org.apache.hadoop.hdfs.DFSClient.getFileChecksumWithCombineMode(DFSClient.java:1810)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1691)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$33.doCall(DistributedFileSystem.java:1688)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1700)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:138)
>  at 
> org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:115)
>  at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
>  at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:259)
>  at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:220) at 
> org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:48) at 
> org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at 
> org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799) at 
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> {code}
> And Then I found some error in datanode like this
> {code}
> 2020-03-20 20:54:16,573 INFO 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient: 
> SASL encryption trust check: localHostTrus

[jira] [Created] (HDFS-15579) RBF: The constructor of PathLocation may cause some misunderstanding

2020-09-15 Thread Janus Chow (Jira)
Janus Chow created HDFS-15579:
-

 Summary: RBF: The constructor of PathLocation may cause some 
misunderstanding
 Key: HDFS-15579
 URL: https://issues.apache.org/jira/browse/HDFS-15579
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: rbf
Reporter: Janus Chow


There is a constructor of PathLocation, shown below, for creating a new 
PathLocation with a prioritised nsId. 

 
{code:java}
public PathLocation(PathLocation other, String firstNsId) {
  this.sourcePath = other.sourcePath;
  this.destOrder = other.destOrder;
  this.destinations = orderedNamespaces(other.destinations, firstNsId);
}
{code}
When I was reading the code of MultipleDestinationMountTableResolver, I thought 
this constructor created a PathLocation with an overridden destination. It took 
me a while to realize that it only reorders the destinations inside.

Maybe this constructor could be made clearer about its usage?
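
For illustration only (a hypothetical sketch, not something proposed in the 
issue), a named static factory inside PathLocation would make the intent 
explicit:
{code:java}
// Hypothetical sketch: the name states that the copy only reorders the existing
// destinations so that firstNsId is tried first; nothing is overridden.
public static PathLocation prioritizeDestination(
    PathLocation other, String firstNsId) {
  return new PathLocation(other, firstNsId);
}
{code}
A call site would then read as PathLocation.prioritizeDestination(loc, nsId), 
which is harder to misread as a destination override.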

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15415) Reduce locking in Datanode DirectoryScanner

2020-09-15 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15415:
-
Attachment: HDFS-15415.003.patch

> Reduce locking in Datanode DirectoryScanner
> ---
>
> Key: HDFS-15415
> URL: https://issues.apache.org/jira/browse/HDFS-15415
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-15415.001.patch, HDFS-15415.002.patch, 
> HDFS-15415.003.patch
>
>
> In HDFS-15406, we have a small change to greatly reduce the runtime and 
> locking time of the datanode DirectoryScanner. There may be room for further 
> improvement.
> From the scan step, we have captured a snapshot of what is on disk. After 
> calling `dataset.getFinalizedBlocks(bpid);` we have taken a snapshot of what 
> is in memory. The two snapshots are never 100% in sync, as things are always 
> changing while the disk is scanned.
> We are only comparing finalized blocks, so they should not really change:
> * If a block is deleted after our snapshot, our snapshot will not see it and 
> that is OK.
> * A finalized block could be appended. If that happens both the genstamp and 
> length will change, but that should be handled by reconcile when it calls 
> `FSDatasetImpl.checkAndUpdate()`, and there is nothing stopping blocks being 
> appended after they have been scanned from disk, but before they have been 
> compared with memory.
> My suspicion is that we can do all the comparison work outside of the lock 
> and let checkAndUpdate() re-check any differences later, under the lock, on a 
> block-by-block basis.
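
For illustration only (a toy sketch, not taken from the attached patches; all 
names here are stand-ins): copy the two views under a short lock, diff them with 
no lock held, then re-check each suspect briefly under the lock.
{code:java}
import java.util.*;

// Toy sketch only: diff two snapshots without holding the shared lock,
// then re-check each suspected difference briefly under it.
public class CompareOutsideLockDemo {
  private final Object lock = new Object();
  private final Map<Long, Long> memory = new HashMap<>(); // blockId -> length (in-memory view)
  private final Map<Long, Long> disk = new HashMap<>();   // blockId -> length (on-disk view)

  List<Long> findSuspects() {
    Map<Long, Long> memSnap;
    Map<Long, Long> diskSnap;
    synchronized (lock) {                 // short lock: just copy the views
      memSnap = new HashMap<>(memory);
      diskSnap = new HashMap<>(disk);
    }
    List<Long> suspects = new ArrayList<>();
    for (Long id : diskSnap.keySet()) {   // comparison runs with no lock held
      if (!Objects.equals(memSnap.get(id), diskSnap.get(id))) {
        suspects.add(id);
      }
    }
    return suspects;
  }

  void reconcile(List<Long> suspects) {
    for (Long id : suspects) {
      synchronized (lock) {               // per-block re-check under the lock
        if (!Objects.equals(memory.get(id), disk.get(id))) {
          memory.put(id, disk.get(id));   // stand-in for FsDatasetImpl.checkAndUpdate()
        }
      }
    }
  }
}
{code}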



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15576) Erasure Coding: Add rs and rs-legacy codec test for addPolicies

2020-09-15 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196006#comment-17196006
 ] 

Fei Hui commented on HDFS-15576:


Changed the caption and uploaded the v002 patch.


> Erasure Coding: Add rs and rs-legacy codec test for addPolicies
> ---
>
> Key: HDFS-15576
> URL: https://issues.apache.org/jira/browse/HDFS-15576
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-15576.001.patch, HDFS-15576.002.patch
>
>
> * Add rs and rs-legacy codec test for  TestErasureCodingCLI
> * Add comments for failed test RS
> * Modify UT, change "RS" to "rs", because "RS" is not supported 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15576) Erasure Coding: Add rs and rs-legacy codec test for addPolicies

2020-09-15 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-15576:
---
Description: 
* Add rs and rs-legacy codec test for  TestErasureCodingCLI
* Add comments for failed test RS
* Modify UT, change "RS" to "rs", because "RS" is not supported 


  was:
* Add rs and rs-legacy codec test for  TestErasureCodingCLI
* Add comments for failed test 
* Modify UT, change "RS" to "rs", because "RS" is not supported 



> Erasure Coding: Add rs and rs-legacy codec test for addPolicies
> ---
>
> Key: HDFS-15576
> URL: https://issues.apache.org/jira/browse/HDFS-15576
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-15576.001.patch, HDFS-15576.002.patch
>
>
> * Add rs and rs-legacy codec test for  TestErasureCodingCLI
> * Add comments for failed test RS
> * Modify UT, change "RS" to "rs", because "RS" is not supported 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15576) Erasure Coding: Add rs and rs-legacy codec test for addPolicies

2020-09-15 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-15576:
---
Attachment: HDFS-15576.002.patch

> Erasure Coding: Add rs and rs-legacy codec test for addPolicies
> ---
>
> Key: HDFS-15576
> URL: https://issues.apache.org/jira/browse/HDFS-15576
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-15576.001.patch, HDFS-15576.002.patch
>
>
> * Add rs and rs-legacy codec test for  TestErasureCodingCLI
> * Add comments for failed test RS
> * Modify UT, change "RS" to "rs", because "RS" is not supported 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15576) Erasure Coding: Add rs and rs-legacy codec test for addPolicies

2020-09-15 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-15576:
---
Description: 
* Add rs and rs-legacy codec test for  TestErasureCodingCLI
* Add comments for failed test 
* Modify UT, change "RS" to "rs", because "RS" is not supported 


  was:
* Add UT TestECAdmin#testAddPolicies
* Modify UT, change "RS" to "rs", because "RS" is not supported 



> Erasure Coding: Add rs and rs-legacy codec test for addPolicies
> ---
>
> Key: HDFS-15576
> URL: https://issues.apache.org/jira/browse/HDFS-15576
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-15576.001.patch
>
>
> * Add rs and rs-legacy codec test for  TestErasureCodingCLI
> * Add comments for failed test 
> * Modify UT, change "RS" to "rs", because "RS" is not supported 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15576) Erasure Coding: Add rs and rs-legacy codec test for addPolicies

2020-09-15 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-15576:
---
Summary: Erasure Coding: Add rs and rs-legacy codec test for addPolicies  
(was: Erasure Coding: Add rs & rs-legacy codec test for addPolicies)

> Erasure Coding: Add rs and rs-legacy codec test for addPolicies
> ---
>
> Key: HDFS-15576
> URL: https://issues.apache.org/jira/browse/HDFS-15576
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-15576.001.patch
>
>
> * Add UT TestECAdmin#testAddPolicies
> * Modify UT, change "RS" to "rs", because "RS" is not supported 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15576) Erasure Coding: Add rs & rs-legacy codec test for addPolicies

2020-09-15 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-15576:
---
Summary: Erasure Coding: Add rs & rs-legacy codec test for addPolicies  
(was: Erasure Coding: Add test addPolicies to ECAdmin)

> Erasure Coding: Add rs & rs-legacy codec test for addPolicies
> -
>
> Key: HDFS-15576
> URL: https://issues.apache.org/jira/browse/HDFS-15576
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-15576.001.patch
>
>
> * Add UT TestECAdmin#testAddPolicies
> * Modify UT, change "RS" to "rs", because "RS" is not supported 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15516) Add info for create flags in NameNode audit logs

2020-09-15 Thread jianghua zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jianghua zhu updated HDFS-15516:

Attachment: HDFS-15516.004.patch

> Add info for create flags in NameNode audit logs
> 
>
> Key: HDFS-15516
> URL: https://issues.apache.org/jira/browse/HDFS-15516
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Shashikant Banerjee
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15516.001.patch, HDFS-15516.002.patch, 
> HDFS-15516.003.patch, HDFS-15516.004.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently, if a file create happens with flags like overwrite, the audit logs 
> don't seem to contain any info regarding those flags. It would be useful to 
> add info regarding the create options to the audit logs, similar to the 
> Rename ops.
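
For illustration only: a NameNode audit line today records cmd=create without the 
CreateFlag values, and the idea is to append them the way rename already logs its 
options. One hypothetical shape for such an enriched line (the options=[...] field 
is made up here; the actual format is whatever the patch defines):

{noformat}
INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/10.0.0.1 cmd=create options=[CREATE, OVERWRITE] src=/tmp/file1 dst=null perm=hdfs:supergroup:rw-r--r-- proto=rpc
{noformat}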



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15576) Erasure Coding: Add test addPolicies to ECAdmin

2020-09-15 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17195973#comment-17195973
 ] 

Fei Hui commented on HDFS-15576:


[~tasanuma] Thanks a lot, got it!
I will try it with your suggestions.
In addition, I want to add comments for the code below from test_ec_policies.xml; 
it is only used for the failed test, because I mistakenly referenced it :(
{quote}
RS
{quote}

> Erasure Coding: Add test addPolicies to ECAdmin
> ---
>
> Key: HDFS-15576
> URL: https://issues.apache.org/jira/browse/HDFS-15576
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-15576.001.patch
>
>
> * Add UT TestECAdmin#testAddPolicies
> * Modify UT, change "RS" to "rs", because "RS" is not supported 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15574) Remove unnecessary sort of block list in DirectoryScanner

2020-09-15 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15574:
-
Attachment: HDFS-15574.branch-3.2.001.patch

> Remove unnecessary sort of block list in DirectoryScanner
> -
>
> Key: HDFS-15574
> URL: https://issues.apache.org/jira/browse/HDFS-15574
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15574.001.patch, HDFS-15574.002.patch, 
> HDFS-15574.003.patch, HDFS-15574.branch-3.2.001.patch, 
> HDFS-15574.branch-3.3.001.patch
>
>
> These lines of code in DirectoryScanner#scan() obtain a snapshot of the 
> finalized blocks from memory and then sort them, under the DN lock. However, 
> the blocks are stored in a sorted structure (FoldedTreeSet), and hence the 
> sort should be unnecessary.
> {code}
>   final List bl = dataset.getFinalizedBlocks(bpid);
>   Collections.sort(bl); // Sort based on blockId
> {code}
> This Jira removes the sort, and renames the getFinalizedBlocks to 
> getSortedFinalizedBlocks to make the intent of the method more clear.
> Also added a test, just in case the underlying block structure is ever 
> changed to something unsorted.
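
A minimal sketch (not the patch's actual test) of the kind of guard described 
above, assuming the renamed getSortedFinalizedBlocks accessor and JUnit on the 
classpath:

{code:java}
// Illustrative sketch only: assert that a snapshot of finalized blocks is already
// ordered by block id, so the Collections.sort() removed by this Jira stays
// unnecessary even if the underlying block structure is ever changed.
import java.util.List;
import org.apache.hadoop.hdfs.protocol.Block;
import static org.junit.Assert.assertTrue;

public class SortedBlocksCheck {
  static void assertSortedByBlockId(List<? extends Block> blocks) {
    for (int i = 1; i < blocks.size(); i++) {
      assertTrue("finalized blocks should be ordered by blockId",
          blocks.get(i - 1).getBlockId() <= blocks.get(i).getBlockId());
    }
  }
}
{code}

In a DataNode test this would be called with something like 
assertSortedByBlockId(dataset.getSortedFinalizedBlocks(bpid)), where dataset and 
bpid come from the surrounding test fixture.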



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15574) Remove unnecessary sort of block list in DirectoryScanner

2020-09-15 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15574:
-
Attachment: HDFS-15574.branch-3.3.001.patch

> Remove unnecessary sort of block list in DirectoryScanner
> -
>
> Key: HDFS-15574
> URL: https://issues.apache.org/jira/browse/HDFS-15574
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15574.001.patch, HDFS-15574.002.patch, 
> HDFS-15574.003.patch, HDFS-15574.branch-3.3.001.patch
>
>
> These lines of code in DirectoryScanner#scan() obtain a snapshot of the 
> finalized blocks from memory and then sort them, under the DN lock. However, 
> the blocks are stored in a sorted structure (FoldedTreeSet), and hence the 
> sort should be unnecessary.
> {code}
>   final List bl = dataset.getFinalizedBlocks(bpid);
>   Collections.sort(bl); // Sort based on blockId
> {code}
> This Jira removes the sort, and renames the getFinalizedBlocks to 
> getSortedFinalizedBlocks to make the intent of the method more clear.
> Also added a test, just in case the underlying block structure is ever 
> changed to something unsorted.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-15574) Remove unnecessary sort of block list in DirectoryScanner

2020-09-15 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell reopened HDFS-15574:
--

Reopening to backport to earlier branches.

> Remove unnecessary sort of block list in DirectoryScanner
> -
>
> Key: HDFS-15574
> URL: https://issues.apache.org/jira/browse/HDFS-15574
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15574.001.patch, HDFS-15574.002.patch, 
> HDFS-15574.003.patch
>
>
> These lines of code in DirectoryScanner#scan() obtain a snapshot of the 
> finalized blocks from memory and then sort them, under the DN lock. However, 
> the blocks are stored in a sorted structure (FoldedTreeSet), and hence the 
> sort should be unnecessary.
> {code}
>   final List bl = dataset.getFinalizedBlocks(bpid);
>   Collections.sort(bl); // Sort based on blockId
> {code}
> This Jira removes the sort, and renames the getFinalizedBlocks to 
> getSortedFinalizedBlocks to make the intent of the method more clear.
> Also added a test, just in case the underlying block structure is ever 
> changed to something unsorted.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15574) Remove unnecessary sort of block list in DirectoryScanner

2020-09-15 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17195940#comment-17195940
 ] 

Stephen O'Donnell commented on HDFS-15574:
--

[~hemanthboyina] Thanks for committing this. I think we should commit this down 
to the earlier 3.x branches too. At Cloudera, as customers upgrade to the Hadoop 
3.x branches, we are seeing more clusters with locking problems around the block 
scanner.

There is a minor conflict cherry-picking this to branch-3.3 and earlier, so I 
will submit a new patch for branch-3.3 and probably another one for 3.2 and 3.1 
if there are further conflicts.

> Remove unnecessary sort of block list in DirectoryScanner
> -
>
> Key: HDFS-15574
> URL: https://issues.apache.org/jira/browse/HDFS-15574
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15574.001.patch, HDFS-15574.002.patch, 
> HDFS-15574.003.patch
>
>
> These lines of code in DirectoryScanner#scan() obtain a snapshot of the 
> finalized blocks from memory and then sort them, under the DN lock. However, 
> the blocks are stored in a sorted structure (FoldedTreeSet), and hence the 
> sort should be unnecessary.
> {code}
>   final List bl = dataset.getFinalizedBlocks(bpid);
>   Collections.sort(bl); // Sort based on blockId
> {code}
> This Jira removes the sort, and renames the getFinalizedBlocks to 
> getSortedFinalizedBlocks to make the intent of the method more clear.
> Also added a test, just in case the underlying block structure is ever 
> changed to something unsorted.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15576) Erasure Coding: Add test addPolicies to ECAdmin

2020-09-15 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17195936#comment-17195936
 ] 

Takanobu Asanuma commented on HDFS-15576:
-

Thanks for your patch, [~ferhui]. Thanks for pinging me, [~aajisaka].

About {{TestECAdmin#testAddPolicies()}}, we already have similar tests in 
TestErasureCodingCLI, which is the end-to-end test for ec commands.
How about adding the schemas of RS (k=12, m=4) and RS-legacy (k=12, m=4) to the 
test_ec_policies.xml that TestErasureCodingCLI is using?
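
For illustration, addPolicies takes a policy file via hdfs ec -addPolicies 
-policyFile <file>. A minimal sketch of what policy-file entries for those two 
schemas could look like, modeled on the user_ec_policies.xml.template format (the 
schema ids and cellsize below are made up, and note the codec names have to be the 
lower-case "rs" / "rs-legacy"):

{code:xml}
<?xml version="1.0"?>
<!-- Illustrative sketch only; ids and cellsize are examples. -->
<configuration>
<layoutversion>1</layoutversion>
<schemas>
  <schema id="RSk12m4">
    <codec>rs</codec>
    <k>12</k>
    <m>4</m>
  </schema>
  <schema id="RSLEGACYk12m4">
    <codec>rs-legacy</codec>
    <k>12</k>
    <m>4</m>
  </schema>
</schemas>
<policies>
  <policy>
    <schema>RSk12m4</schema>
    <cellsize>1048576</cellsize>
  </policy>
  <policy>
    <schema>RSLEGACYk12m4</schema>
    <cellsize>1048576</cellsize>
  </policy>
</policies>
</configuration>
{code}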

> Erasure Coding: Add test addPolicies to ECAdmin
> ---
>
> Key: HDFS-15576
> URL: https://issues.apache.org/jira/browse/HDFS-15576
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-15576.001.patch
>
>
> * Add UT TestECAdmin#testAddPolicies
> * Modify UT, change "RS" to "rs", because "RS" is not supported 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15578) Fix the rename issues with fallback fs enabled

2020-09-15 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-15578:
---
Status: Patch Available  (was: Open)

> Fix the rename issues with fallback fs enabled
> --
>
> Key: HDFS-15578
> URL: https://issues.apache.org/jira/browse/HDFS-15578
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs, viewfsOverloadScheme
>Affects Versions: 3.4.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>
> When we enable fallback, rename should succeed if the src.parent or 
> dst.parent is on an internalDir.
> {noformat}
> org.apache.hadoop.security.AccessControlException: InternalDir of 
> ViewFileSystem is readonly, operation rename not permitted on path 
> /newFileOnRoot.org.apache.hadoop.security.AccessControlException: InternalDir 
> of ViewFileSystem is readonly, operation rename not permitted on path 
> /newFileOnRoot.
>  at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.readOnlyMountTable(ViewFileSystem.java:95)
>  at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.readOnlyMountTable(ViewFileSystem.java:101)
>  at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.rename(ViewFileSystem.java:683) at 
> org.apache.hadoop.hdfs.ViewDistributedFileSystem.rename(ViewDistributedFileSystem.java:533)
>  at 
> org.apache.hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks.verifyRename(TestViewDistributedFileSystemWithMountLinks.java:114)
>  at 
> org.apache.hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks.testRenameOnInternalDirWithFallback(TestViewDistributedFileSystemWithMountLinks.java:90)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137) at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>  at 
> com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
>  at 
> com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:230)
>  at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:58){noformat}
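
A minimal sketch of the scenario (the mount-table name, paths and target URIs 
below are made up for illustration): with a fallback link configured, a rename 
whose source or destination parent resolves to an internal mount-table directory 
currently fails as above, and with this fix it is expected to be delegated to the 
fallback file system instead.

{code:java}
// Illustrative sketch only; names, paths and URIs are hypothetical.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FallbackRenameSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // One regular mount link plus a fallback link (keys follow the
    // fs.viewfs.mounttable.<name>.link./<path> and ...linkFallback convention).
    conf.set("fs.viewfs.mounttable.cluster.link./data", "hdfs://nn1/data");
    conf.set("fs.viewfs.mounttable.cluster.linkFallback", "hdfs://nn1/");

    FileSystem viewFs = FileSystem.get(URI.create("viewfs://cluster/"), conf);
    // "/newFileOnRoot" is not under any explicit link, so its parent is the
    // internal root dir; before this fix the rename throws
    // AccessControlException ("InternalDir of ViewFileSystem is readonly"),
    // after the fix it should succeed via the fallback fs.
    viewFs.rename(new Path("/newFileOnRoot"), new Path("/newFileOnRootDst"));
  }
}
{code}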



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-1558) Optimize FSNamesystem.startFileInternal

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-1558?focusedWorklogId=484331&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-484331
 ]

ASF GitHub Bot logged work on HDFS-1558:


Author: ASF GitHub Bot
Created on: 15/Sep/20 07:01
Start Date: 15/Sep/20 07:01
Worklog Time Spent: 10m 
  Work Description: umamaheswararao opened a new pull request #2305:
URL: https://github.com/apache/hadoop/pull/2305


   https://issues.apache.org/jira/browse/HDFS-15578



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 484331)
Remaining Estimate: 0h
Time Spent: 10m

> Optimize FSNamesystem.startFileInternal
> ---
>
> Key: HDFS-1558
> URL: https://issues.apache.org/jira/browse/HDFS-1558
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, performance
>Reporter: Dmytro Molkov
>Assignee: Dmytro Molkov
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, on file creation inside FSNamesystem.startFileInternal, there are 
> three calls to FSDirectory that are essentially the same:
> dir.exists(src)
> dir.isDir(src)
> dir.getFileInode(src)
> All of them have to fetch the inode and then do some processing on it.
> If instead we were to fetch the inode once and then do all of the processing 
> on this INode object, it would save us two trips through the namespace plus 
> two calls to normalizePath, all of which are relatively expensive.
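
A minimal sketch of the consolidation being described (the single-resolution 
accessor and helper names below are assumed for illustration, not necessarily the 
exact FSDirectory API):

{code:java}
// Illustrative sketch only: resolve the inode once and derive the three answers
// from the resolved object, instead of three separate namespace lookups.
//
// Before: each call walks the path again.
//   boolean exists = dir.exists(src);
//   boolean isDir  = dir.isDir(src);
//   INodeFile file = dir.getFileInode(src);
//
// After: one lookup, then cheap local checks (accessor name assumed).
INode inode = dir.getINode(src);
boolean exists = inode != null;
boolean isDir = exists && inode.isDirectory();
INodeFile file = (exists && !isDir && inode instanceof INodeFile)
    ? (INodeFile) inode : null;
{code}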



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-1558) Optimize FSNamesystem.startFileInternal

2020-09-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-1558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-1558:
-
Labels: pull-request-available  (was: )

> Optimize FSNamesystem.startFileInternal
> ---
>
> Key: HDFS-1558
> URL: https://issues.apache.org/jira/browse/HDFS-1558
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, performance
>Reporter: Dmytro Molkov
>Assignee: Dmytro Molkov
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, on file creation inside FSNamesystem.startFileInternal, there are 
> three calls to FSDirectory that are essentially the same:
> dir.exists(src)
> dir.isDir(src)
> dir.getFileInode(src)
> All of them have to fetch the inode and then do some processing on it.
> If instead we were to fetch the inode once and then do all of the processing 
> on this INode object, it would save us two trips through the namespace plus 
> two calls to normalizePath, all of which are relatively expensive.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org