[jira] [Resolved] (HDFS-16173) Improve CopyCommands#Put#executor queue configurability

2021-08-26 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-16173.

Resolution: Fixed

> Improve CopyCommands#Put#executor queue configurability
> ---
>
> Key: HDFS-16173
> URL: https://issues.apache.org/jira/browse/HDFS-16173
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2, 3.2.4
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> In CopyCommands#Put, the executor queue size is a fixed value, 1024.
> We should make it configurable, because usage environments differ.
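
A minimal sketch of the kind of change being described, assuming a hypothetical
configuration key "fs.put.queue.size" (the key name and default chosen by the
actual patch may differ):

{code:java}
// Hypothetical sketch: build the upload executor with a configurable queue
// capacity instead of the hard-coded 1024. The key name is illustrative only.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;

public class PutExecutorSketch {
  public static ThreadPoolExecutor newUploadExecutor(Configuration conf, int numThreads) {
    // "fs.put.queue.size" is a placeholder, not necessarily the key added by the patch.
    int queueSize = conf.getInt("fs.put.queue.size", 1024);
    return new ThreadPoolExecutor(numThreads, numThreads, 1, TimeUnit.SECONDS,
        new ArrayBlockingQueue<>(queueSize));
  }
}
{code}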



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-16188) Router to support resolving monitored namenodes with DNS

2021-08-26 Thread Leon Gao (Jira)
Leon Gao created HDFS-16188:
---

 Summary: Router to support resolving monitored namenodes with DNS
 Key: HDFS-16188
 URL: https://issues.apache.org/jira/browse/HDFS-16188
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: rbf
 Environment: We can use a DNS round-robin record to configure the list of 
monitored namenodes, so we don't have to reconfigure everything when a namenode 
hostname changes. For example, in a containerized environment the hostnames of 
namenodes/observers can change fairly often (see the sketch below).
Reporter: Leon Gao
Assignee: Leon Gao
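
Illustrative sketch only (not from this JIRA): resolving a round-robin DNS
record into the individual namenode hosts a router would monitor. The record
name and port below are placeholders.

{code:java}
// Resolve a round-robin DNS record into one host:port entry per namenode/observer.
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.List;

public class DnsNamenodeResolver {
  public static List<String> resolveMonitoredNamenodes(String dnsName, int rpcPort)
      throws UnknownHostException {
    List<String> addrs = new ArrayList<>();
    // A round-robin A record returns one address per namenode/observer host.
    for (InetAddress addr : InetAddress.getAllByName(dnsName)) {
      addrs.add(addr.getHostName() + ":" + rpcPort);
    }
    return addrs;
  }
}
{code}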






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-16187) SnapshotDiff behaviour with Xattrs and Acls is not consistent across NN restarts with checkpointing

2021-08-26 Thread Shashikant Banerjee (Jira)
Shashikant Banerjee created HDFS-16187:
--

 Summary: SnapshotDiff behaviour with Xattrs and Acls is not 
consistent across NN restarts with checkpointing
 Key: HDFS-16187
 URL: https://issues.apache.org/jira/browse/HDFS-16187
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots
Reporter: Srinivasu Majeti
Assignee: Shashikant Banerjee


The test below shows that the snapshot diff is not consistent with Xattrs 
(here the encryption zone sets the Xattr) across NN restarts with a 
checkpointed FsImage.
{code:java}
@Test
public void testEncryptionZonesWithSnapshots() throws Exception {
  final Path snapshottable = new Path("/zones");
  fsWrapper.mkdir(snapshottable, FsPermission.getDirDefault(),
  true);
  dfsAdmin.allowSnapshot(snapshottable);
  dfsAdmin.createEncryptionZone(snapshottable, TEST_KEY, NO_TRASH);
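  // Creating the encryption zone sets an xattr on /zones before snap1 is taken.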
  fs.createSnapshot(snapshottable, "snap1");
  SnapshotDiffReport report =
  fs.getSnapshotDiffReport(snapshottable, "snap1", "");
  Assert.assertEquals(0, report.getDiffList().size());
  report =
  fs.getSnapshotDiffReport(snapshottable, "snap1", "");
  System.out.println(report);
  Assert.assertEquals(0, report.getDiffList().size());
  fs.setSafeMode(SafeModeAction.SAFEMODE_ENTER);
  fs.saveNamespace();
  fs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);
  cluster.restartNameNode(true);
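  // After the NN reloads from the checkpointed fsimage, the same diff
  // unexpectedly reports a modification ("M .", see the output below),
  // so the assertion below fails.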
  report =
  fs.getSnapshotDiffReport(snapshottable, "snap1", "");
  Assert.assertEquals(0, report.getDiffList().size());
}{code}
{code:java}
Pre Restart:
Difference between snapshot snap1 and current directory under directory /zones:

Post Restart:
Difference between snapshot snap1 and current directory under directory /zones:
M .{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2021-08-26 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil 
   hadoop.hdfs.TestDecommission 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.mapreduce.lib.input.TestLineRecordReader 
   hadoop.mapred.TestLineRecordReader 
   hadoop.tools.TestDistCpSystem 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
   hadoop.yarn.sls.TestSLSRunner 
   
hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/artifact/out/diff-compile-javac-root.txt
  [496K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/artifact/out/diff-checkstyle-root.txt
  [14M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/artifact/out/patch-mvnsite-root.txt
  [612K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/artifact/out/diff-patch-pylint.txt
  [48K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/artifact/out/diff-patch-shelldocs.txt
  [48K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/artifact/out/patch-javadoc-root.txt
  [64K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [232K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [432K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [40K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/402/artifact/out/patch-unit-hadoop-tools_hadoop-resourceestimator.txt
  [16K]
   

[jira] [Created] (HDFS-16186) Optimize the Datanode logic for kicking out a bad hard disk

2021-08-26 Thread yanbin.zhang (Jira)
yanbin.zhang created HDFS-16186:
---

 Summary: Optimize the Datanode logic for kicking out a bad hard disk
 Key: HDFS-16186
 URL: https://issues.apache.org/jira/browse/HDFS-16186
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.1.2
 Environment: In the Hadoop cluster, a hard disk on one Datanode had a 
problem, but the Datanode did not kick out the disk in time, causing the 
Datanode to become a slow node (see the sketch after the log below).
Reporter: yanbin.zhang


2021-08-24 08:56:10,456 WARN datanode.DataNode 
(BlockSender.java:readChecksum(681)) - Could not read or failed to verify 
checksum for data at offset 113115136 for block 
BP-1801371083-x.x.x.x-1603704063698:blk_5635828768_4563943709
java.io.IOException: Input/output error
 at java.io.FileInputStream.readBytes(Native Method)
 at java.io.FileInputStream.read(FileInputStream.java:255)
 at 
org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileInputStream.read(FileIoProvider.java:876)
 at java.io.FilterInputStream.read(FilterInputStream.java:133)
 at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
 at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
 at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
 at java.io.DataInputStream.read(DataInputStream.java:149)
 at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:210)
 at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaInputStreams.readChecksumFully(ReplicaInputStreams.java:90)
 at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.readChecksum(BlockSender.java:679)
 at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:588)
 at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:803)
 at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:750)
 at 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.scanBlock(VolumeScanner.java:448)
 at 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:558)
 at 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:633)
2021-08-24 08:56:11,121 WARN datanode.VolumeScanner 
(VolumeScanner.java:handle(292)) - Reporting bad 
BP-1801371083-x.x.x.x-1603704063698:blk_5635828768_4563943709 on 
/data11/hdfs/data
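
A hypothetical sketch of the kind of per-volume error tracking being asked
for, not the DataNode's actual volume-failure handling; the class name and
threshold are illustrative only.

{code:java}
// Hypothetical sketch: count I/O errors per volume and flag the volume for
// removal once a threshold is crossed. Not the real DataNode code path.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class VolumeErrorTracker {
  private final Map<String, AtomicInteger> ioErrorsPerVolume = new ConcurrentHashMap<>();
  private final int maxErrorsBeforeEviction;

  public VolumeErrorTracker(int maxErrorsBeforeEviction) {
    this.maxErrorsBeforeEviction = maxErrorsBeforeEviction;
  }

  // Record an I/O error on a volume, e.g. when a checksum read fails as in the
  // log above; returns true once the volume should be kicked out.
  public boolean recordError(String volumePath) {
    int errors = ioErrorsPerVolume
        .computeIfAbsent(volumePath, v -> new AtomicInteger())
        .incrementAndGet();
    return errors >= maxErrorsBeforeEviction;
  }
}
{code}

In the incident above, the failing volume is /data11/hdfs/data, so a call such
as recordError("/data11/hdfs/data") would sit where the checksum failure is
logged.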



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org