[jira] [Created] (HDFS-16694) Fix missing package-info in hadoop-hdfs module.

2022-07-27 Thread fanshilun (Jira)
fanshilun created HDFS-16694:


 Summary: Fix missing package-info in hadoop-hdfs module.
 Key: HDFS-16694
 URL: https://issues.apache.org/jira/browse/HDFS-16694
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.4.0, 3.3.4
Reporter: fanshilun
Assignee: fanshilun
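For reference, a package-info.java file has roughly the following shape (the package name and javadoc text below are illustrative only, not taken from hadoop-hdfs; each package missing one would get its own description and, typically, Hadoop's audience annotations):

{code:java}
/**
 * Illustrative package-level javadoc only. The package name below is
 * hypothetical; the real files would document the actual hadoop-hdfs
 * packages and usually carry @InterfaceAudience annotations.
 */
package org.apache.hadoop.hdfs.example;
{code}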






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-16695) Improve Code With Lambda in hadoop-hdfs module

2022-07-27 Thread ZanderXu (Jira)
ZanderXu created HDFS-16695:
---

 Summary: Improve Code With Lambda in hadoop-hdfs module
 Key: HDFS-16695
 URL: https://issues.apache.org/jira/browse/HDFS-16695
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: ZanderXu
Assignee: ZanderXu


Improve Code with Lambda in hadoop-hdfs module. 

For example:
Current logic:
{code:java}
public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
    long fromTxnId, int maxTransactions) {
  return parallelExecutor.submit(
      new Callable<GetJournaledEditsResponseProto>() {
        @Override
        public GetJournaledEditsResponseProto call() throws IOException {
          return getProxy().getJournaledEdits(journalId, nameServiceId,
              fromTxnId, maxTransactions);
        }
      });
}
{code}

Improved Code with Lambda:
{code:java}
public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
    long fromTxnId, int maxTransactions) {
  return parallelExecutor.submit(() -> getProxy().getJournaledEdits(
      journalId, nameServiceId, fromTxnId, maxTransactions));
}
{code}








[jira] [Created] (HDFS-16696) NameNode supports a new MsyncRPCServer to reduce the latency of msync() rpc

2022-07-27 Thread ZanderXu (Jira)
ZanderXu created HDFS-16696:
---

 Summary: NameNode supports a new MsyncRPCServer to reduce the 
latency of msync() rpc
 Key: HDFS-16696
 URL: https://issues.apache.org/jira/browse/HDFS-16696
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: ZanderXu
Assignee: ZanderXu


HDFS-12943 introduced Consistent Reads from Standby Node. It uses the msync 
mechanism to guarantee consistency, so the latency of the msync() RPC is very 
important, especially for end users who need to call msync() before every read.

Unfortunately, the NameNode handles msync() RPCs the same way as other RPCs: 
they must be enqueued, wait, and then be handled. So an msync() can be blocked 
behind other RPCs such as setQuota, rename, delete, etc.

We therefore need a new mechanism to guarantee the latency of the msync() RPC.
For example:
* The NameNode could support a new MsyncRPCServer that handles msync() RPCs separately.
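The queueing effect described above can be sketched with two thread pools. This is an illustrative simulation only, not NameNode code; all names here (MsyncRpcSimulation, sharedPool, msyncPool, slowRpc) are hypothetical. A single-threaded shared pool stands in for the common RPC handler queue, and a second pool stands in for the proposed dedicated msync handler:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Illustrative sketch only: shows why a dedicated handler pool for
 * msync()-style calls avoids queueing behind slow write RPCs.
 */
public class MsyncRpcSimulation {

  /** Returns {msync latency behind a slow RPC, latency on a dedicated pool}, in ms. */
  public static long[] measure() {
    ExecutorService sharedPool = Executors.newSingleThreadExecutor();
    ExecutorService msyncPool = Executors.newSingleThreadExecutor();
    try {
      // A slow "rename" RPC occupies the single shared handler thread.
      sharedPool.submit(MsyncRpcSimulation::slowRpc);
      long start = System.nanoTime();
      sharedPool.submit(() -> { }).get();   // msync waits in the same queue
      long sharedMs = (System.nanoTime() - start) / 1_000_000;

      sharedPool.submit(MsyncRpcSimulation::slowRpc);  // shared handler busy again
      start = System.nanoTime();
      msyncPool.submit(() -> { }).get();    // msync has its own handler
      long dedicatedMs = (System.nanoTime() - start) / 1_000_000;
      return new long[] {sharedMs, dedicatedMs};
    } catch (Exception e) {
      throw new RuntimeException(e);
    } finally {
      sharedPool.shutdownNow();
      msyncPool.shutdownNow();
    }
  }

  private static void slowRpc() {
    try {
      Thread.sleep(300);  // stands in for a slow setQuota/rename/delete
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }

  public static void main(String[] args) {
    long[] t = measure();
    System.out.println("behind slow RPC: " + t[0] + " ms, dedicated pool: " + t[1] + " ms");
  }
}
```

The real change would of course involve a separate RPC server or call queue rather than executors, but the latency separation is the same idea.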






Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2022-07-27 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.mapreduce.lib.input.TestLineRecordReader 
   hadoop.mapred.TestLineRecordReader 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/artifact/out/diff-compile-javac-root.txt
  [508K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/artifact/out/diff-checkstyle-root.txt
  [14M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/artifact/out/patch-mvnsite-root.txt
  [588K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/artifact/out/diff-patch-shellcheck.txt
  [72K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/artifact/out/patch-javadoc-root.txt
  [40K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [244K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [432K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [128K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt
  [28K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/735/artifact/out/patch-unit-hadoop-tools_hadoop-resourceestimator.txt
  [16K]

   asflicense:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86

[jira] [Created] (HDFS-16697) Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always prevent safe mode from being turned off

2022-07-27 Thread Jingxuan Fu (Jira)
Jingxuan Fu created HDFS-16697:
--

 Summary: Randomly setting 
“dfs.namenode.resource.checked.volumes.minimum” will always prevent safe mode 
from being turned off
 Key: HDFS-16697
 URL: https://issues.apache.org/jira/browse/HDFS-16697
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.1.3
Reporter: Jingxuan Fu
Assignee: Jingxuan Fu
 Fix For: 3.1.3


 
{code:xml}
<property>
  <name>dfs.namenode.resource.checked.volumes.minimum</name>
  <value>1</value>
  <description>
    The minimum number of redundant NameNode storage volumes required.
  </description>
</property>
{code}
 

We found that when the value of "dfs.namenode.resource.checked.volumes.minimum" 
is set greater than the total number of storage volumes in the NameNode, safe 
mode can never be turned off. While in safe mode, the file system only accepts 
read requests and rejects delete, modify, and other change requests, so its 
functionality is greatly limited.

The default value of the configuration item is 1; we set it to 2 as an example. 
After starting HDFS, the logs and the client throw the following messages.

 
{code:java}
2022-07-27 17:37:31,772 WARN 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: NameNode low on available 
disk space. Already in safe mode.
2022-07-27 17:37:31,772 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe 
mode is ON.
Resources are low on NN. Please add or free up more resourcesthen turn off safe 
mode manually. NOTE:  If you turn off safe mode before adding resources, the NN 
will immediately return to safe mode. Use "hdfs dfsadmin -safemode leave" to 
turn safe mode off.
{code}
 

 
{code:java}
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create 
directory /hdfsapi/test. Name node is in safe mode.
Resources are low on NN. Please add or free up more resourcesthen turn off safe 
mode manually. NOTE:  If you turn off safe mode before adding resources, the NN 
will immediately return to safe mode. Use "hdfs dfsadmin -safemode leave" to 
turn safe mode off. NamenodeHostName:192.168.1.167
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1468)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1455)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3174)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1145)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:714)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
        at java.base/java.security.AccessController.doPrivileged(Native Method)
        at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916){code}
 

According to the prompt, one would believe there is not enough resource space 
to meet the conditions for leaving safe mode, but after adding or freeing up 
more resources and lowering the resource threshold 
"dfs.namenode.resource.du.reserved", the NameNode still fails to leave safe 
mode and throws the same message.

From the source code, the NameNode enters safe mode when the number of storage 
volumes with redundant space is less than the minimum set by 
"dfs.namenode.resource.checked.volumes.minimum". After debugging, *we found 
that the current NameNode storage volumes have plenty of space, but because the 
total number of NameNode storage volumes is less than the configured value, the 
number of volumes with redundant space must also be less than the configured 
value, so the NameNode always stays in safe mode.*

In summary, the configuration item lacks a validity check and an associated 
exception handling mechanism, which makes it impossible to find the root cause 
when a misconfiguration occurs.

The solution I propose is to use Preconditions.checkArgument() to validate the 
configuration value and emit a clear message when it is greater than the number 
of NameNode storage volumes, so that the misconfiguration cannot affect the 
subsequent operation of the program.
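The proposed check could look roughly like the sketch below. This is hypothetical: the real fix would live inside the NameNode startup path (e.g. the resource checker) and would likely use Guava's Preconditions.checkArgument; the class and method names here are illustrative only.

```java
/**
 * Hypothetical sketch of the proposed startup validation for
 * dfs.namenode.resource.checked.volumes.minimum. Not actual NameNode code.
 */
public class CheckedVolumesValidator {

  /**
   * Rejects a minimum-checked-volumes value that can never be satisfied
   * because it exceeds the number of configured NameNode storage volumes.
   */
  public static void validate(int checkedVolumesMinimum, int totalVolumes) {
    if (checkedVolumesMinimum > totalVolumes) {
      throw new IllegalArgumentException(
          "dfs.namenode.resource.checked.volumes.minimum ("
              + checkedVolumesMinimum + ") exceeds the number of NameNode "
              + "storage volumes (" + totalVolumes + "); safe mode could "
              + "never be turned off with this setting.");
    }
  }

  public static void main(String[] args) {
    validate(1, 2);      // default value against two volumes: fine
    try {
      validate(2, 1);    // misconfiguration: fails fast at startup
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
```

Failing fast with an explicit message points the operator directly at the misconfigured key instead of leaving the NameNode stuck in safe mode with a misleading "resources are low" prompt.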





[jira] [Created] (HDFS-16698) Add a metric to sense possible MaxDirectoryItemsExceededException in time.

2022-07-27 Thread ZanderXu (Jira)
ZanderXu created HDFS-16698:
---

 Summary: Add a metric to sense possible 
MaxDirectoryItemsExceededException in time.
 Key: HDFS-16698
 URL: https://issues.apache.org/jira/browse/HDFS-16698
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: ZanderXu
Assignee: ZanderXu


In our prod environment, we occasionally encounter job failures caused by 
MaxDirectoryItemsExceededException.
{code:java}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException):
 The directory item limit of /user/XXX/.sparkStaging is exceeded: limit=1048576 
items=1048576
{code}

To avoid this, we propose adding a metric that can sense an imminent 
MaxDirectoryItemsExceededException, so that we can act in time to avoid job 
failures.
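Such a metric could be a gauge tracking how close the fullest directory is to the dfs.namenode.fs-limits.max-directory-items limit. The sketch below is illustrative only; the names are hypothetical, not actual FSDirectory or metrics2 code:

```java
/**
 * Illustrative sketch: a gauge tracking the fullest directory's child count
 * relative to the configured max-directory-items limit, so monitoring can
 * alert before MaxDirectoryItemsExceededException is thrown.
 */
public class DirectoryItemsGauge {
  private final long maxDirItems;
  private long highestChildCount;

  public DirectoryItemsGauge(long maxDirItems) {
    this.maxDirItems = maxDirItems;
  }

  /** Called whenever a directory's child count is observed (e.g. on create/mkdir). */
  public void recordChildCount(long childCount) {
    if (childCount > highestChildCount) {
      highestChildCount = childCount;
    }
  }

  /** Percentage of the limit used by the fullest directory seen so far. */
  public int fullestDirectoryPercent() {
    return (int) (100L * highestChildCount / maxDirItems);
  }

  /** True once any directory crosses the alerting threshold. */
  public boolean nearLimit(int thresholdPercent) {
    return fullestDirectoryPercent() >= thresholdPercent;
  }

  public static void main(String[] args) {
    DirectoryItemsGauge gauge = new DirectoryItemsGauge(1048576);  // default limit
    gauge.recordChildCount(1000000);
    System.out.println("fullest directory at "
        + gauge.fullestDirectoryPercent() + "% of limit");
  }
}
```

An alert at, say, 90% of the limit would give operators time to clean up or re-shard a directory such as /user/XXX/.sparkStaging before jobs start failing.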






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2022-07-27 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/934/

[Jul 26, 2022, 3:51:37 AM] (noreply) YARN-11210. Fix YARN RMAdminCLI retry 
logic for non-retryable kerbero… (#4563)
[Jul 26, 2022, 7:41:22 PM] (noreply) HADOOP-17461. Collect thread-level 
IOStatistics. (#4352)




-1 overall


The following subsystems voted -1:
blanks pathlen xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/934/artifact/out/results-compile-cc-root.txt
 [96K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/934/artifact/out/results-compile-javac-root.txt
 [540K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/934/artifact/out/blanks-eol.txt
 [14M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/934/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/934/artifact/out/results-checkstyle-root.txt
 [14M]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/934/artifact/out/results-pathlen.txt
 [16K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/934/artifact/out/results-pylint.txt
 [20K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/934/artifact/out/results-shellcheck.txt
 [28K]

   xml:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/934/artifact/out/xml.txt
 [24K]

   javadoc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/934/artifact/out/results-javadoc-javadoc-root.txt
 [400K]

Powered by Apache Yetus 0.14.0-SNAPSHOT   https://yetus.apache.org


[jira] [Resolved] (HDFS-16660) Improve Code With Lambda in IPCLoggerChannel class

2022-07-27 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-16660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HDFS-16660.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Improve Code With Lambda in IPCLoggerChannel class
> --
>
> Key: HDFS-16660
> URL: https://issues.apache.org/jira/browse/HDFS-16660
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Improve Code With Lambda in IPCLoggerChannel class






Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2022-07-27 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/340/

[Jul 25, 2022, 9:39:25 AM] (noreply) HDFS-16655. OIV: print out erasure coding 
policy name in oiv Delimited output (#4541). Contributed by Max Xie.
[Jul 25, 2022, 5:05:45 PM] (noreply) YARN-11161. Support getAttributesToNodes, 
getClusterNodeAttributes, getNodesToAttributes API's for Federation (#4610)
[Jul 25, 2022, 5:25:38 PM] (noreply) HDFS-16681. Do not pass GCC flags for MSVC 
in libhdfspp (#4615)
[Jul 25, 2022, 6:38:59 PM] (noreply) MAPREDUCE-7372 MapReduce set permission 
too late in copyJar method (#4026). Contributed by Zhang Dongsheng.
[Jul 25, 2022, 6:55:40 PM] (noreply) YARN-10883. [Router] Router Audit Log Add 
Client IP Address. (#4426)
[Jul 25, 2022, 8:30:00 PM] (noreply) HDFS-16533. COMPOSITE_CRC failed between 
replicated file and striped file due to invalid requested length. (#4155)
[Jul 26, 2022, 3:51:37 AM] (noreply) YARN-11210. Fix YARN RMAdminCLI retry 
logic for non-retryable kerbero… (#4563)
[Jul 26, 2022, 7:41:22 PM] (noreply) HADOOP-17461. Collect thread-level 
IOStatistics. (#4352)




-1 overall


The following subsystems voted -1:
blanks pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Redundant nullcheck of oldLock, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory))
 Redundant null check at DataStorage.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory))
 Redundant null check at DataStorage.java:[line 695] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:[line 138] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:[line 75] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:[line 85] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$$PmemMappedRegion,,
 long, FileInputStream, FileChannel, String) Redundant null check at 
NativePmemMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$$PmemMappedRegion,,
 long, FileInputStream, FileChannel, String) Redundant null check at 
NativePmemMappableBlockLoader.

[jira] [Resolved] (HDFS-16619) Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Import.

2022-07-27 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-16619.

Resolution: Fixed

> Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Import.
> ---
>
> Key: HDFS-16619
> URL: https://issues.apache.org/jira/browse/HDFS-16619
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Fix HttpHeaders.Values And HttpHeaders.Names 
> Deprecated.png
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> HttpHeaders.Values and HttpHeaders.Names are deprecated; use 
> HttpHeaderValues and HttpHeaderNames instead.
> HttpHeaders.Names
> Deprecated. 
> Use HttpHeaderNames instead. Standard HTTP header names.
> {code:java}
> /** @deprecated */
> @Deprecated
> public static final class Names {
>   public static final String ACCEPT = "Accept";
>   public static final String ACCEPT_CHARSET = "Accept-Charset";
>   public static final String ACCEPT_ENCODING = "Accept-Encoding";
>   public static final String ACCEPT_LANGUAGE = "Accept-Language";
>   public static final String ACCEPT_RANGES = "Accept-Ranges";
>   public static final String ACCEPT_PATCH = "Accept-Patch";
>   public static final String ACCESS_CONTROL_ALLOW_CREDENTIALS = 
> "Access-Control-Allow-Credentials";
>   public static final String ACCESS_CONTROL_ALLOW_HEADERS = 
> "Access-Control-Allow-Headers"; {code}
> HttpHeaders.Values
> Deprecated. 
> Use HttpHeaderValues instead. Standard HTTP header values.
> {code:java}
> /** @deprecated */
> @Deprecated
> public static final class Values {
>   public static final String APPLICATION_JSON = "application/json";
>   public static final String APPLICATION_X_WWW_FORM_URLENCODED = 
> "application/x-www-form-urlencoded";
>   public static final String BASE64 = "base64";
>   public static final String BINARY = "binary";
>   public static final String BOUNDARY = "boundary";
>   public static final String BYTES = "bytes";
>   public static final String CHARSET = "charset";
>   public static final String CHUNKED = "chunked";
>   public static final String CLOSE = "close"; {code}


