[jira] [Created] (HDFS-15923) RBF: Authentication failed when rename across sub clusters

2021-03-25 Thread zhuobin zheng (Jira)
zhuobin zheng created HDFS-15923:


 Summary: RBF: Authentication failed when rename across sub clusters
 Key: HDFS-15923
 URL: https://issues.apache.org/jira/browse/HDFS-15923
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: rbf
Reporter: zhuobin zheng


Renaming across sub-clusters in an RBF + Kerberos environment fails with 
authentication errors at the following two points:
 # saving the job object to the journal, and
 # the pre-check that fetches the status of the src file.

So we need to create the DistcpProcedure and TrashProcedure, and submit the 
job, inside a doAs of the proxy UGI.

The patch wraps the methods above in a proxy-UGI doAs, and it works.

One strange thing remains that this patch does not solve:

the Router submits the DistCp job with its own UGI, not the user's UGI or the 
proxy UGI. This may grant the DistCp job excessive permissions.
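As a rough sketch of the doAs pattern the patch applies (in Hadoop the real entry point is UserGroupInformation.createProxyUser(...).doAs(...); the identity handling below is a simplified JDK-only stand-in, and the names are illustrative, not the patch's actual code):

```java
import java.security.PrivilegedExceptionAction;

// Minimal sketch of the proxy-user doAs pattern. A real Hadoop UGI swaps the
// Kerberos credentials on the current Subject before running the action;
// this stand-in only records which identity the work runs as.
public class ProxyDoAsSketch {

    static <T> T doAs(String proxyUser, PrivilegedExceptionAction<T> action)
            throws Exception {
        // In Hadoop this would be:
        //   UserGroupInformation proxyUgi =
        //       UserGroupInformation.createProxyUser(proxyUser, loginUgi);
        //   return proxyUgi.doAs(action);
        System.out.println("running as " + proxyUser);
        return action.run();
    }

    public static void main(String[] args) throws Exception {
        // Create the DistcpProcedure/TrashProcedure and submit the job inside
        // doAs, so RPCs to the sub-clusters authenticate as the proxied user
        // rather than as the Router's own principal.
        String result = doAs("alice", () -> "balance job submitted");
        System.out.println(result);
    }
}
```

The point is that every RPC issued inside the action picks up the proxied identity, which is exactly what the journal write and the pre-check above are missing.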


First error: saving the job object to the journal.
{code:java}
2021-03-23 14:01:16,233 WARN org.apache.hadoop.ipc.Client: Exception 
encountered while connecting to the server 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)]
at 
com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at 
org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:408)
at 
org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:622)
at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:413)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:822)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:818)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:818)
at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:413)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1636)
at org.apache.hadoop.ipc.Client.call(Client.java:1452)
at org.apache.hadoop.ipc.Client.call(Client.java:1405)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
at com.sun.proxy.$Proxy11.create(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:376)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy12.create(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:277)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1240)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1219)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1201)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1139)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:533)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:530)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:544)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:471)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1125)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1105)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:994)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:982)
at 
org.apache.hadoop.tools.fedbalance.procedure.BalanceJournalInfoHDFS.saveJob(BalanceJournalInfoHDFS.java:89)
at ...
{code}
Re: Please cherrypick to lower branches

2021-03-25 Thread tom lee
Hello,

Thank you for your advice. I will pay attention to this problem later.

Cheers,
Tom


Wei-Chiu Chuang wrote on Wednesday, March 24, 2021, at 2:30 PM:

> Hello,
>
> Now that we're gradually switching to github PRs for code review, there
> appears to be a tendency to leave the commits in the trunk.
>
> I am sweeping through the commits in trunk and found a lot of goodies that
> can be cherry-picked without conflicts to branch-3.3.
>
> Unless it's something experimental, it would be a good idea to have them in
> lower branches so they can be adopted sooner.
>
> Cheers,
> Weichiu
>


Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2021-03-25 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/143/

[Mar 23, 2021 8:40:02 PM] (Ayush Saxena) HDFS-15907. Reduce Memory Overhead of 
AclFeature by avoiding AtomicInteger. Contributed by Stephen O'Donnell.
[Mar 23, 2021 9:06:26 PM] (noreply) HADOOP-17531. DistCp: Reduce memory usage 
on copying huge directories. (#2732). Contributed by Ayush Saxena.
[Mar 24, 2021 5:47:45 AM] (noreply) HDFS-15911 : Provide blocks moved count in 
Balancer iteration result (#2794)
[Mar 24, 2021 7:15:06 AM] (Peter Bacsko) YARN-10674. fs2cs should generate 
auto-created queue deletion properties. Contributed by Qi Zhu.
[Mar 24, 2021 8:51:35 AM] (Takanobu Asanuma) HDFS-15902. Improve the log for 
HTTPFS server operation. Contributed by Bhavik Patel.
[Mar 24, 2021 8:56:09 AM] (noreply) HDFS-15759. EC: Verify EC reconstruction 
correctness on DataNode (#2585)
[Mar 24, 2021 1:32:54 PM] (noreply) HADOOP-13551. AWS metrics wire-up (#2778)
[Mar 24, 2021 4:47:55 PM] (noreply) HADOOP-17476. 
ITestAssumeRole.testAssumeRoleBadInnerAuth failure. (#2777)
[Mar 24, 2021 5:52:33 PM] (noreply) HDFS-15918. Replace deprecated 
RAND_pseudo_bytes (#2811)
[Mar 25, 2021 4:24:14 AM] (Xiaoqiao He) HDFS-15919. BlockPoolManager should log 
stack trace if unable to get Namenode addresses. Contributed by Stephen 
O'Donnell.




-1 overall


The following subsystems voted -1:
blanks compile golang mvnsite pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Redundant nullcheck of oldLock, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:[line 694] 
   
org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts
 doesn't override java.util.ArrayList.equals(Object) At 
RollingWindowManager.java:At RollingWindowManager.java:[line 1] 

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:[line 343] 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:[line 356] 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2021-03-25 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/457/

[Mar 24, 2021 5:47:45 AM] (noreply) HDFS-15911 : Provide blocks moved count in 
Balancer iteration result (#2794)
[Mar 24, 2021 7:15:06 AM] (Peter Bacsko) YARN-10674. fs2cs should generate 
auto-created queue deletion properties. Contributed by Qi Zhu.
[Mar 24, 2021 8:51:35 AM] (Takanobu Asanuma) HDFS-15902. Improve the log for 
HTTPFS server operation. Contributed by Bhavik Patel.
[Mar 24, 2021 8:56:09 AM] (noreply) HDFS-15759. EC: Verify EC reconstruction 
correctness on DataNode (#2585)
[Mar 24, 2021 1:32:54 PM] (noreply) HADOOP-13551. AWS metrics wire-up (#2778)
[Mar 24, 2021 4:47:55 PM] (noreply) HADOOP-17476. 
ITestAssumeRole.testAssumeRoleBadInnerAuth failure. (#2777)
[Mar 24, 2021 5:52:33 PM] (noreply) HDFS-15918. Replace deprecated 
RAND_pseudo_bytes (#2811)




-1 overall


The following subsystems voted -1:
blanks pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   
hadoop.security.authentication.server.TestJWTRedirectAuthenticationHandler 
   hadoop.hdfs.TestDFSStorageStateRecovery 
   hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots 
   hadoop.hdfs.client.impl.TestClientBlockVerification 
   hadoop.hdfs.TestReadStripedFileWithMissingBlocks 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl 
   hadoop.hdfs.client.impl.TestBlockReaderFactory 
   hadoop.hdfs.TestReadStripedFileWithDNFailure 
   hadoop.net.TestNetworkTopology 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.web.TestWebHdfsUrl 
   hadoop.hdfs.client.impl.TestBlockReaderRemote 
   hadoop.tools.fedbalance.procedure.TestBalanceProcedureScheduler 
   hadoop.tools.fedbalance.TestDistCpProcedure 
   hadoop.tools.dynamometer.TestDynamometerInfra 
   hadoop.tools.dynamometer.TestDynamometerInfra 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/457/artifact/out/results-compile-cc-root.txt
 [116K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/457/artifact/out/results-compile-javac-root.txt
 [368K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/457/artifact/out/blanks-eol.txt
 [13M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/457/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/457/artifact/out/results-checkstyle-root.txt
 [16M]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/457/artifact/out/results-pathlen.txt
 [16K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/457/artifact/out/results-pylint.txt
 [20K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/457/artifact/out/results-shellcheck.txt
 [28K]

   xml:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/457/artifact/out/xml.txt
 [24K]

   javadoc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/457/artifact/out/results-javadoc-javadoc-root.txt
 [1.1M]

   unit:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/457/artifact/out/patch-unit-hadoop-common-project_hadoop-auth.txt
 [32K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/457/artifact/out/patch-unit-hadoop-common-project_hadoop-auth-examples.txt
 [0]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/457/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 [100K]
  

[jira] [Created] (HDFS-15922) Use memcpy for copying non-null terminated string in jni_helper.c

2021-03-25 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15922:
-

 Summary: Use memcpy for copying non-null terminated string in 
jni_helper.c
 Key: HDFS-15922
 URL: https://issues.apache.org/jira/browse/HDFS-15922
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


We currently get a warning while compiling the HDFS native client:
{code}
[WARNING] inlined from 'wildcard_expandPath' at 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c:427:21,
[WARNING] /usr/include/x86_64-linux-gnu/bits/string_fortified.h:106:10: 
warning: '__builtin_strncpy' output truncated before terminating nul copying as 
many bytes from a string as its length [-Wstringop-truncation]
[WARNING] 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c:402:43:
 note: length computed here
{code}

The copied string here is deliberately not null-terminated, since we want to 
append a PATH_SEPARATOR ourselves. The warning reported for strncpy is valid 
in general, but does not apply in this scenario.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2021-03-25 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/248/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   
hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.yarn.client.TestFederationRMFailoverProxyProvider 
   hadoop.yarn.client.TestApplicationClientProtocolOnHA 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.tools.TestDistCpSystem 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/248/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/248/artifact/out/diff-compile-javac-root.txt
  [456K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/248/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/248/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/248/artifact/out/patch-mvnsite-root.txt
  [620K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/248/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/248/artifact/out/diff-patch-pylint.txt
  [48K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/248/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/248/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/248/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/248/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/248/artifact/out/diff-javadoc-javadoc-root.txt
  [20K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/248/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [204K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/248/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [272K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/248/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/248/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/248/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [116K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/248/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [60K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/248/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/248/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [20K]
   

[jira] [Created] (HDFS-15921) Improve the log for the Storage Policy Operations

2021-03-25 Thread Bhavik Patel (Jira)
Bhavik Patel created HDFS-15921:
---

 Summary: Improve the log for the Storage Policy Operations
 Key: HDFS-15921
 URL: https://issues.apache.org/jira/browse/HDFS-15921
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Bhavik Patel
Assignee: Bhavik Patel


Improve the log for the Storage Policy Operations






[jira] [Created] (HDFS-15920) Make the value of SafeModeMonitor#RECHECK_INTERVAL configurable

2021-03-25 Thread JiangHua Zhu (Jira)
JiangHua Zhu created HDFS-15920:
---

 Summary: Make the value of SafeModeMonitor#RECHECK_INTERVAL configurable
 Key: HDFS-15920
 URL: https://issues.apache.org/jira/browse/HDFS-15920
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: JiangHua Zhu


SafeModeMonitor#RECHECK_INTERVAL is currently fixed at 1000 ms; it should be 
configurable, because the monitor acquires a lock internally and therefore 
competes with other users of that lock.
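A minimal sketch of the proposed change, using java.util.Properties as a stand-in for Hadoop's Configuration; the key name dfs.namenode.safemode.recheck-interval is hypothetical, not an existing HDFS key:

```java
import java.util.Properties;

// Sketch: replace the hard-coded RECHECK_INTERVAL (1000 ms) with a value read
// from configuration, falling back to the old constant as the default.
// java.util.Properties stands in for Hadoop's Configuration here, and the
// key name is hypothetical.
public class RecheckIntervalSketch {
    static final String RECHECK_INTERVAL_KEY =
            "dfs.namenode.safemode.recheck-interval";
    static final long DEFAULT_RECHECK_INTERVAL_MS = 1000L;

    static long recheckIntervalMs(Properties conf) {
        String v = conf.getProperty(RECHECK_INTERVAL_KEY);
        return v == null ? DEFAULT_RECHECK_INTERVAL_MS : Long.parseLong(v);
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(recheckIntervalMs(conf));   // default applies
        conf.setProperty(RECHECK_INTERVAL_KEY, "3000");
        System.out.println(recheckIntervalMs(conf));   // configured value wins
    }
}
```

Keeping 1000 ms as the default preserves today's behavior for clusters that never set the key.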


