[jira] [Resolved] (HDFS-16054) Replace Guava Lists usage by Hadoop's own Lists in hadoop-hdfs-project

2021-06-08 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma resolved HDFS-16054.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> Replace Guava Lists usage by Hadoop's own Lists in hadoop-hdfs-project
> --
>
> Key: HDFS-16054
> URL: https://issues.apache.org/jira/browse/HDFS-16054
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
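The swap in question is mechanical at call sites. As a minimal sketch of the idea (this stand-in class only illustrates the shape; the real utility lives at org.apache.hadoop.util.Lists, and its exact API should be checked against the Hadoop source):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Stand-in sketch of the Lists factory shape: call sites previously using
// Guava's com.google.common.collect.Lists switch their import to Hadoop's
// own utility, with the usage itself unchanged.
public class ListsSketch {

    // Mirrors the commonly used Lists.newArrayList(...) factory method.
    @SafeVarargs
    public static <E> List<E> newArrayList(E... elements) {
        List<E> list = new ArrayList<>(elements.length);
        Collections.addAll(list, elements);
        return list;
    }

    public static void main(String[] args) {
        // Only the import changes at a call site; behavior stays the same.
        List<String> dirs = newArrayList("/data1", "/data2");
        dirs.add("/data3");
        System.out.println(dirs); // [/data1, /data2, /data3]
    }
}
```

The point of the change is dropping a direct Guava dependency from hadoop-hdfs call sites, not altering behavior.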




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2021-06-08 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/532/

[Jun 7, 2021 2:51:29 AM] (noreply) MAPREDUCE-7350. Replace Guava Lists usage by 
Hadoop's own Lists in hadoop-mapreduce-project (#3074)
[Jun 7, 2021 4:24:09 AM] (noreply) HADOOP-17743. Replace Guava Lists usage by 
Hadoop's own Lists in hadoop-common, hadoop-tools and cloud-storage projects 
(#3072)
[Jun 7, 2021 5:37:30 AM] (noreply) HDFS-16050. Some dynamometer tests fail. 
(#3079)




-1 overall


The following subsystems voted -1:
blanks pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.hdfs.TestRollingUpgrade 
   
hadoop.yarn.server.timelineservice.storage.common.TestHBaseTimelineStorageUtils 
   hadoop.yarn.server.router.clientrm.TestFederationClientInterceptor 
   hadoop.yarn.csi.client.TestCsiClient 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/532/artifact/out/results-compile-cc-root.txt
 [96K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/532/artifact/out/results-compile-javac-root.txt
 [380K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/532/artifact/out/blanks-eol.txt
 [13M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/532/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/532/artifact/out/results-checkstyle-root.txt
 [16M]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/532/artifact/out/results-pathlen.txt
 [16K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/532/artifact/out/results-pylint.txt
 [20K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/532/artifact/out/results-shellcheck.txt
 [28K]

   xml:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/532/artifact/out/xml.txt
 [24K]

   javadoc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/532/artifact/out/results-javadoc-javadoc-root.txt
 [408K]

   unit:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/532/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 [544K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/532/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client.txt
 [24K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/532/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt
 [24K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/532/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-csi.txt
 [20K]

Powered by Apache Yetus 0.14.0-SNAPSHOT   https://yetus.apache.org


Re: [VOTE] Release Apache Hadoop 3.3.1 RC3

2021-06-08 Thread Steve Loughran
+1, binding.

Awesome piece of work!

I've done three forms of qualification, all related to s3 and azure storage

   1. tarball validate, CLI use
   2. build/test of downstream modules off maven artifacts; mine and some
   other ASF ones. I (and it is very much me) have broken some downstream
   modules' tests, as I will discuss below. PRs submitted to the relevant
   projects
   3. local rerun of the hadoop-aws and hadoop-azure test suites


*Regarding issues which surfaced*

Wei-Chiu: can you register your public GPG key with the public keyservers?
The gpg client apps let you do this. Then we can coordinate signing each
other's keys.

Filed PRs for the test regressions:
https://github.com/apache/hbase-filesystem/pull/23
https://github.com/GoogleCloudDataproc/hadoop-connectors/pull/569

*Artifact validation*

SHA checksum good:


shasum -a 512 hadoop-3.3.1-RC3.tar.gz
b80e0a8785b0f3d75d9db54340123872e39bad72cc60de5d263ae22024720e6e824e022090f01e248bf105e03b0f06163729adbe15b5b0978bae0447571e22eb
 hadoop-3.3.1-RC3.tar.gz


GPG: trickier, because Wei-Chiu wasn't trusted

> gpg --verify hadoop-3.3.1-RC3.tar.gz.asc

gpg: assuming signed data in 'hadoop-3.3.1-RC3.tar.gz'
gpg: Signature made Tue Jun  1 11:00:41 2021 BST
gpg:                using RSA key CD32D773FF41C3F9E74BDB7FB362E1C021854B9D
gpg: requesting key 0xB362E1C021854B9D from hkps server
hkps.pool.sks-keyservers.net
gpg: Can't check signature: No public key


*Wei-Chiu: can you add your public keys to the GPG key servers*

To validate the keys I went to the directory where I have our site under
svn (https://dist.apache.org/repos/dist/release/hadoop/common) , and, after
reinstalling svn (where did it go? when did it go?) did an svn update to
get the keys

Did a gpg import of the KEYS file, which added:

gpg: key 0x386D80EF81E7469A: public key "Brahma Reddy Battula (CODE SIGNING
KEY) " imported
gpg: key 0xFC8D04357BB49FF0: public key "Sammi Chen (CODE SIGNING KEY) <
sammic...@apache.org>" imported
gpg: key 0x36243EECE206BB0D: public key "Masatake Iwasaki (CODE SIGNING
KEY) " imported
*gpg: key 0xB362E1C021854B9D: public key "Wei-Chiu Chuang
>" imported*

This time an import did work, but Wei-Chiu isn't trusted by anyone yet

gpg --verify hadoop-3.3.1-RC3.tar.gz.asc
gpg: assuming signed data in 'hadoop-3.3.1-RC3.tar.gz'
gpg: Signature made Tue Jun  1 11:00:41 2021 BST
gpg:                using RSA key CD32D773FF41C3F9E74BDB7FB362E1C021854B9D
gpg: Good signature from "Wei-Chiu Chuang " [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:  There is no indication that the signature belongs to the
owner.
Primary key fingerprint: CD32 D773 FF41 C3F9 E74B  DB7F B362 E1C0 2185 4B9D

(Wei-Chiu, let's coordinate signing each other's public keys via a slack
channel; you need to be in the apache web of trust)


> time gunzip hadoop-3.3.1-RC3.tar.gz

(5 seconds)

cd into the hadoop dir;
cp my confs in: cp ~/(somewhere)/hadoop-conf/*  etc/hadoop/
cp the hadoop-azure dependencies from share/hadoop/tools/lib/ to
share/hadoop/common/lib (products built targeting Azure put things there)

run: all the s3a "qualifying an AWS SDK update" commands
https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/testing.html#Qualifying_an_AWS_SDK_Update

run: basic abfs:// FS operations; again no problems.
FWIW I think we should consider having the hadoop-azure module and its
dependencies, alongside the aws ones, in hadoop-common/lib. I can get the aws
ones there through env vars and the s3guard shell sets things up, but azure is
fiddly.

*Build and test cloudstore JAR; invoke from CLI*

This is my cloud-storage extension library
https://github.com/steveloughran/cloudstore

I've always intended to put it into Hadoop, but as it stands it is where a
lot of diagnostics live and a quick way to put together fixes (e.g. "dux",
a faster du).
https://github.com/steveloughran/cloudstore.git

modify the hadoop-3.3 profile to use 3.3.1 artifacts, then build with
snapshots enabled. Because I'd not (yet) built any 3.3.1 artifacts locally,
this fetched them from maven staging

mvn package -Phadoop-3.3 -Pextra -Psnapshots-and-staging


Set up env var $CLOUDSTORE to point to the JAR and $BUCKET to an s3a bucket,
then run various commands (storediag, cloudup, ...). As an example, here's the
"dux" command, which is "hadoop fs -du" with a parallel scan underneath the
dir for better scaling:


bin/hadoop jar $CLOUDSTORE dux  -threads 64 -limit 1000 -verbose
s3a://stevel-london/

output is in
https://gist.github.com/steveloughran/664d30cef20f605f3164ad01f92a458a

*Build and (unit test) google GCS: *


Two test failures: one was classpath related, the other just a new rename
contract test needing a new setting in gs.xml to declare what rename of a
file over a file does.

Everything is covered in:
https://github.com/GoogleCloudDataproc/hadoop-connectors/pull/569

Classpath: assertJ not coming through hadoop-common-test JAR dependencies.

[ERROR]
com.google.cloud.hadoop.fs.gcs.contract.TestInMemoryGoogleContractR

Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2021-06-08 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/323/

[Jun 6, 2021 12:14:18 AM] (Akira Ajisaka) Fix container-executor




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil 
   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.TestMultipleNNPortQOP 
   hadoop.hdfs.server.datanode.TestBlockRecovery 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   
hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.tools.TestDistCpSystem 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/323/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/323/artifact/out/diff-compile-javac-root.txt
  [496K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/323/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/323/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/323/artifact/out/patch-mvnsite-root.txt
  [824K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/323/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/323/artifact/out/diff-patch-pylint.txt
  [48K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/323/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/323/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/323/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/323/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/323/artifact/out/diff-javadoc-javadoc-root.txt
  [20K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/323/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [232K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/323/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt
  [48K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/323/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [448K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/323/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/323/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [40K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/323/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [112K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/323/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [96K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/323/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-b

[jira] [Created] (HDFS-16058) If the balancer cannot update the datanode's storage type in time, the balancer will throw NullPointerException

2021-06-08 Thread lei w (Jira)
lei w created HDFS-16058:


 Summary: If the balancer cannot update the datanode's storage type 
in time, the balancer will throw NullPointerException
 Key: HDFS-16058
 URL: https://issues.apache.org/jira/browse/HDFS-16058
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover
Reporter: lei w


The main logic of the balancer is to initialize the cluster information
(number of datanodes, each datanode's storage types, network topology, etc.),
calculate the source and target DataNodes that need to move blocks, obtain the
movable blocks through getBlocks(), and finally move the blocks. If a DataNode
adds another type of storage and writes an EC block after the cluster
information has been initialized, and the newly written EC block is then
returned to the balancer by getBlocks(), the balancer throws a
NullPointerException. The main reason is that the balancer does not record the
newly added storage type of the DataNode, so the current DataNode's
information is missing when the object of type DBlockStriped is generated.
Finally, null is returned when the block of the current DataNode is obtained
through that DBlockStriped object.
{code:java}
2021-04-23 19:38:21,233 WARN org.apache.hadoop.hdfs.server.balancer.Dispatcher: 
Dispatcher thread failed
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.chooseProxySource(Dispatcher.java:325)
at 
org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.markMovedIfGoodBlock(Dispatcher.java:291)
at 
org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.chooseBlockAndProxy(Dispatcher.java:271)
at 
org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.access$2500(Dispatcher.java:235)
at 
org.apache.hadoop.hdfs.server.balancer.Dispatcher$Source.chooseNextMove(Dispatcher.java:886)
at 
org.apache.hadoop.hdfs.server.balancer.Dispatcher$Source.dispatchBlocks(Dispatcher.java:943)
at 
org.apache.hadoop.hdfs.server.balancer.Dispatcher$Source.access$3200(Dispatcher.java:751)
at 
org.apache.hadoop.hdfs.server.balancer.Dispatcher$2.run(Dispatcher.java:1221)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{code}
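A hedged illustration of the failure mode described above (names and types here are hypothetical, not the Balancer's real code): the storage snapshot taken at initialization has no entry for a storage type added later, so the lookup yields null and the dereference throws the NullPointerException seen in the stack trace:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: the balancer snapshots each datanode's storage types at
// initialization; a storage type added afterwards is absent from the
// snapshot, so the later lookup returns null and dereferencing it raises
// the NullPointerException.
public class BalancerNpeSketch {
    enum StorageType { DISK, SSD }

    // Returns the storage recorded for this type, or null if the type was
    // added to the datanode after the snapshot was taken.
    static String recordedStorage(Map<StorageType, String> snapshot, StorageType t) {
        return snapshot.get(t);
    }

    public static void main(String[] args) {
        Map<StorageType, String> snapshot = new HashMap<>();
        snapshot.put(StorageType.DISK, "DS-dn1-disk"); // known at init time

        // getBlocks() later reports an EC block on SSD, added after init:
        String storage = recordedStorage(snapshot, StorageType.SSD);
        try {
            storage.length(); // dereference of null -> NullPointerException
        } catch (NullPointerException e) {
            System.out.println("NPE: storage type missing from snapshot");
        }
    }
}
```

The fix direction implied by the report is to refresh (or tolerate gaps in) the recorded storage types rather than assume the init-time snapshot is complete.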






Heads up! Merging HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet

2021-06-08 Thread Wei-Chiu Chuang
Hello,

Just want to make sure everyone interested in HDFS dev is aware
that HDFS-13671 (Namenode deletes large dir slowly caused by
FoldedTreeSet#removeAndGet) will soon be merged.

The folded tree set data structure introduced in Hadoop 3 has a big
performance regression when deleting files. Since it touches the core of
HDFS, I thought it would be a good idea to send out the notice to a
broader audience.

PR: https://github.com/apache/hadoop/pull/3065


[jira] [Created] (HDFS-16057) Make sure the order for location in ENTERING_MAINTENANCE state

2021-06-08 Thread tomscut (Jira)
tomscut created HDFS-16057:
--

 Summary: Make sure the order for location in ENTERING_MAINTENANCE 
state
 Key: HDFS-16057
 URL: https://issues.apache.org/jira/browse/HDFS-16057
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: tomscut
Assignee: tomscut


We use a comparator to sort locations in getBlockLocations(), and the
expected result is: live -> stale -> entering_maintenance -> decommissioned.

But networktopology.sortByDistance() will disrupt the order. We should also
filter out nodes in state AdminStates.ENTERING_MAINTENANCE before calling
networktopology.sortByDistance().

 

org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager#sortLocatedBlock()
{code:java}
DatanodeInfoWithStorage[] di = lb.getLocations();
// Move decommissioned/stale datanodes to the bottom
Arrays.sort(di, comparator);

// Sort nodes by network distance only for located blocks
int lastActiveIndex = di.length - 1;
while (lastActiveIndex > 0 && isInactive(di[lastActiveIndex])) {
  --lastActiveIndex;
}
int activeLen = lastActiveIndex + 1;
if(nonDatanodeReader) {
  networktopology.sortByDistanceUsingNetworkLocation(client,
  lb.getLocations(), activeLen, createSecondaryNodeSorter());
} else {
  networktopology.sortByDistance(client, lb.getLocations(), activeLen,
  createSecondaryNodeSorter());
}
{code}
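A small sketch of the ordering the reporter wants (simplified types, not the real DatanodeManager code): state-sort first, then distance-sort only the live/stale prefix, so ENTERING_MAINTENANCE nodes are never pulled back into the distance-sorted region:

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch only: two-phase sort keeping inactive nodes at the tail.
public class SortLocationsSketch {
    enum State { LIVE, STALE, ENTERING_MAINTENANCE, DECOMMISSIONED }

    static final class Node {
        final String name; final State state; final int distance;
        Node(String name, State state, int distance) {
            this.name = name; this.state = state; this.distance = distance;
        }
        @Override public String toString() { return name; }
    }

    static Node[] sortLocations(Node[] locations) {
        Node[] di = locations.clone();
        // Step 1: stable sort by state priority:
        // live -> stale -> entering_maintenance -> decommissioned.
        Arrays.sort(di, Comparator.comparingInt((Node n) -> n.state.ordinal()));
        // Step 2: distance-sort ONLY the live/stale prefix; nodes in
        // ENTERING_MAINTENANCE or DECOMMISSIONED stay at the tail untouched.
        int activeLen = 0;
        while (activeLen < di.length
                && di[activeLen].state.ordinal() <= State.STALE.ordinal()) {
            activeLen++;
        }
        Arrays.sort(di, 0, activeLen, Comparator.comparingInt((Node n) -> n.distance));
        return di;
    }

    public static void main(String[] args) {
        Node[] sorted = sortLocations(new Node[] {
            new Node("dn3", State.ENTERING_MAINTENANCE, 1),
            new Node("dn1", State.LIVE, 4),
            new Node("dn2", State.STALE, 2),
        });
        System.out.println(Arrays.toString(sorted)); // dn3 must stay last
    }
}
```

Without step 2's exclusion, the nearest ENTERING_MAINTENANCE node (dn3 above, distance 1) would be sorted ahead of live nodes, which is the disruption the issue describes.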
 






[jira] [Resolved] (HDFS-16048) RBF: Print network topology on the router web

2021-06-08 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma resolved HDFS-16048.
-
Fix Version/s: 3.3.2
   3.4.0
   Resolution: Fixed

> RBF: Print network topology on the router web
> -
>
> Key: HDFS-16048
> URL: https://issues.apache.org/jira/browse/HDFS-16048
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
> Attachments: topology-json.jpg, topology-text.jpg
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> In order to query the network topology information conveniently, we can print 
> it on the router web. It's related to HDFS-15970.






[jira] [Created] (HDFS-16056) Can't start resourceManager

2021-06-08 Thread JYXL (Jira)
JYXL created HDFS-16056:
---

 Summary: Can't start resourceManager
 Key: HDFS-16056
 URL: https://issues.apache.org/jira/browse/HDFS-16056
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.3.0
 Environment: windows 10
Reporter: JYXL


When I use start-all.cmd, it can start the namenode, datanode, and
nodemanager successfully, but cannot start the resourcemanager.


