[jira] [Commented] (HDFS-14181) Suspect there is a bug in NetworkTopology.java chooseRandom function.

2018-12-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16731538#comment-16731538
 ] 

Hadoop QA commented on HDFS-14181:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 29s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 50s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 40 unchanged - 0 fixed = 41 total (was 40) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 16s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 24s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 39s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 14s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.ssl.TestSSLFactory |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14181 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12953411/0001-fix-NetworkTopology.java-chooseRandom-bug.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 5b3d6deb2032 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / eee29ed |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/25880/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| unit |

[jira] [Commented] (HDFS-14181) Suspect there is a bug in NetworkTopology.java chooseRandom function.

2018-12-31 Thread Sihai Ke (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16731528#comment-16731528
 ] 

Sihai Ke commented on HDFS-14181:
-

[~elgoiri] I have added another patch, 
_0001-fix-NetworkTopology.java-chooseRandom-bug.patch_; could you take a look?

> Suspect there is a bug in NetworkTopology.java chooseRandom function.
> -
>
> Key: HDFS-14181
> URL: https://issues.apache.org/jira/browse/HDFS-14181
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.9.2
>Reporter: Sihai Ke
>Priority: Major
> Attachments: 0001-add-UT-for-NetworkTopology.patch, 
> 0001-fix-NetworkTopology.java-chooseRandom-bug.patch, 
> image-2018-12-29-15-02-19-415.png
>
>
> While reading Hadoop's NetworkTopology.java, I suspect there is a bug in the 
> chooseRandom function (line 498, Hadoop version 2.9.2-RC0).
> {color:#f79232}Counting "~" + excludedScope does not give the number of 
> available nodes under the scope node; I also added a unit test for this and 
> got an exception.{color}
> The buggy code is in the else branch:
> {code:java}
>  if (excludedScope == null) {
> availableNodes = countNumOfAvailableNodes(scope, excludedNodes);
>   } else {
> availableNodes =
> countNumOfAvailableNodes("~" + excludedScope, excludedNodes);
>   }{code}
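> To make the mismatch concrete, here is a worked trace of the unit test 
> below, followed by a sketch of a scope-restricted count (a hypothetical 
> illustration of the idea, not the committed patch):
> {code:java}
> // Topology from the unit test: node1, node2 under /a1/b1/c1,
> // node3 under /a1/b1/c2, node4 under /a1/b2/c3.
> // For chooseRandom("/a1/b1", "/a1/b1/c1", null):
> //   numOfDatanodes = leaves(/a1/b1) - leaves(/a1/b1/c1) = 3 - 2 = 1
> //   availableNodes = countNumOfAvailableNodes("~/a1/b1/c1", null) = 2
> // because "~" excludes /a1/b1/c1 from the WHOLE cluster, so node4
> // (outside the /a1/b1 scope) is counted too. availableNodes (2) then
> // exceeds numOfDatanodes (1), and the inner chooseRandom's precondition
> // fails with "1 should >= 2". Restricting the count to the scope keeps
> // the two numbers consistent:
> availableNodes = countNumOfAvailableNodes(scope, excludedNodes)
>     - countNumOfAvailableNodes(excludedScope, excludedNodes);
> {code}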
> Source code:
> {code:java}
> protected Node chooseRandom(final String scope, String excludedScope,
> final Collection<Node> excludedNodes) {
>   if (excludedScope != null) {
> if (scope.startsWith(excludedScope)) {
>   return null;
> }
> if (!excludedScope.startsWith(scope)) {
>   excludedScope = null;
> }
>   }
>   Node node = getNode(scope);
>   if (!(node instanceof InnerNode)) {
> return excludedNodes != null && excludedNodes.contains(node) ?
> null : node;
>   }
>   InnerNode innerNode = (InnerNode)node;
>   int numOfDatanodes = innerNode.getNumOfLeaves();
>   if (excludedScope == null) {
> node = null;
>   } else {
> node = getNode(excludedScope);
> if (!(node instanceof InnerNode)) {
>   numOfDatanodes -= 1;
> } else {
>   numOfDatanodes -= ((InnerNode)node).getNumOfLeaves();
> }
>   }
>   if (numOfDatanodes <= 0) {
> LOG.debug("Failed to find datanode (scope=\"{}\" excludedScope=\"{}\")."
> + " numOfDatanodes={}",
> scope, excludedScope, numOfDatanodes);
> return null;
>   }
>   final int availableNodes;
>   if (excludedScope == null) {
> availableNodes = countNumOfAvailableNodes(scope, excludedNodes);
>   } else {
> availableNodes =
> countNumOfAvailableNodes("~" + excludedScope, excludedNodes);
>   }
>   LOG.debug("Choosing random from {} available nodes on node {},"
>   + " scope={}, excludedScope={}, excludeNodes={}. numOfDatanodes={}.",
>   availableNodes, innerNode, scope, excludedScope, excludedNodes,
>   numOfDatanodes);
>   Node ret = null;
>   if (availableNodes > 0) {
> ret = chooseRandom(innerNode, node, excludedNodes, numOfDatanodes,
> availableNodes);
>   }
>   LOG.debug("chooseRandom returning {}", ret);
>   return ret;
> }
> {code}
>  
>  
> I added a unit test in TestClusterTopology.java, but got an exception.
>  
> {code:java}
> @Test
> public void testChooseRandom1() {
>   // create the topology
>   NetworkTopology cluster = NetworkTopology.getInstance(new Configuration());
>   NodeElement node1 = getNewNode("node1", "/a1/b1/c1");
>   cluster.add(node1);
>   NodeElement node2 = getNewNode("node2", "/a1/b1/c1");
>   cluster.add(node2);
>   NodeElement node3 = getNewNode("node3", "/a1/b1/c2");
>   cluster.add(node3);
>   NodeElement node4 = getNewNode("node4", "/a1/b2/c3");
>   cluster.add(node4);
>   Node node = cluster.chooseRandom("/a1/b1", "/a1/b1/c1", null);
>   assertSame(node.getName(), "node3");
> }
> {code}
>  
> Exception:
> {code:java}
> java.lang.IllegalArgumentException: 1 should >= 2, and both should be 
> positive. 
> at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88) 
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:567) 
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:544) 
> at org.apache.hadoop.net.TestClusterTopology.testChooseRandom1(TestClusterTopology.java:198)
> {code}
>  
> {color:#f79232}!image-2018-12-29-15-02-19-415.png!{color}
>  
>  
> [~vagarychen] this change was introduced in HDFS-11577; could you help check 
> whether this is a bug?
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Updated] (HDFS-14181) Suspect there is a bug in NetworkTopology.java chooseRandom function.

2018-12-31 Thread Sihai Ke (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sihai Ke updated HDFS-14181:

Attachment: 0001-fix-NetworkTopology.java-chooseRandom-bug.patch

> Suspect there is a bug in NetworkTopology.java chooseRandom function.
> -
>
> Key: HDFS-14181
> URL: https://issues.apache.org/jira/browse/HDFS-14181
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.9.2
>Reporter: Sihai Ke
>Priority: Major
> Attachments: 0001-add-UT-for-NetworkTopology.patch, 
> 0001-fix-NetworkTopology.java-chooseRandom-bug.patch, 
> image-2018-12-29-15-02-19-415.png
>
>
> While reading Hadoop's NetworkTopology.java, I suspect there is a bug in the 
> chooseRandom function (line 498, Hadoop version 2.9.2-RC0).
> {color:#f79232}Counting "~" + excludedScope does not give the number of 
> available nodes under the scope node; I also added a unit test for this and 
> got an exception.{color}
> The buggy code is in the else branch:
> {code:java}
>  if (excludedScope == null) {
> availableNodes = countNumOfAvailableNodes(scope, excludedNodes);
>   } else {
> availableNodes =
> countNumOfAvailableNodes("~" + excludedScope, excludedNodes);
>   }{code}
> Source code:
> {code:java}
> protected Node chooseRandom(final String scope, String excludedScope,
> final Collection<Node> excludedNodes) {
>   if (excludedScope != null) {
> if (scope.startsWith(excludedScope)) {
>   return null;
> }
> if (!excludedScope.startsWith(scope)) {
>   excludedScope = null;
> }
>   }
>   Node node = getNode(scope);
>   if (!(node instanceof InnerNode)) {
> return excludedNodes != null && excludedNodes.contains(node) ?
> null : node;
>   }
>   InnerNode innerNode = (InnerNode)node;
>   int numOfDatanodes = innerNode.getNumOfLeaves();
>   if (excludedScope == null) {
> node = null;
>   } else {
> node = getNode(excludedScope);
> if (!(node instanceof InnerNode)) {
>   numOfDatanodes -= 1;
> } else {
>   numOfDatanodes -= ((InnerNode)node).getNumOfLeaves();
> }
>   }
>   if (numOfDatanodes <= 0) {
> LOG.debug("Failed to find datanode (scope=\"{}\" excludedScope=\"{}\")."
> + " numOfDatanodes={}",
> scope, excludedScope, numOfDatanodes);
> return null;
>   }
>   final int availableNodes;
>   if (excludedScope == null) {
> availableNodes = countNumOfAvailableNodes(scope, excludedNodes);
>   } else {
> availableNodes =
> countNumOfAvailableNodes("~" + excludedScope, excludedNodes);
>   }
>   LOG.debug("Choosing random from {} available nodes on node {},"
>   + " scope={}, excludedScope={}, excludeNodes={}. numOfDatanodes={}.",
>   availableNodes, innerNode, scope, excludedScope, excludedNodes,
>   numOfDatanodes);
>   Node ret = null;
>   if (availableNodes > 0) {
> ret = chooseRandom(innerNode, node, excludedNodes, numOfDatanodes,
> availableNodes);
>   }
>   LOG.debug("chooseRandom returning {}", ret);
>   return ret;
> }
> {code}
>  
>  
> I added a unit test in TestClusterTopology.java, but got an exception.
>  
> {code:java}
> @Test
> public void testChooseRandom1() {
>   // create the topology
>   NetworkTopology cluster = NetworkTopology.getInstance(new Configuration());
>   NodeElement node1 = getNewNode("node1", "/a1/b1/c1");
>   cluster.add(node1);
>   NodeElement node2 = getNewNode("node2", "/a1/b1/c1");
>   cluster.add(node2);
>   NodeElement node3 = getNewNode("node3", "/a1/b1/c2");
>   cluster.add(node3);
>   NodeElement node4 = getNewNode("node4", "/a1/b2/c3");
>   cluster.add(node4);
>   Node node = cluster.chooseRandom("/a1/b1", "/a1/b1/c1", null);
>   assertSame(node.getName(), "node3");
> }
> {code}
>  
> Exception:
> {code:java}
> java.lang.IllegalArgumentException: 1 should >= 2, and both should be 
> positive. 
> at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88) 
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:567) 
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:544) 
> at org.apache.hadoop.net.TestClusterTopology.testChooseRandom1(TestClusterTopology.java:198)
> {code}
>  
> {color:#f79232}!image-2018-12-29-15-02-19-415.png!{color}
>  
>  
> [~vagarychen] this change was introduced in HDFS-11577; could you help check 
> whether this is a bug?
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14182) Datanode usage histogram is clicked to show ip list

2018-12-31 Thread fengchuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16731497#comment-16731497
 ] 

fengchuang commented on HDFS-14182:
---

[~dineshchitlangia] Thank you for the code review; I've fixed it in 
HDFS-14182.002.patch.

> Datanode usage histogram is clicked to show ip list
> ---
>
> Key: HDFS-14182
> URL: https://issues.apache.org/jira/browse/HDFS-14182
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fengchuang
>Assignee: fengchuang
>Priority: Major
> Attachments: HDFS-14182.001.patch, HDFS-14182.002.patch, showip.jpeg
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14182) Datanode usage histogram is clicked to show ip list

2018-12-31 Thread fengchuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchuang updated HDFS-14182:
--
Attachment: HDFS-14182.002.patch

> Datanode usage histogram is clicked to show ip list
> ---
>
> Key: HDFS-14182
> URL: https://issues.apache.org/jira/browse/HDFS-14182
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fengchuang
>Assignee: fengchuang
>Priority: Major
> Attachments: HDFS-14182.001.patch, HDFS-14182.002.patch, showip.jpeg
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14181) Suspect there is a bug in NetworkTopology.java chooseRandom function.

2018-12-31 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16731433#comment-16731433
 ] 

Íñigo Goiri commented on HDFS-14181:


I see value in having a proper use of this method rather than just hiding it 
behind restricted use.
[~sihai], can you post a patch with your proposed fix so we can start from 
there?

> Suspect there is a bug in NetworkTopology.java chooseRandom function.
> -
>
> Key: HDFS-14181
> URL: https://issues.apache.org/jira/browse/HDFS-14181
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.9.2
>Reporter: Sihai Ke
>Priority: Major
> Attachments: 0001-add-UT-for-NetworkTopology.patch, 
> image-2018-12-29-15-02-19-415.png
>
>
> While reading Hadoop's NetworkTopology.java, I suspect there is a bug in the 
> chooseRandom function (line 498, Hadoop version 2.9.2-RC0).
> {color:#f79232}Counting "~" + excludedScope does not give the number of 
> available nodes under the scope node; I also added a unit test for this and 
> got an exception.{color}
> The buggy code is in the else branch:
> {code:java}
>  if (excludedScope == null) {
> availableNodes = countNumOfAvailableNodes(scope, excludedNodes);
>   } else {
> availableNodes =
> countNumOfAvailableNodes("~" + excludedScope, excludedNodes);
>   }{code}
> Source code:
> {code:java}
> protected Node chooseRandom(final String scope, String excludedScope,
> final Collection<Node> excludedNodes) {
>   if (excludedScope != null) {
> if (scope.startsWith(excludedScope)) {
>   return null;
> }
> if (!excludedScope.startsWith(scope)) {
>   excludedScope = null;
> }
>   }
>   Node node = getNode(scope);
>   if (!(node instanceof InnerNode)) {
> return excludedNodes != null && excludedNodes.contains(node) ?
> null : node;
>   }
>   InnerNode innerNode = (InnerNode)node;
>   int numOfDatanodes = innerNode.getNumOfLeaves();
>   if (excludedScope == null) {
> node = null;
>   } else {
> node = getNode(excludedScope);
> if (!(node instanceof InnerNode)) {
>   numOfDatanodes -= 1;
> } else {
>   numOfDatanodes -= ((InnerNode)node).getNumOfLeaves();
> }
>   }
>   if (numOfDatanodes <= 0) {
> LOG.debug("Failed to find datanode (scope=\"{}\" excludedScope=\"{}\")."
> + " numOfDatanodes={}",
> scope, excludedScope, numOfDatanodes);
> return null;
>   }
>   final int availableNodes;
>   if (excludedScope == null) {
> availableNodes = countNumOfAvailableNodes(scope, excludedNodes);
>   } else {
> availableNodes =
> countNumOfAvailableNodes("~" + excludedScope, excludedNodes);
>   }
>   LOG.debug("Choosing random from {} available nodes on node {},"
>   + " scope={}, excludedScope={}, excludeNodes={}. numOfDatanodes={}.",
>   availableNodes, innerNode, scope, excludedScope, excludedNodes,
>   numOfDatanodes);
>   Node ret = null;
>   if (availableNodes > 0) {
> ret = chooseRandom(innerNode, node, excludedNodes, numOfDatanodes,
> availableNodes);
>   }
>   LOG.debug("chooseRandom returning {}", ret);
>   return ret;
> }
> {code}
>  
>  
> I added a unit test in TestClusterTopology.java, but got an exception.
>  
> {code:java}
> @Test
> public void testChooseRandom1() {
>   // create the topology
>   NetworkTopology cluster = NetworkTopology.getInstance(new Configuration());
>   NodeElement node1 = getNewNode("node1", "/a1/b1/c1");
>   cluster.add(node1);
>   NodeElement node2 = getNewNode("node2", "/a1/b1/c1");
>   cluster.add(node2);
>   NodeElement node3 = getNewNode("node3", "/a1/b1/c2");
>   cluster.add(node3);
>   NodeElement node4 = getNewNode("node4", "/a1/b2/c3");
>   cluster.add(node4);
>   Node node = cluster.chooseRandom("/a1/b1", "/a1/b1/c1", null);
>   assertSame(node.getName(), "node3");
> }
> {code}
>  
> Exception:
> {code:java}
> java.lang.IllegalArgumentException: 1 should >= 2, and both should be 
> positive. 
> at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88) 
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:567) 
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:544) 
> at org.apache.hadoop.net.TestClusterTopology.testChooseRandom1(TestClusterTopology.java:198)
> {code}
>  
> {color:#f79232}!image-2018-12-29-15-02-19-415.png!{color}
>  
>  
> [~vagarychen] this change was introduced in HDFS-11577; could you help check 
> whether this is a bug?
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Commented] (HDFS-13856) RBF: RouterAdmin should support dfsrouteradmin -refresh command

2018-12-31 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16731422#comment-16731422
 ] 

Íñigo Goiri commented on HDFS-13856:


[^HDFS-13856-HDFS-13891.002.patch] looks good.
The description includes an example of changing the password.
I'm not very familiar with this interface; can anyone else take a look?

> RBF: RouterAdmin should support dfsrouteradmin -refresh command
> ---
>
> Key: HDFS-13856
> URL: https://issues.apache.org/jira/browse/HDFS-13856
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.1.0, 2.9.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13856-HDFS-13891.001.patch, 
> HDFS-13856-HDFS-13891.002.patch, HDFS-13856.001.patch, HDFS-13856.002.patch
>
>
> Like the NameNode, the Router should support refreshing policies 
> individually. For example, we have implemented simple password authentication 
> per RPC connection; the password dict can be refreshed via the generic 
> refresh policy. We also want to support this in RouterAdminServer (see the 
> sketch below).
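> For reference, the NameNode side already exposes a generic refresh through 
> dfsadmin; presumably the Router analogue proposed here would mirror that 
> shape (the exact Router flags below are an assumption until a patch lands):
> {noformat}
> # existing NameNode generic refresh, for comparison:
> hdfs dfsadmin -refresh <host:port> <resource_identifier> [arg1 arg2 ...]
> # proposed Router analogue (hypothetical shape):
> hdfs dfsrouteradmin -refresh <resource_identifier> [arg1 arg2 ...]
> {noformat}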



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14156) RBF: RollEdit command fail with router

2018-12-31 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16731416#comment-16731416
 ] 

Íñigo Goiri commented on HDFS-14156:


Thanks [~shubham.dewan] for [^HDFS-14156.002.patch].
* Can you fix the checkstyle warnings?
* Do we need to set up 6 DNs and EC for testing this?
* Can we just do this in TestRouterRpc?
* The assertTrue should be an assertEquals.

> RBF: RollEdit command fail with router
> --
>
> Key: HDFS-14156
> URL: https://issues.apache.org/jira/browse/HDFS-14156
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Shubham Dewan
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14156.001.patch, HDFS-14156.002.patch
>
>
> {noformat}
> bin> ./hdfs dfsadmin -rollEdits
> rollEdits: Cannot cast java.lang.Long to long
> bin>
> {noformat}
> Trace :-
> {noformat}
> org.apache.hadoop.ipc.RemoteException(java.lang.ClassCastException): Cannot 
> cast java.lang.Long to long
> at java.lang.Class.cast(Class.java:3369)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1085)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:982)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.rollEdits(RouterClientProtocol.java:900)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.rollEdits(RouterRpcServer.java:862)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rollEdits(ClientNamenodeProtocolServerSideTranslatorPB.java:899)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
> at org.apache.hadoop.ipc.Client.call(Client.java:1466)
> at org.apache.hadoop.ipc.Client.call(Client.java:1376)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy11.rollEdits(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.rollEdits(ClientNamenodeProtocolTranslatorPB.java:804)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy12.rollEdits(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.rollEdits(DFSClient.java:2350)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.rollEdits(DistributedFileSystem.java:1550)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.rollEdits(DFSAdmin.java:850)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2353)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2568)
> {noformat}
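> The cast failure can be reproduced in isolation: {{Class.cast()}} on a 
> primitive class token always throws, since no object is an instance of a 
> primitive type. A minimal sketch (assuming the Router code path hands the 
> primitive {{long.class}} token to the cast, as the message suggests):
> {code:java}
> Object value = Long.valueOf(42L);
> Long boxed = Long.class.cast(value);  // fine: value is a java.lang.Long
> long bad = long.class.cast(value);    // throws ClassCastException:
>                                       // "Cannot cast java.lang.Long to long"
> {code}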



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: 

[jira] [Updated] (HDFS-14156) RBF: RollEdit command fail with router

2018-12-31 Thread Íñigo Goiri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14156:
---
Summary: RBF: RollEdit command fail with router  (was: RBF : RollEdit 
command fail with router)

> RBF: RollEdit command fail with router
> --
>
> Key: HDFS-14156
> URL: https://issues.apache.org/jira/browse/HDFS-14156
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Shubham Dewan
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14156.001.patch, HDFS-14156.002.patch
>
>
> {noformat}
> bin> ./hdfs dfsadmin -rollEdits
> rollEdits: Cannot cast java.lang.Long to long
> bin>
> {noformat}
> Trace :-
> {noformat}
> org.apache.hadoop.ipc.RemoteException(java.lang.ClassCastException): Cannot 
> cast java.lang.Long to long
> at java.lang.Class.cast(Class.java:3369)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1085)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:982)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.rollEdits(RouterClientProtocol.java:900)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.rollEdits(RouterRpcServer.java:862)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rollEdits(ClientNamenodeProtocolServerSideTranslatorPB.java:899)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
> at org.apache.hadoop.ipc.Client.call(Client.java:1466)
> at org.apache.hadoop.ipc.Client.call(Client.java:1376)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy11.rollEdits(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.rollEdits(ClientNamenodeProtocolTranslatorPB.java:804)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy12.rollEdits(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.rollEdits(DFSClient.java:2350)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.rollEdits(DistributedFileSystem.java:1550)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.rollEdits(DFSAdmin.java:850)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2353)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2568)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14084) Need for more stats in DFSClient

2018-12-31 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16731409#comment-16731409
 ] 

Íñigo Goiri commented on HDFS-14084:


Thanks [~pranay_singh] for the patch.
Can you verify the failed unit tests?
I'm a little concerned about TestSSLFactory.

> Need for more stats in DFSClient
> 
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Pranay Singh
>Assignee: Pranay Singh
>Priority: Minor
> Attachments: HDFS-14084.001.patch, HDFS-14084.002.patch, 
> HDFS-14084.003.patch, HDFS-14084.004.patch, HDFS-14084.005.patch, 
> HDFS-14084.006.patch, HDFS-14084.007.patch, HDFS-14084.008.patch, 
> HDFS-14084.009.patch, HDFS-14084.010.patch, HDFS-14084.011.patch
>
>
> The usage of HDFS has changed: from being a MapReduce filesystem, it is 
> becoming more of a general-purpose filesystem. In most cases the issues are 
> with the NameNode, so we have metrics to know the workload or stress on the 
> NameNode.
> However, there is a need to collect more statistics for the different 
> operations/RPCs in DFSClient, to know which RPC operations are taking longer 
> or how frequent each operation is. These statistics can be exposed to the 
> users of the DFS client, who can periodically log them or do some sort of 
> flow control if responses are slow. This will also help isolate HDFS issues 
> in a mixed environment where, say, Spark, HBase and Impala run together on a 
> node: we can check the throughput of different operations across clients and 
> isolate problems caused by a noisy neighbor, network congestion, or a shared 
> JVM.
> We have dealt with several problems from the field for which there is no 
> conclusive evidence of the cause. If we had metrics or stats in DFSClient, we 
> would be better equipped to solve such complex problems.
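> As a reference point for what exposing client-side stats can look like, the 
> existing StorageStatistics API already surfaces per-operation counters on a 
> FileSystem instance; presumably this JIRA extends that idea with latencies 
> and frequencies (a sketch of the existing API, not of the patch itself):
> {code:java}
> FileSystem fs = FileSystem.get(new Configuration());
> // Iterate the per-operation counters the client already tracks.
> fs.getStorageStatistics().getLongStatistics().forEachRemaining(
>     s -> System.out.println(s.getName() + " = " + s.getValue()));
> {code}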
> List of jiras for reference:
> -
>  HADOOP-15538 HADOOP-15530 ( client side deadlock)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14161) RBF: Throw StandbyException instead of IOException so that client can retry when can not get connection

2018-12-31 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16731408#comment-16731408
 ] 

Íñigo Goiri commented on HDFS-14161:


Thanks [~ferhui] for the patch.
* Can you add the space after the colon in line 437 of {{RouterRpcClient}}?
* In {{testConnectionNullException}} can we add some assert for the state of 
the metrics in the routers? It would be good to confirm we went to both routers 
and that we failed.
* Should we add an exception cause to the StandbyException we generate? (See 
the sketch below.)
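A minimal sketch of that last point, assuming the patch raises 
StandbyException at the point where a connection cannot be obtained (the 
variable names are illustrative):
{code:java}
// Surface the connection failure as a StandbyException so the client's
// failover/retry policy retries another router, but keep the root cause.
IOException ioe = new IOException("Cannot get a connection to " + nnAddress);
StandbyException se = new StandbyException(ioe.getMessage());
se.initCause(ioe);
throw se;
{code}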

> RBF: Throw StandbyException instead of IOException so that client can retry 
> when can not get connection
> ---
>
> Key: HDFS-14161
> URL: https://issues.apache.org/jira/browse/HDFS-14161
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14161-HDFS-13891.001.patch, 
> HDFS-14161-HDFS-13891.002.patch, HDFS-14161-HDFS-13891.003.patch, 
> HDFS-14161-HDFS-13891.004.patch, HDFS-14161-HDFS-13891.005.patch, 
> HDFS-14161.001.patch
>
>
> The Hive client may hang when it gets an IOException; the stack trace follows:
> {code:java}
> Exception in thread "Thread-150" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot get a 
> connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:554)
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:74)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot 
> get a connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1503)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1441)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>   at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
>  

[jira] [Commented] (HDFS-14179) BlockReaderRemote#readNextPacket() should log the waiting time for packet read.

2018-12-31 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16731403#comment-16731403
 ] 

Íñigo Goiri commented on HDFS-14179:


Can we also use logger style? Even though there's a guard, it is easier to 
read with the {} format; something like the sketch below.
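A sketch combining the JIRA's request (log the datanode and the wait time at 
trace level) with the parameterized style; the variable and field names here 
are illustrative, not the final patch:
{code:java}
long start = Time.monotonicNow();
packetReceiver.receiveNextPacket(in);
long waitMs = Time.monotonicNow() - start;
LOG.trace("DFSClient readNextPacket waited {} ms for packet from datanode {}",
    waitMs, datanodeID);
{code}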

> BlockReaderRemote#readNextPacket() should log the waiting time for packet 
> read.
> ---
>
> Key: HDFS-14179
> URL: https://issues.apache.org/jira/browse/HDFS-14179
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Surendra Singh Lilhore
>Assignee: Shubham Dewan
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-14179.001.patch
>
>
> Sometimes a read is reported as very slow due to disk or some other reason. 
> {{BlockReaderRemote#readNextPacket()}} should print the datanode IP and the 
> waiting time in the trace log.
> {code:java}
> //Read packet headers.
> packetReceiver.receiveNextPacket(in);
> PacketHeader curHeader = packetReceiver.getHeader();
> curDataSlice = packetReceiver.getDataSlice();
> assert curDataSlice.capacity() == curHeader.getDataLen();
> LOG.trace("DFSClient readNextPacket got header {}", curHeader);{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14168) Fix TestWebHdfsTimeouts

2018-12-31 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16731401#comment-16731401
 ] 

Chao Sun commented on HDFS-14168:
-

Oh, cool! I'll mark this as a duplicate then.

> Fix TestWebHdfsTimeouts
> ---
>
> Key: HDFS-14168
> URL: https://issues.apache.org/jira/browse/HDFS-14168
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: webhdfs
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
>
> The test TestWebHdfsTimeouts keeps failing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-14168) Fix TestWebHdfsTimeouts

2018-12-31 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun resolved HDFS-14168.
-
Resolution: Duplicate

> Fix TestWebHdfsTimeouts
> ---
>
> Key: HDFS-14168
> URL: https://issues.apache.org/jira/browse/HDFS-14168
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: webhdfs
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
>
> The test TestWebHdfsTimeouts keeps failing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14168) Fix TestWebHdfsTimeouts

2018-12-31 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16731400#comment-16731400
 ] 

Íñigo Goiri commented on HDFS-14168:


[~ayushtkn] has been trying to fix this in HDFS-14135.

> Fix TestWebHdfsTimeouts
> ---
>
> Key: HDFS-14168
> URL: https://issues.apache.org/jira/browse/HDFS-14168
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: webhdfs
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
>
> The test TestWebHdfsTimeouts keeps failing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14032) [libhdfs++] Phase 2 improvements

2018-12-31 Thread Deepak Majeti (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16731331#comment-16731331
 ] 

Deepak Majeti commented on HDFS-14032:
--

We also want to implement support for wire-encryption. If anyone is interested 
in collaborating on this, please comment here.

> [libhdfs++] Phase 2 improvements
> 
>
> Key: HDFS-14032
> URL: https://issues.apache.org/jira/browse/HDFS-14032
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Major
>
> HDFS-8707 (libhdfs++) was merged to trunk; this is an umbrella JIRA for 
> things that still need to get done.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14032) [libhdfs++] Phase 2 improvements

2018-12-31 Thread Deepak Majeti (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16731330#comment-16731330
 ] 

Deepak Majeti commented on HDFS-14032:
--

[~donglongchao] We don't have a roadmap for supporting writes yet. Do you have 
any resources to collaborate and work on this feature?

> [libhdfs++] Phase 2 improvements
> 
>
> Key: HDFS-14032
> URL: https://issues.apache.org/jira/browse/HDFS-14032
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Major
>
> HDFS-8707 (libhdfs++) was merged to trunk; this is an umbrella JIRA for 
> things that still need to get done.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14129) RBF: Create new policy provider for router

2018-12-31 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16731263#comment-16731263
 ] 

Surendra Singh Lilhore commented on HDFS-14129:
---

[~RANith] please fix the findbugs, checkstyle, and whitespace warnings.

> RBF: Create new policy provider for router
> --
>
> Key: HDFS-14129
> URL: https://issues.apache.org/jira/browse/HDFS-14129
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-13532
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14129-HDFS-13891.001.patch, 
> HDFS-14129-HDFS-13891.002.patch, HDFS-14129-HDFS-13891.003.patch, 
> HDFS-14129-HDFS-13891.004.patch, HDFS-14129-HDFS-13891.005.patch
>
>
> The Router is using *{{HDFSPolicyProvider}}*. We can't add a new protocol to 
> this class for the Router; it is better to create a dedicated policy provider 
> for the Router.
> {code:java}
> // Set service-level authorization security policy
> if (conf.getBoolean(HADOOP_SECURITY_AUTHORIZATION, false)) {
> this.adminServer.refreshServiceAcl(conf, new HDFSPolicyProvider());
> }
> {code}
> I hit this issue when verifying HDFS-14079 with a secure cluster.
> {noformat}
> ./bin/hdfs dfsrouteradmin -ls /
> ls: Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol 
> is not known.
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  Protocol interface org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol is 
> not known.
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
> at org.apache.hadoop.ipc.Client.call(Client.java:1466)
> {noformat}
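> A minimal sketch of the shape such a dedicated provider could take, modeled 
> on HDFSPolicyProvider (the class name and the Router ACL key below are 
> assumptions, not necessarily what the patch uses):
> {code:java}
> public class RouterPolicyProvider extends PolicyProvider {
>   private static final Service[] SERVICES = new Service[] {
>       // Client-facing protocol, reusing the existing ACL key.
>       new Service(CommonConfigurationKeys.SECURITY_CLIENT_PROTOCOL_ACL,
>           ClientProtocol.class),
>       // Router admin protocol, under a hypothetical new ACL key.
>       new Service("security.router.admin.protocol.acl",
>           RouterAdminProtocol.class)
>   };
> 
>   @Override
>   public Service[] getServices() {
>     return SERVICES;
>   }
> }
> {code}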



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org