[jira] [Commented] (HADOOP-16662) Remove unnecessary InnerNode check in NetworkTopology#add()

2019-10-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955078#comment-16955078
 ] 

Hudson commented on HADOOP-16662:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17552 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17552/])
HADOOP-16662. Remove unnecessary InnerNode check in (ayushsaxena: rev 
2ae4b33d48db40bb0c222ac88df49e4b7c8e1493)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java


> Remove unnecessary InnerNode check in NetworkTopology#add()
> ---
>
> Key: HADOOP-16662
> URL: https://issues.apache.org/jira/browse/HADOOP-16662
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-16662.001.patch
>
>
> The method of NetworkTopology#add() as follow:
> {code:java}
> /** Add a leaf node
>  * Update node counter & rack counter if necessary
>  * @param node node to be added; can be null
>  * @exception IllegalArgumentException if add a node to a leave 
>or node to be added is not a leaf
>  */
> public void add(Node node) {
>   if (node==null) return;
>   int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
>   netlock.writeLock().lock();
>   try {
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
> "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
>   LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
>   NodeBase.getPath(node), newDepth, this);
>   throw new InvalidTopologyException("Failed to add " + 
> NodeBase.getPath(node) +
>   ": You cannot have a rack and a non-rack node at the same " +
>   "level of the network topology.");
> }
> Node rack = getNodeForNetworkLocation(node);
> if (rack != null && !(rack instanceof InnerNode)) {
>   throw new IllegalArgumentException("Unexpected data node " 
>  + node.toString() 
>  + " at an illegal network location");
> }
> if (clusterMap.add(node)) {
>   LOG.info("Adding a new node: "+NodeBase.getPath(node));
>   if (rack == null) {
> incrementRacks();
>   }
>   if (!(node instanceof InnerNode)) {
> if (depthOfAllLeaves == -1) {
>   depthOfAllLeaves = node.getLevel();
> }
>   }
> }
> LOG.debug("NetworkTopology became:\n{}", this);
>   } finally {
> netlock.writeLock().unlock();
>   }
> }
> {code}
> {code:java}
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
>     "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> {code}
> The later check if (!(node instanceof InnerNode)) is redundant, since the
> method already throws an IllegalArgumentException above when the node is an
> InnerNode:
> {code:java}
> if (!(node instanceof InnerNode)) {
>   if (depthOfAllLeaves == -1) {
>     depthOfAllLeaves = node.getLevel();
>   }
> }
> {code}
> So I think the if (!(node instanceof InnerNode)) check should be removed.
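
For context, a minimal sketch of what the inner block could look like once the
redundant guard is dropped (an illustration based on the snippet above, not
necessarily the exact content of HADOOP-16662.001.patch):

{code:java}
if (clusterMap.add(node)) {
  LOG.info("Adding a new node: " + NodeBase.getPath(node));
  if (rack == null) {
    incrementRacks();
  }
  // add() already threw IllegalArgumentException above for InnerNode inputs,
  // so node is guaranteed to be a leaf here and no instanceof check is needed.
  if (depthOfAllLeaves == -1) {
    depthOfAllLeaves = node.getLevel();
  }
}
{code}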



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16662) Remove unnecessary InnerNode check in NetworkTopology#add()

2019-10-18 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-16662:
--
Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Remove unnecessary InnerNode check in NetworkTopology#add()
> ---
>
> Key: HADOOP-16662
> URL: https://issues.apache.org/jira/browse/HADOOP-16662
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-16662.001.patch
>
>
> The method of NetworkTopology#add() as follow:
> {code:java}
> /** Add a leaf node
>  * Update node counter & rack counter if necessary
>  * @param node node to be added; can be null
>  * @exception IllegalArgumentException if add a node to a leave 
>or node to be added is not a leaf
>  */
> public void add(Node node) {
>   if (node==null) return;
>   int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
>   netlock.writeLock().lock();
>   try {
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
> "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
>   LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
>   NodeBase.getPath(node), newDepth, this);
>   throw new InvalidTopologyException("Failed to add " + 
> NodeBase.getPath(node) +
>   ": You cannot have a rack and a non-rack node at the same " +
>   "level of the network topology.");
> }
> Node rack = getNodeForNetworkLocation(node);
> if (rack != null && !(rack instanceof InnerNode)) {
>   throw new IllegalArgumentException("Unexpected data node " 
>  + node.toString() 
>  + " at an illegal network location");
> }
> if (clusterMap.add(node)) {
>   LOG.info("Adding a new node: "+NodeBase.getPath(node));
>   if (rack == null) {
> incrementRacks();
>   }
>   if (!(node instanceof InnerNode)) {
> if (depthOfAllLeaves == -1) {
>   depthOfAllLeaves = node.getLevel();
> }
>   }
> }
> LOG.debug("NetworkTopology became:\n{}", this);
>   } finally {
> netlock.writeLock().unlock();
>   }
> }
> {code}
> {code:java}
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
>     "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> {code}
> The later check if (!(node instanceof InnerNode)) is redundant, since the
> method already throws an IllegalArgumentException above when the node is an
> InnerNode:
> {code:java}
> if (!(node instanceof InnerNode)) {
>   if (depthOfAllLeaves == -1) {
>     depthOfAllLeaves = node.getLevel();
>   }
> }
> {code}
> So I think the if (!(node instanceof InnerNode)) check should be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16662) Remove unnecessary InnerNode check in NetworkTopology#add()

2019-10-18 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955075#comment-16955075
 ] 

Ayush Saxena commented on HADOOP-16662:
---

Committed to trunk.
Thanx [~leosun08] for the contribution!!!

> Remove unnecessary InnerNode check in NetworkTopology#add()
> ---
>
> Key: HADOOP-16662
> URL: https://issues.apache.org/jira/browse/HADOOP-16662
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16662.001.patch
>
>
> The method of NetworkTopology#add() as follow:
> {code:java}
> /** Add a leaf node
>  * Update node counter & rack counter if necessary
>  * @param node node to be added; can be null
>  * @exception IllegalArgumentException if add a node to a leave 
>or node to be added is not a leaf
>  */
> public void add(Node node) {
>   if (node==null) return;
>   int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
>   netlock.writeLock().lock();
>   try {
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
> "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
>   LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
>   NodeBase.getPath(node), newDepth, this);
>   throw new InvalidTopologyException("Failed to add " + 
> NodeBase.getPath(node) +
>   ": You cannot have a rack and a non-rack node at the same " +
>   "level of the network topology.");
> }
> Node rack = getNodeForNetworkLocation(node);
> if (rack != null && !(rack instanceof InnerNode)) {
>   throw new IllegalArgumentException("Unexpected data node " 
>  + node.toString() 
>  + " at an illegal network location");
> }
> if (clusterMap.add(node)) {
>   LOG.info("Adding a new node: "+NodeBase.getPath(node));
>   if (rack == null) {
> incrementRacks();
>   }
>   if (!(node instanceof InnerNode)) {
> if (depthOfAllLeaves == -1) {
>   depthOfAllLeaves = node.getLevel();
> }
>   }
> }
> LOG.debug("NetworkTopology became:\n{}", this);
>   } finally {
> netlock.writeLock().unlock();
>   }
> }
> {code}
> {code:java}
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
>     "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> {code}
> The later check if (!(node instanceof InnerNode)) is redundant, since the
> method already throws an IllegalArgumentException above when the node is an
> InnerNode:
> {code:java}
> if (!(node instanceof InnerNode)) {
>   if (depthOfAllLeaves == -1) {
>     depthOfAllLeaves = node.getLevel();
>   }
> }
> {code}
> So I think the if (!(node instanceof InnerNode)) check should be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16662) Remove unnecessary InnerNode check in NetworkTopology#add()

2019-10-18 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-16662:
--
Summary: Remove unnecessary InnerNode check in NetworkTopology#add()  (was: 
Remove unnecessary in NetworkTopology#add())

> Remove unnecessary InnerNode check in NetworkTopology#add()
> ---
>
> Key: HADOOP-16662
> URL: https://issues.apache.org/jira/browse/HADOOP-16662
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16662.001.patch
>
>
> The method of NetworkTopology#add() as follow:
> {code:java}
> /** Add a leaf node
>  * Update node counter & rack counter if necessary
>  * @param node node to be added; can be null
>  * @exception IllegalArgumentException if add a node to a leave 
>or node to be added is not a leaf
>  */
> public void add(Node node) {
>   if (node==null) return;
>   int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
>   netlock.writeLock().lock();
>   try {
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
> "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
>   LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
>   NodeBase.getPath(node), newDepth, this);
>   throw new InvalidTopologyException("Failed to add " + 
> NodeBase.getPath(node) +
>   ": You cannot have a rack and a non-rack node at the same " +
>   "level of the network topology.");
> }
> Node rack = getNodeForNetworkLocation(node);
> if (rack != null && !(rack instanceof InnerNode)) {
>   throw new IllegalArgumentException("Unexpected data node " 
>  + node.toString() 
>  + " at an illegal network location");
> }
> if (clusterMap.add(node)) {
>   LOG.info("Adding a new node: "+NodeBase.getPath(node));
>   if (rack == null) {
> incrementRacks();
>   }
>   if (!(node instanceof InnerNode)) {
> if (depthOfAllLeaves == -1) {
>   depthOfAllLeaves = node.getLevel();
> }
>   }
> }
> LOG.debug("NetworkTopology became:\n{}", this);
>   } finally {
> netlock.writeLock().unlock();
>   }
> }
> {code}
> {code:java}
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
>     "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> {code}
> The later check if (!(node instanceof InnerNode)) is redundant, since the
> method already throws an IllegalArgumentException above when the node is an
> InnerNode:
> {code:java}
> if (!(node instanceof InnerNode)) {
>   if (depthOfAllLeaves == -1) {
>     depthOfAllLeaves = node.getLevel();
>   }
> }
> {code}
> So I think the if (!(node instanceof InnerNode)) check should be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16662) Remove unnecessary in NetworkTopology#add()

2019-10-18 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-16662:
--
Summary: Remove unnecessary in NetworkTopology#add()  (was: Remove invalid 
judgment in NetworkTopology#add())

> Remove unnecessary in NetworkTopology#add()
> ---
>
> Key: HADOOP-16662
> URL: https://issues.apache.org/jira/browse/HADOOP-16662
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16662.001.patch
>
>
> The method of NetworkTopology#add() as follow:
> {code:java}
> /** Add a leaf node
>  * Update node counter & rack counter if necessary
>  * @param node node to be added; can be null
>  * @exception IllegalArgumentException if add a node to a leave 
>or node to be added is not a leaf
>  */
> public void add(Node node) {
>   if (node==null) return;
>   int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
>   netlock.writeLock().lock();
>   try {
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
> "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
>   LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
>   NodeBase.getPath(node), newDepth, this);
>   throw new InvalidTopologyException("Failed to add " + 
> NodeBase.getPath(node) +
>   ": You cannot have a rack and a non-rack node at the same " +
>   "level of the network topology.");
> }
> Node rack = getNodeForNetworkLocation(node);
> if (rack != null && !(rack instanceof InnerNode)) {
>   throw new IllegalArgumentException("Unexpected data node " 
>  + node.toString() 
>  + " at an illegal network location");
> }
> if (clusterMap.add(node)) {
>   LOG.info("Adding a new node: "+NodeBase.getPath(node));
>   if (rack == null) {
> incrementRacks();
>   }
>   if (!(node instanceof InnerNode)) {
> if (depthOfAllLeaves == -1) {
>   depthOfAllLeaves = node.getLevel();
> }
>   }
> }
> LOG.debug("NetworkTopology became:\n{}", this);
>   } finally {
> netlock.writeLock().unlock();
>   }
> }
> {code}
> {code:java}
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
>     "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> {code}
> The later check if (!(node instanceof InnerNode)) is redundant, since the
> method already throws an IllegalArgumentException above when the node is an
> InnerNode:
> {code:java}
> if (!(node instanceof InnerNode)) {
>   if (depthOfAllLeaves == -1) {
>     depthOfAllLeaves = node.getLevel();
>   }
> }
> {code}
> So I think the if (!(node instanceof InnerNode)) check should be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16662) Remove invalid judgment in NetworkTopology#add()

2019-10-18 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955072#comment-16955072
 ] 

Hadoop QA commented on HADOOP-16662:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 34m 
58s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
20s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HADOOP-16662 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12983489/HADOOP-16662.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8e43c9a42dc7 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 155864d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16604/testReport/ |
| Max. process+thread count | 1345 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16604/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Remove invalid judgment in 

[jira] [Commented] (HADOOP-16662) Remove invalid judgment in NetworkTopology#add()

2019-10-18 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955063#comment-16955063
 ] 

Ayush Saxena commented on HADOOP-16662:
---

Thanx [~leosun08] for the patch.
v001 LGTM, +1 (pending Jenkins)

> Remove invalid judgment in NetworkTopology#add()
> 
>
> Key: HADOOP-16662
> URL: https://issues.apache.org/jira/browse/HADOOP-16662
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16662.001.patch
>
>
> The method of NetworkTopology#add() as follow:
> {code:java}
> /** Add a leaf node
>  * Update node counter & rack counter if necessary
>  * @param node node to be added; can be null
>  * @exception IllegalArgumentException if add a node to a leave 
>or node to be added is not a leaf
>  */
> public void add(Node node) {
>   if (node==null) return;
>   int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
>   netlock.writeLock().lock();
>   try {
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
> "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
>   LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
>   NodeBase.getPath(node), newDepth, this);
>   throw new InvalidTopologyException("Failed to add " + 
> NodeBase.getPath(node) +
>   ": You cannot have a rack and a non-rack node at the same " +
>   "level of the network topology.");
> }
> Node rack = getNodeForNetworkLocation(node);
> if (rack != null && !(rack instanceof InnerNode)) {
>   throw new IllegalArgumentException("Unexpected data node " 
>  + node.toString() 
>  + " at an illegal network location");
> }
> if (clusterMap.add(node)) {
>   LOG.info("Adding a new node: "+NodeBase.getPath(node));
>   if (rack == null) {
> incrementRacks();
>   }
>   if (!(node instanceof InnerNode)) {
> if (depthOfAllLeaves == -1) {
>   depthOfAllLeaves = node.getLevel();
> }
>   }
> }
> LOG.debug("NetworkTopology became:\n{}", this);
>   } finally {
> netlock.writeLock().unlock();
>   }
> }
> {code}
> {code:java}
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
>     "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> {code}
> The later check if (!(node instanceof InnerNode)) is redundant, since the
> method already throws an IllegalArgumentException above when the node is an
> InnerNode:
> {code:java}
> if (!(node instanceof InnerNode)) {
>   if (depthOfAllLeaves == -1) {
>     depthOfAllLeaves = node.getLevel();
>   }
> }
> {code}
> So I think the if (!(node instanceof InnerNode)) check should be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-8159) NetworkTopology: getLeaf should check for invalid topologies

2019-10-18 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955055#comment-16955055
 ] 

Lisheng Sun commented on HADOOP-8159:
-

I opened HADOOP-16662 to tackle this. Thank you, [~elgoiri].

> NetworkTopology: getLeaf should check for invalid topologies
> 
>
> Key: HADOOP-8159
> URL: https://issues.apache.org/jira/browse/HADOOP-8159
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
> Fix For: 1.1.0, 2.0.0-alpha
>
> Attachments: HADOOP-8159-b1.005.patch, HADOOP-8159-b1.007.patch, 
> HADOOP-8159.005.patch, HADOOP-8159.006.patch, HADOOP-8159.007.patch, 
> HADOOP-8159.008.patch, HADOOP-8159.009.patch
>
>
> Currently, in NetworkTopology, getLeaf doesn't do too much validation on the 
> InnerNode object itself. This results in us getting ClassCastException 
> sometimes when the network topology is invalid. We should have a less 
> confusing exception message for this case.
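
As a purely hypothetical illustration of the kind of validation being discussed
(not the actual HADOOP-8159 patch), a getLeaf-style helper could verify the node
type before casting and fail with a descriptive message instead of letting a
ClassCastException escape:

{code:java}
// Hypothetical helper: validate the topology node before casting so that an
// invalid topology produces a clear error rather than a ClassCastException.
private InnerNode asInnerNode(Node node, String path) {
  if (!(node instanceof InnerNode)) {
    throw new IllegalArgumentException(
        "Invalid network topology: expected an inner node at " + path
        + " but found " + (node == null ? "null" : node.getClass().getName()));
  }
  return (InnerNode) node;
}
{code}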



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16662) Remove invalid judgment in NetworkTopology#add()

2019-10-18 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16662:
-
Status: Patch Available  (was: Open)

> Remove invalid judgment in NetworkTopology#add()
> 
>
> Key: HADOOP-16662
> URL: https://issues.apache.org/jira/browse/HADOOP-16662
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16662.001.patch
>
>
> The method of NetworkTopology#add() as follow:
> {code:java}
> /** Add a leaf node
>  * Update node counter & rack counter if necessary
>  * @param node node to be added; can be null
>  * @exception IllegalArgumentException if add a node to a leave 
>or node to be added is not a leaf
>  */
> public void add(Node node) {
>   if (node==null) return;
>   int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
>   netlock.writeLock().lock();
>   try {
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
> "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
>   LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
>   NodeBase.getPath(node), newDepth, this);
>   throw new InvalidTopologyException("Failed to add " + 
> NodeBase.getPath(node) +
>   ": You cannot have a rack and a non-rack node at the same " +
>   "level of the network topology.");
> }
> Node rack = getNodeForNetworkLocation(node);
> if (rack != null && !(rack instanceof InnerNode)) {
>   throw new IllegalArgumentException("Unexpected data node " 
>  + node.toString() 
>  + " at an illegal network location");
> }
> if (clusterMap.add(node)) {
>   LOG.info("Adding a new node: "+NodeBase.getPath(node));
>   if (rack == null) {
> incrementRacks();
>   }
>   if (!(node instanceof InnerNode)) {
> if (depthOfAllLeaves == -1) {
>   depthOfAllLeaves = node.getLevel();
> }
>   }
> }
> LOG.debug("NetworkTopology became:\n{}", this);
>   } finally {
> netlock.writeLock().unlock();
>   }
> }
> {code}
> {code:java}
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
>     "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> {code}
> The later check if (!(node instanceof InnerNode)) is redundant, since the
> method already throws an IllegalArgumentException above when the node is an
> InnerNode:
> {code:java}
> if (!(node instanceof InnerNode)) {
>   if (depthOfAllLeaves == -1) {
>     depthOfAllLeaves = node.getLevel();
>   }
> }
> {code}
> So I think the if (!(node instanceof InnerNode)) check should be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16662) Remove invalid judgment in NetworkTopology#add()

2019-10-18 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16662:
-
Attachment: HADOOP-16662.001.patch

> Remove invalid judgment in NetworkTopology#add()
> 
>
> Key: HADOOP-16662
> URL: https://issues.apache.org/jira/browse/HADOOP-16662
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16662.001.patch
>
>
> The method of NetworkTopology#add() as follow:
> {code:java}
> /** Add a leaf node
>  * Update node counter & rack counter if necessary
>  * @param node node to be added; can be null
>  * @exception IllegalArgumentException if add a node to a leave 
>or node to be added is not a leaf
>  */
> public void add(Node node) {
>   if (node==null) return;
>   int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
>   netlock.writeLock().lock();
>   try {
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
> "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
>   LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
>   NodeBase.getPath(node), newDepth, this);
>   throw new InvalidTopologyException("Failed to add " + 
> NodeBase.getPath(node) +
>   ": You cannot have a rack and a non-rack node at the same " +
>   "level of the network topology.");
> }
> Node rack = getNodeForNetworkLocation(node);
> if (rack != null && !(rack instanceof InnerNode)) {
>   throw new IllegalArgumentException("Unexpected data node " 
>  + node.toString() 
>  + " at an illegal network location");
> }
> if (clusterMap.add(node)) {
>   LOG.info("Adding a new node: "+NodeBase.getPath(node));
>   if (rack == null) {
> incrementRacks();
>   }
>   if (!(node instanceof InnerNode)) {
> if (depthOfAllLeaves == -1) {
>   depthOfAllLeaves = node.getLevel();
> }
>   }
> }
> LOG.debug("NetworkTopology became:\n{}", this);
>   } finally {
> netlock.writeLock().unlock();
>   }
> }
> {code}
> {code:java}
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
>     "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> {code}
> The later check if (!(node instanceof InnerNode)) is redundant, since the
> method already throws an IllegalArgumentException above when the node is an
> InnerNode:
> {code:java}
> if (!(node instanceof InnerNode)) {
>   if (depthOfAllLeaves == -1) {
>     depthOfAllLeaves = node.getLevel();
>   }
> }
> {code}
> So I think the if (!(node instanceof InnerNode)) check should be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16662) Remove invalid judgment in NetworkTopology#add()

2019-10-18 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16662:
-
Description: 
The method of NetworkTopology#add() as follow:
{code:java}
/** Add a leaf node
 * Update node counter & rack counter if necessary
 * @param node node to be added; can be null
 * @exception IllegalArgumentException if add a node to a leave 
   or node to be added is not a leaf
 */
public void add(Node node) {
  if (node==null) return;
  int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
  netlock.writeLock().lock();
  try {
if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
"Not allow to add an inner node: "+NodeBase.getPath(node));
}
if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
  LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
  NodeBase.getPath(node), newDepth, this);
  throw new InvalidTopologyException("Failed to add " + 
NodeBase.getPath(node) +
  ": You cannot have a rack and a non-rack node at the same " +
  "level of the network topology.");
}
Node rack = getNodeForNetworkLocation(node);
if (rack != null && !(rack instanceof InnerNode)) {
  throw new IllegalArgumentException("Unexpected data node " 
 + node.toString() 
 + " at an illegal network location");
}
if (clusterMap.add(node)) {
  LOG.info("Adding a new node: "+NodeBase.getPath(node));
  if (rack == null) {
incrementRacks();
  }
  if (!(node instanceof InnerNode)) {
if (depthOfAllLeaves == -1) {
  depthOfAllLeaves = node.getLevel();
}
  }
}
LOG.debug("NetworkTopology became:\n{}", this);
  } finally {
netlock.writeLock().unlock();
  }
}
{code}
{code:java}
if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
    "Not allow to add an inner node: "+NodeBase.getPath(node));
}
{code}
The later check if (!(node instanceof InnerNode)) is redundant, since the
method already throws an IllegalArgumentException above when the node is an
InnerNode:
{code:java}
if (!(node instanceof InnerNode)) {
  if (depthOfAllLeaves == -1) {
    depthOfAllLeaves = node.getLevel();
  }
}
{code}
So I think the if (!(node instanceof InnerNode)) check should be removed.

  was:
The method of NetworkTopology#add() as follow:
{code:java}
/** Add a leaf node
 * Update node counter & rack counter if necessary
 * @param node node to be added; can be null
 * @exception IllegalArgumentException if add a node to a leave 
   or node to be added is not a leaf
 */
public void add(Node node) {
  if (node==null) return;
  int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
  netlock.writeLock().lock();
  try {
if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
"Not allow to add an inner node: "+NodeBase.getPath(node));
}
if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
  LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
  NodeBase.getPath(node), newDepth, this);
  throw new InvalidTopologyException("Failed to add " + 
NodeBase.getPath(node) +
  ": You cannot have a rack and a non-rack node at the same " +
  "level of the network topology.");
}
Node rack = getNodeForNetworkLocation(node);
if (rack != null && !(rack instanceof InnerNode)) {
  throw new IllegalArgumentException("Unexpected data node " 
 + node.toString() 
 + " at an illegal network location");
}
if (clusterMap.add(node)) {
  LOG.info("Adding a new node: "+NodeBase.getPath(node));
  if (rack == null) {
incrementRacks();
  }
  if (!(node instanceof InnerNode)) {
if (depthOfAllLeaves == -1) {
  depthOfAllLeaves = node.getLevel();
}
  }
}
LOG.debug("NetworkTopology became:\n{}", this);
  } finally {
netlock.writeLock().unlock();
  }
}
{code}
{code:java}
if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
    "Not allow to add an inner node: "+NodeBase.getPath(node));
}
{code}
The later check if (!(node instanceof InnerNode)) is redundant, since the
method already throws an IllegalArgumentException above when the node is an
InnerNode:
{code:java}
if (!(node instanceof InnerNode)) {
  if (depthOfAllLeaves == -1) {
    depthOfAllLeaves = node.getLevel();
  }
}
{code}
So I think the if (!(node instanceof InnerNode)) check should be removed.


> Remove invalid judgment in NetworkTopology#add()
> 
>
> Key: HADOOP-16662
> URL: https://issues.apache.org/jira/browse/HADOOP-16662
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng 

[jira] [Updated] (HADOOP-16662) Remove invalid judgment in NetworkTopology#add()

2019-10-18 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16662:
-
Description: 
The method of NetworkTopology#add() as follow:
{code:java}
/** Add a leaf node
 * Update node counter & rack counter if necessary
 * @param node node to be added; can be null
 * @exception IllegalArgumentException if add a node to a leave 
   or node to be added is not a leaf
 */
public void add(Node node) {
  if (node==null) return;
  int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
  netlock.writeLock().lock();
  try {
if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
"Not allow to add an inner node: "+NodeBase.getPath(node));
}
if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
  LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
  NodeBase.getPath(node), newDepth, this);
  throw new InvalidTopologyException("Failed to add " + 
NodeBase.getPath(node) +
  ": You cannot have a rack and a non-rack node at the same " +
  "level of the network topology.");
}
Node rack = getNodeForNetworkLocation(node);
if (rack != null && !(rack instanceof InnerNode)) {
  throw new IllegalArgumentException("Unexpected data node " 
 + node.toString() 
 + " at an illegal network location");
}
if (clusterMap.add(node)) {
  LOG.info("Adding a new node: "+NodeBase.getPath(node));
  if (rack == null) {
incrementRacks();
  }
  if (!(node instanceof InnerNode)) {
if (depthOfAllLeaves == -1) {
  depthOfAllLeaves = node.getLevel();
}
  }
}
LOG.debug("NetworkTopology became:\n{}", this);
  } finally {
netlock.writeLock().unlock();
  }
}
{code}
{code:java}
if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
    "Not allow to add an inner node: "+NodeBase.getPath(node));
}
{code}
The later check if (!(node instanceof InnerNode)) is redundant, since the
method already throws an IllegalArgumentException above when the node is an
InnerNode:
{code:java}
if (!(node instanceof InnerNode)) {
  if (depthOfAllLeaves == -1) {
    depthOfAllLeaves = node.getLevel();
  }
}
{code}
So I think the if (!(node instanceof InnerNode)) check should be removed.

  was:
The method of NetworkTopology#add() as follow:
{code:java}
/** Add a leaf node
 * Update node counter & rack counter if necessary
 * @param node node to be added; can be null
 * @exception IllegalArgumentException if add a node to a leave 
   or node to be added is not a leaf
 */
public void add(Node node) {
  if (node==null) return;
  int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
  netlock.writeLock().lock();
  try {
if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
"Not allow to add an inner node: "+NodeBase.getPath(node));
}
if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
  LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
  NodeBase.getPath(node), newDepth, this);
  throw new InvalidTopologyException("Failed to add " + 
NodeBase.getPath(node) +
  ": You cannot have a rack and a non-rack node at the same " +
  "level of the network topology.");
}
Node rack = getNodeForNetworkLocation(node);
if (rack != null && !(rack instanceof InnerNode)) {
  throw new IllegalArgumentException("Unexpected data node " 
 + node.toString() 
 + " at an illegal network location");
}
if (clusterMap.add(node)) {
  LOG.info("Adding a new node: "+NodeBase.getPath(node));
  if (rack == null) {
incrementRacks();
  }
  if (!(node instanceof InnerNode)) {
if (depthOfAllLeaves == -1) {
  depthOfAllLeaves = node.getLevel();
}
  }
}
LOG.debug("NetworkTopology became:\n{}", this);
  } finally {
netlock.writeLock().unlock();
  }
}
{code}


> Remove invalid judgment in NetworkTopology#add()
> 
>
> Key: HADOOP-16662
> URL: https://issues.apache.org/jira/browse/HADOOP-16662
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
>
> The method of NetworkTopology#add() as follow:
> {code:java}
> /** Add a leaf node
>  * Update node counter & rack counter if necessary
>  * @param node node to be added; can be null
>  * @exception IllegalArgumentException if add a node to a leave 
>or node to be added is not a leaf
>  */
> public void add(Node node) {
>   if (node==null) return;
>   int 

[jira] [Created] (HADOOP-16662) Remove invalid judgment in NetworkTopology#add()

2019-10-18 Thread Lisheng Sun (Jira)
Lisheng Sun created HADOOP-16662:


 Summary: Remove invalid judgment in NetworkTopology#add()
 Key: HADOOP-16662
 URL: https://issues.apache.org/jira/browse/HADOOP-16662
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Lisheng Sun
Assignee: Lisheng Sun


The method of NetworkTopology#add() as follow:
{code:java}
/** Add a leaf node
 * Update node counter & rack counter if necessary
 * @param node node to be added; can be null
 * @exception IllegalArgumentException if add a node to a leave 
   or node to be added is not a leaf
 */
public void add(Node node) {
  if (node==null) return;
  int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
  netlock.writeLock().lock();
  try {
if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
"Not allow to add an inner node: "+NodeBase.getPath(node));
}
if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
  LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
  NodeBase.getPath(node), newDepth, this);
  throw new InvalidTopologyException("Failed to add " + 
NodeBase.getPath(node) +
  ": You cannot have a rack and a non-rack node at the same " +
  "level of the network topology.");
}
Node rack = getNodeForNetworkLocation(node);
if (rack != null && !(rack instanceof InnerNode)) {
  throw new IllegalArgumentException("Unexpected data node " 
 + node.toString() 
 + " at an illegal network location");
}
if (clusterMap.add(node)) {
  LOG.info("Adding a new node: "+NodeBase.getPath(node));
  if (rack == null) {
incrementRacks();
  }
  if (!(node instanceof InnerNode)) {
if (depthOfAllLeaves == -1) {
  depthOfAllLeaves = node.getLevel();
}
  }
}
LOG.debug("NetworkTopology became:\n{}", this);
  } finally {
netlock.writeLock().unlock();
  }
}
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16579) Upgrade to Apache Curator 4.2.0 and ZooKeeper 3.5.6 in Hadoop

2019-10-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954952#comment-16954952
 ] 

Hudson commented on HADOOP-16579:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17549 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17549/])
HADOOP-16579. Upgrade to Curator 4.2.0 and ZooKeeper 3.5.5 (#1656). (weichiu: 
rev 6d92aa7c30439d78deb68cc3186a67557544681f)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFailoverController.java
* (edit) hadoop-project/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/lib/TestZKClient.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/ClientBaseWithFixes.java
* (edit) 
hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/RegistrySecurity.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestLeaderElectorService.java
* (edit) 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/curator/ZKCuratorManager.java
* (edit) 
hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/ZookeeperConfigOptions.java
* (edit) 
hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/server/services/MicroZookeeperService.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java


> Upgrade to Apache Curator 4.2.0 and ZooKeeper 3.5.6 in Hadoop
> -
>
> Key: HADOOP-16579
> URL: https://issues.apache.org/jira/browse/HADOOP-16579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Mate Szalay-Beko
>Assignee: Norbert Kalmár
>Priority: Major
> Fix For: 3.3.0
>
>
> *Update:* the original idea was to only update Curator but keep the old 
> ZooKeeper version in Hadoop. However, we encountered some run-time 
> backward-incompatibility during unit tests with Curator 4.2.0 and ZooKeeper 
> 3.5.5. We haven't really investigated deeply these issues, but upgraded to 
> ZooKeeper 3.5.5 (and later to 3.5.6). We had to do some minor fixes in the 
> unit tests (and also had to change some deprecated Curator API calls), but 
> [the latest PR|https://github.com/apache/hadoop/pull/1656] seems to be stable.
> ZooKeeper 3.5.6 just got released during our work. (I think the official 
> announcement will get out maybe tomorrow, but it is already available in 
> maven central or on the [Apache ZooKeeper ftp 
> site|https://www-eu.apache.org/dist/zookeeper/]). It is considered to be a 
> stable version, contains some minor fixes and improvements, plus some CVE 
> fixes. See the [release 
> notes|https://github.com/apache/zookeeper/blob/branch-3.5.6/zookeeper-docs/src/main/resources/markdown/releasenotes.md].
>  
> 
> Currently in Hadoop we are using [ZooKeeper version 
> 3.4.13|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L90].
>  ZooKeeper 3.5.5 is the latest stable Apache ZooKeeper release. It contains 
> many new features (including SSL related improvements which can be very 
> important for production use; see [the release 
> notes|https://zookeeper.apache.org/doc/r3.5.5/releasenotes.html]).
> Apache Curator is a high level ZooKeeper client library, that makes it easier 
> to use the low level ZooKeeper API. Currently [in Hadoop we are using Curator 
> 2.13.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L91]
>  and [in Ozone we use Curator 
> 2.12.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/pom.ozone.xml#L146].
> Curator 2.x supports only the ZooKeeper 3.4.x releases, while Curator 
> 3.x is compatible only with the new ZooKeeper 3.5.x releases. Fortunately, 
> the latest Curator 4.x versions are compatible with both ZooKeeper 3.4.x and 
> 3.5.x. (see [the relevant Curator 
> page|https://curator.apache.org/zk-compatibility.html]). Many Apache projects 
> have already migrated to Curator 4 (like HBase, Phoenix, Druid, etc.), other 
> components are doing it right now (e.g. Hive).
> *The aims of this task are* to:
>  - change Curator version in Hadoop to the latest stable 4.x version 
> (currently 4.2.0)
>  - also make sure we don't have multiple ZooKeeper versions in the classpath 
> to avoid runtime problems (it is 
> 

[jira] [Resolved] (HADOOP-16579) Upgrade to Apache Curator 4.2.0 and ZooKeeper 3.5.6 in Hadoop

2019-10-18 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-16579.
--
Fix Version/s: 3.3.0
   Resolution: Fixed

PR is merged into trunk.
Thanks [~nkalmar] and [~symat]!

> Upgrade to Apache Curator 4.2.0 and ZooKeeper 3.5.6 in Hadoop
> -
>
> Key: HADOOP-16579
> URL: https://issues.apache.org/jira/browse/HADOOP-16579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Mate Szalay-Beko
>Assignee: Norbert Kalmár
>Priority: Major
> Fix For: 3.3.0
>
>
> *Update:* the original idea was to only update Curator but keep the old 
> ZooKeeper version in Hadoop. However, we encountered some run-time 
> backward-incompatibility during unit tests with Curator 4.2.0 and ZooKeeper 
> 3.5.5. We haven't really investigated deeply these issues, but upgraded to 
> ZooKeeper 3.5.5 (and later to 3.5.6). We had to do some minor fixes in the 
> unit tests (and also had to change some deprecated Curator API calls), but 
> [the latest PR|https://github.com/apache/hadoop/pull/1656] seems to be stable.
> ZooKeeper 3.5.6 just got released during our work. (I think the official 
> announcement will get out maybe tomorrow, but it is already available in 
> maven central or on the [Apache ZooKeeper ftp 
> site|https://www-eu.apache.org/dist/zookeeper/]). It is considered to be a 
> stable version, contains some minor fixes and improvements, plus some CVE 
> fixes. See the [release 
> notes|https://github.com/apache/zookeeper/blob/branch-3.5.6/zookeeper-docs/src/main/resources/markdown/releasenotes.md].
>  
> 
> Currently in Hadoop we are using [ZooKeeper version 
> 3.4.13|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L90].
>  ZooKeeper 3.5.5 is the latest stable Apache ZooKeeper release. It contains 
> many new features (including SSL related improvements which can be very 
> important for production use; see [the release 
> notes|https://zookeeper.apache.org/doc/r3.5.5/releasenotes.html]).
> Apache Curator is a high level ZooKeeper client library, that makes it easier 
> to use the low level ZooKeeper API. Currently [in Hadoop we are using Curator 
> 2.13.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L91]
>  and [in Ozone we use Curator 
> 2.12.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/pom.ozone.xml#L146].
> Curator 2.x supports only the ZooKeeper 3.4.x releases, while Curator 
> 3.x is compatible only with the new ZooKeeper 3.5.x releases. Fortunately, 
> the latest Curator 4.x versions are compatible with both ZooKeeper 3.4.x and 
> 3.5.x. (see [the relevant Curator 
> page|https://curator.apache.org/zk-compatibility.html]). Many Apache projects 
> have already migrated to Curator 4 (like HBase, Phoenix, Druid, etc.), other 
> components are doing it right now (e.g. Hive).
> *The aims of this task are* to:
>  - change Curator version in Hadoop to the latest stable 4.x version 
> (currently 4.2.0)
>  - also make sure we don't have multiple ZooKeeper versions in the classpath 
> to avoid runtime problems (it is 
> [recommended|https://curator.apache.org/zk-compatibility.html] to exclude the 
> ZooKeeper which come with Curator, so that there will be only a single 
> ZooKeeper version used runtime in Hadoop)
> In this ticket we still don't want to change the default ZooKeeper version in 
> Hadoop, we only want to make it possible for the community to be able to 
> build / use Hadoop with the new ZooKeeper (e.g. if they need to secure the 
> ZooKeeper communication with SSL, which is only supported in the new ZooKeeper 
> version). Upgrading to Curator 4.x should keep Hadoop compatible with 
> both ZooKeeper 3.4 and 3.5.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang merged pull request #1656: HADOOP-16579. Upgrade to Curator 4.2.0 and ZooKeeper 3.5.5

2019-10-18 Thread GitBox
jojochuang merged pull request #1656: HADOOP-16579. Upgrade to Curator 4.2.0 
and ZooKeeper 3.5.5
URL: https://github.com/apache/hadoop/pull/1656
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on issue #1656: HADOOP-16579. Upgrade to Curator 4.2.0 and ZooKeeper 3.5.5

2019-10-18 Thread GitBox
jojochuang commented on issue #1656: HADOOP-16579. Upgrade to Curator 4.2.0 and 
ZooKeeper 3.5.5
URL: https://github.com/apache/hadoop/pull/1656#issuecomment-543909856
 
 
   +1


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16658) S3A connector does not support including the token renewer in the token identifier

2019-10-18 Thread Philip Zampino (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954916#comment-16954916
 ] 

Philip Zampino commented on HADOOP-16658:
-

[https://github.com/apache/hadoop/pull/1664]

> S3A connector does not support including the token renewer in the token 
> identifier
> --
>
> Key: HADOOP-16658
> URL: https://issues.apache.org/jira/browse/HADOOP-16658
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hadoop-aws
>Affects Versions: 3.3.0
>Reporter: Philip Zampino
>Priority: Major
>
> To support management of delegation token expirations by way of the Yarn 
> TokenRenewer facility, delegation token identifiers MUST include a valid 
> renewer or the associated TokenRenewer implementation will be ignored.
> Currently, the renewer isn't propagated to the bindings for token creation, 
> which means the tokens can't ever have the renewer set on them.
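
As a rough, hypothetical sketch of what carrying the renewer into a delegation
token identifier looks like at the generic Hadoop API level (the class name and
token kind below are made up; the real S3A binding types differ):

{code:java}
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier;

// Hypothetical identifier: the point is that the renewer is passed through to
// the identifier, so YARN's renewal machinery can locate a matching TokenRenewer.
public class ExampleTokenIdentifier extends AbstractDelegationTokenIdentifier {
  public static final Text KIND = new Text("EXAMPLE_DELEGATION_TOKEN");

  public ExampleTokenIdentifier() {
    // no-arg constructor needed for deserialization
  }

  public ExampleTokenIdentifier(Text owner, Text renewer, Text realUser) {
    super(owner, renewer, realUser);  // renewer travels inside the identifier
  }

  @Override
  public Text getKind() {
    return KIND;
  }
}
{code}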



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16612) Track Azure Blob File System client-perceived latency

2019-10-18 Thread Da Zhou (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954848#comment-16954848
 ] 

Da Zhou commented on HADOOP-16612:
--

Hi [~jeeteshm], could you sync with the latest trunk? The first two test 
failures with status codes should be fixed by this commit: 
[https://github.com/apache/hadoop/pull/1498]
You can ignore the timeout for your scale tests for now.

> Track Azure Blob File System client-perceived latency
> -
>
> Key: HADOOP-16612
> URL: https://issues.apache.org/jira/browse/HADOOP-16612
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, hdfs-client
>Reporter: Jeetesh Mangwani
>Assignee: Jeetesh Mangwani
>Priority: Major
>
> Track the end-to-end performance of ADLS Gen 2 REST APIs by measuring 
> latencies in the Hadoop ABFS driver.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] pzampino opened a new pull request #1664: HADOOP-16658 - S3A connector does not support including the token ren…

2019-10-18 Thread GitBox
pzampino opened a new pull request #1664: HADOOP-16658 - S3A connector does not 
support including the token ren…
URL: https://github.com/apache/hadoop/pull/1664
 
 
   …ewer in the token identifier
   
   Ran 'mvn verify' with endpoint s3.us-east-1.amazonaws.com
   Two tests failed, but passed when run individually:
   * ITestS3AConfiguration#testAutomaticProxyPortSelection
   * ITestS3AInconsistency#testGetFileStatus
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16612) Track Azure Blob File System client-perceived latency

2019-10-18 Thread Billie Rinaldi (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954832#comment-16954832
 ] 

Billie Rinaldi commented on HADOOP-16612:
-

Thanks for working on this issue, [~jeeteshm]. Have you evaluated instrumenting 
the ABFS driver with Hadoop's [existing tracing 
system|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Tracing.html]
 instead of (or in addition to) creating a new instrumentation system?

> Track Azure Blob File System client-perceived latency
> -
>
> Key: HADOOP-16612
> URL: https://issues.apache.org/jira/browse/HADOOP-16612
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, hdfs-client
>Reporter: Jeetesh Mangwani
>Assignee: Jeetesh Mangwani
>Priority: Major
>
> Track the end-to-end performance of ADLS Gen 2 REST APIs by measuring 
> latencies in the Hadoop ABFS driver.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-8159) NetworkTopology: getLeaf should check for invalid topologies

2019-10-18 Thread Jira


[ 
https://issues.apache.org/jira/browse/HADOOP-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954810#comment-16954810
 ] 

Íñigo Goiri commented on HADOOP-8159:
-

[~leosun08], that makes sense, it looks redundant.
Please open a separate JIRA to tackle this.

> NetworkTopology: getLeaf should check for invalid topologies
> 
>
> Key: HADOOP-8159
> URL: https://issues.apache.org/jira/browse/HADOOP-8159
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
> Fix For: 1.1.0, 2.0.0-alpha
>
> Attachments: HADOOP-8159-b1.005.patch, HADOOP-8159-b1.007.patch, 
> HADOOP-8159.005.patch, HADOOP-8159.006.patch, HADOOP-8159.007.patch, 
> HADOOP-8159.008.patch, HADOOP-8159.009.patch
>
>
> Currently, in NetworkTopology, getLeaf doesn't do much validation on the 
> InnerNode object itself. As a result, we sometimes get a ClassCastException 
> when the network topology is invalid. We should produce a less confusing 
> exception message for this case.
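
For reference, a minimal illustration of the kind of guard the description above 
asks for (not the actual patch; the class and message are illustrative): check 
the node type before casting, so an invalid topology produces a descriptive 
error rather than a raw ClassCastException.
{code:java}
import org.apache.hadoop.net.InnerNode;
import org.apache.hadoop.net.Node;
import org.apache.hadoop.net.NodeBase;

final class TopologyChecks {
  private TopologyChecks() {
  }

  /** Fail with a clear message when a leaf sits where an inner node is
   *  expected, instead of letting a later cast blow up. */
  static InnerNode asInnerNode(Node node) {
    if (!(node instanceof InnerNode)) {
      throw new IllegalStateException("Invalid network topology: expected an "
          + "inner node at " + NodeBase.getPath(node) + " but found a leaf");
    }
    return (InnerNode) node;
  }
}
{code}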



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-10-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954708#comment-16954708
 ] 

Hudson commented on HADOOP-16152:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17548 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17548/])
HADOOP-16152. Upgrade Eclipse Jetty version to 9.4.x. Contributed by (weichiu: 
rev 3d41f330186f6481850b46e0c345d3ecf7b1b818)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestJettyHelper.java
* (edit) hadoop-client-modules/hadoop-client-runtime/pom.xml
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpRequestLog.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpRequestLog.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml
* (edit) hadoop-client-modules/hadoop-client-minicluster/pom.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/DatanodeHttpServer.java
* (edit) hadoop-project/pom.xml
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java


> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16152.002.patch, HADOOP-16152.002.patch, 
> HADOOP-16152.003.patch, HADOOP-16152.004.patch, HADOOP-16152.005.patch, 
> HADOOP-16152.006.patch, HADOOP-16152.v1.patch
>
>
> Some big data projects have already upgraded to Jetty 9.4.x, which causes 
> some compatibility issues.
> Spark: 
> [https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146]
> Calcite: 
> [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
> Hive: HIVE-21211



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-10-18 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16152:
-
Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks [~yumwang] for the initial work and [~smeng] for carrying it over the 
finish line. Thanks [~ste...@apache.org] for the comments as well.

Pushed the patch to trunk. I'll probably cherry-pick the commit to lower 
branches later.

> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16152.002.patch, HADOOP-16152.002.patch, 
> HADOOP-16152.003.patch, HADOOP-16152.004.patch, HADOOP-16152.005.patch, 
> HADOOP-16152.006.patch, HADOOP-16152.v1.patch
>
>
> Some big data projects have already upgraded to Jetty 9.4.x, which causes 
> some compatibility issues.
> Spark: 
> [https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146]
> Calcite: 
> [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
> Hive: HIVE-21211



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16661) Test and doc TLS 1.3 support after HADOOP-16152

2019-10-18 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HADOOP-16661:


 Summary: Test and doc TLS 1.3 support after HADOOP-16152
 Key: HADOOP-16661
 URL: https://issues.apache.org/jira/browse/HADOOP-16661
 Project: Hadoop Common
  Issue Type: Task
Reporter: Wei-Chiu Chuang


HADOOP-16152 is going to update Jetty from 9.3 to 9.4.20, which should allow us 
to support TLS 1.3: https://www.eclipse.org/lists/jetty-users/msg08569.html

We should test and document TLS 1.3 support. Assuming that support depends on 
the JDK, it is likely available only on JDK 11 and above.
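
As a quick local check of what a given JDK offers (purely illustrative, not part 
of any patch), the supported protocol list can be printed with:
{code:java}
import java.util.Arrays;
import javax.net.ssl.SSLContext;

public class TlsProtocolCheck {
  public static void main(String[] args) throws Exception {
    // Prints the TLS versions the running JDK supports; "TLSv1.3" is
    // expected to show up on JDK 11 and above.
    SSLContext context = SSLContext.getDefault();
    System.out.println(
        Arrays.toString(context.getSupportedSSLParameters().getProtocols()));
  }
}
{code}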



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant commented on a change in pull request #1641: HDDS-2281. ContainerStateMachine#handleWriteChunk should ignore close container exception.

2019-10-18 Thread GitBox
bshashikant commented on a change in pull request #1641: HDDS-2281. 
ContainerStateMachine#handleWriteChunk should ignore close container exception.
URL: https://github.com/apache/hadoop/pull/1641#discussion_r336431591
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachineFailures.java
 ##
 @@ -418,6 +424,108 @@ public void 
testApplyTransactionIdempotencyWithClosedContainer()
 Assert.assertFalse(snapshot.getPath().equals(latestSnapshot.getPath()));
   }
 
+  // The test injects multiple write chunk requests along with closed container
+  // request thereby inducing a situation where a writeStateMachine call
+  // gets executed when the closed container apply completes thereby
+  // failing writeStateMachine call. In any case, our stateMachine should
+  // not be marked unhealthy and pipeline should not fail if container gets
+  // closed here.
+  @Test
+  public void testWriteStateMachineDataIdempotencyWithClosedContainer()
+  throws Exception {
+OzoneOutputStream key =
+objectStore.getVolume(volumeName).getBucket(bucketName)
+.createKey("ratis-1", 1024, ReplicationType.RATIS,
+ReplicationFactor.ONE, new HashMap<>());
+// First write and flush creates a container in the datanode
+key.write("ratis".getBytes());
+key.flush();
+key.write("ratis".getBytes());
+KeyOutputStream groupOutputStream = (KeyOutputStream) 
key.getOutputStream();
+List<OmKeyLocationInfo> locationInfoList =
+groupOutputStream.getLocationInfoList();
+Assert.assertEquals(1, locationInfoList.size());
+OmKeyLocationInfo omKeyLocationInfo = locationInfoList.get(0);
+ContainerData containerData =
+cluster.getHddsDatanodes().get(0).getDatanodeStateMachine()
+.getContainer().getContainerSet()
+.getContainer(omKeyLocationInfo.getContainerID())
+.getContainerData();
+Assert.assertTrue(containerData instanceof KeyValueContainerData);
+key.close();
+ContainerStateMachine stateMachine =
+(ContainerStateMachine) ContainerTestHelper.getStateMachine(cluster);
+SimpleStateMachineStorage storage =
+(SimpleStateMachineStorage) stateMachine.getStateMachineStorage();
+Path parentPath = storage.findLatestSnapshot().getFile().getPath();
+// Since the snapshot threshold is set to 1, since there are
+// applyTransactions, we should see snapshots
+Assert.assertTrue(parentPath.getParent().toFile().listFiles().length > 0);
+FileInfo snapshot = storage.findLatestSnapshot().getFile();
+Assert.assertNotNull(snapshot);
+long containerID = omKeyLocationInfo.getContainerID();
+Pipeline pipeline = cluster.getStorageContainerLocationClient()
+.getContainerWithPipeline(containerID).getPipeline();
+XceiverClientSpi xceiverClient =
+xceiverClientManager.acquireClient(pipeline);
+CountDownLatch latch = new CountDownLatch(100);
+int count = 0;
+AtomicInteger failCount = new AtomicInteger(0);
+Runnable r1 = () -> {
+  try {
+ContainerProtos.ContainerCommandRequestProto.Builder request =
+ContainerProtos.ContainerCommandRequestProto.newBuilder();
+request.setDatanodeUuid(pipeline.getFirstNode().getUuidString());
+request.setCmdType(ContainerProtos.Type.CloseContainer);
+request.setContainerID(containerID);
+request.setCloseContainer(
+ContainerProtos.CloseContainerRequestProto.getDefaultInstance());
+xceiverClient.sendCommand(request.build());
+  } catch (IOException e) {
+failCount.incrementAndGet();
+  }
+};
+Runnable r2 = () -> {
+  try {
+xceiverClient.sendCommand(ContainerTestHelper
+.getWriteChunkRequest(pipeline, omKeyLocationInfo.getBlockID(),
+1024, new Random().nextInt()));
+latch.countDown();
+  } catch (IOException e) {
+latch.countDown();
+if (!(HddsClientUtils
+.checkForException(e) instanceof ContainerNotOpenException)) {
+  failCount.incrementAndGet();
+}
+  }
+};
+
+for (int i=0 ; i < 100; i++) {
+  count++;
+  new Thread(r2).start();
 
 Review comment:
   will address in the next patch.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant commented on a change in pull request #1641: HDDS-2281. ContainerStateMachine#handleWriteChunk should ignore close container exception.

2019-10-18 Thread GitBox
bshashikant commented on a change in pull request #1641: HDDS-2281. 
ContainerStateMachine#handleWriteChunk should ignore close container exception.
URL: https://github.com/apache/hadoop/pull/1641#discussion_r336431510
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachineFailures.java
 ##
 @@ -418,6 +424,108 @@ public void 
testApplyTransactionIdempotencyWithClosedContainer()
 Assert.assertFalse(snapshot.getPath().equals(latestSnapshot.getPath()));
   }
 
+  // The test injects multiple write chunk requests along with closed container
+  // request thereby inducing a situation where a writeStateMachine call
+  // gets executed when the closed container apply completes thereby
+  // failing writeStateMachine call. In any case, our stateMachine should
+  // not be marked unhealthy and pipeline should not fail if container gets
+  // closed here.
+  @Test
+  public void testWriteStateMachineDataIdempotencyWithClosedContainer()
+  throws Exception {
+OzoneOutputStream key =
+objectStore.getVolume(volumeName).getBucket(bucketName)
+.createKey("ratis-1", 1024, ReplicationType.RATIS,
+ReplicationFactor.ONE, new HashMap<>());
+// First write and flush creates a container in the datanode
+key.write("ratis".getBytes());
+key.flush();
+key.write("ratis".getBytes());
+KeyOutputStream groupOutputStream = (KeyOutputStream) 
key.getOutputStream();
+List<OmKeyLocationInfo> locationInfoList =
+groupOutputStream.getLocationInfoList();
+Assert.assertEquals(1, locationInfoList.size());
+OmKeyLocationInfo omKeyLocationInfo = locationInfoList.get(0);
+ContainerData containerData =
+cluster.getHddsDatanodes().get(0).getDatanodeStateMachine()
+.getContainer().getContainerSet()
+.getContainer(omKeyLocationInfo.getContainerID())
+.getContainerData();
+Assert.assertTrue(containerData instanceof KeyValueContainerData);
+key.close();
+ContainerStateMachine stateMachine =
+(ContainerStateMachine) ContainerTestHelper.getStateMachine(cluster);
+SimpleStateMachineStorage storage =
+(SimpleStateMachineStorage) stateMachine.getStateMachineStorage();
+Path parentPath = storage.findLatestSnapshot().getFile().getPath();
+// Since the snapshot threshold is set to 1, since there are
+// applyTransactions, we should see snapshots
+Assert.assertTrue(parentPath.getParent().toFile().listFiles().length > 0);
+FileInfo snapshot = storage.findLatestSnapshot().getFile();
+Assert.assertNotNull(snapshot);
+long containerID = omKeyLocationInfo.getContainerID();
+Pipeline pipeline = cluster.getStorageContainerLocationClient()
+.getContainerWithPipeline(containerID).getPipeline();
+XceiverClientSpi xceiverClient =
+xceiverClientManager.acquireClient(pipeline);
+CountDownLatch latch = new CountDownLatch(100);
+int count = 0;
+AtomicInteger failCount = new AtomicInteger(0);
+Runnable r1 = () -> {
+  try {
+ContainerProtos.ContainerCommandRequestProto.Builder request =
+ContainerProtos.ContainerCommandRequestProto.newBuilder();
+request.setDatanodeUuid(pipeline.getFirstNode().getUuidString());
+request.setCmdType(ContainerProtos.Type.CloseContainer);
+request.setContainerID(containerID);
+request.setCloseContainer(
+ContainerProtos.CloseContainerRequestProto.getDefaultInstance());
+xceiverClient.sendCommand(request.build());
+  } catch (IOException e) {
+failCount.incrementAndGet();
+  }
+};
+Runnable r2 = () -> {
+  try {
+xceiverClient.sendCommand(ContainerTestHelper
+.getWriteChunkRequest(pipeline, omKeyLocationInfo.getBlockID(),
+1024, new Random().nextInt()));
+latch.countDown();
+  } catch (IOException e) {
+latch.countDown();
+if (!(HddsClientUtils
+.checkForException(e) instanceof ContainerNotOpenException)) {
+  failCount.incrementAndGet();
+}
+  }
+};
+
+for (int i=0 ; i < 100; i++) {
+  count++;
+  new Thread(r2).start();
+}
+
+new Thread(r1).start();
+latch.await(600, TimeUnit.SECONDS);
+if (failCount.get() > 0) {
+  fail("testWriteStateMachineDataIdempotencyWithClosedContainer failed");
+}
+Assert.assertTrue(
+cluster.getHddsDatanodes().get(0).getDatanodeStateMachine()
+.getContainer().getContainerSet().getContainer(containerID)
+.getContainerState()
+== ContainerProtos.ContainerDataProto.State.CLOSED);
+Assert.assertTrue(stateMachine.isStateMachineHealthy());
+try {
+  stateMachine.takeSnapshot();
+} catch (IOException ioe) {
+  Assert.fail("Exception should not be thrown");
+}
+FileInfo latestSnapshot = 

[GitHub] [hadoop] mukul1987 commented on a change in pull request #1641: HDDS-2281. ContainerStateMachine#handleWriteChunk should ignore close container exception.

2019-10-18 Thread GitBox
mukul1987 commented on a change in pull request #1641: HDDS-2281. 
ContainerStateMachine#handleWriteChunk should ignore close container exception.
URL: https://github.com/apache/hadoop/pull/1641#discussion_r336426044
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachineFailures.java
 ##
 @@ -418,6 +424,108 @@ public void 
testApplyTransactionIdempotencyWithClosedContainer()
 Assert.assertFalse(snapshot.getPath().equals(latestSnapshot.getPath()));
   }
 
+  // The test injects multiple write chunk requests along with closed container
+  // request thereby inducing a situation where a writeStateMachine call
+  // gets executed when the closed container apply completes thereby
+  // failing writeStateMachine call. In any case, our stateMachine should
+  // not be marked unhealthy and pipeline should not fail if container gets
+  // closed here.
+  @Test
+  public void testWriteStateMachineDataIdempotencyWithClosedContainer()
+  throws Exception {
+OzoneOutputStream key =
+objectStore.getVolume(volumeName).getBucket(bucketName)
+.createKey("ratis-1", 1024, ReplicationType.RATIS,
+ReplicationFactor.ONE, new HashMap<>());
+// First write and flush creates a container in the datanode
+key.write("ratis".getBytes());
+key.flush();
+key.write("ratis".getBytes());
+KeyOutputStream groupOutputStream = (KeyOutputStream) 
key.getOutputStream();
+List<OmKeyLocationInfo> locationInfoList =
+groupOutputStream.getLocationInfoList();
+Assert.assertEquals(1, locationInfoList.size());
+OmKeyLocationInfo omKeyLocationInfo = locationInfoList.get(0);
+ContainerData containerData =
+cluster.getHddsDatanodes().get(0).getDatanodeStateMachine()
+.getContainer().getContainerSet()
+.getContainer(omKeyLocationInfo.getContainerID())
+.getContainerData();
+Assert.assertTrue(containerData instanceof KeyValueContainerData);
+key.close();
+ContainerStateMachine stateMachine =
+(ContainerStateMachine) ContainerTestHelper.getStateMachine(cluster);
+SimpleStateMachineStorage storage =
+(SimpleStateMachineStorage) stateMachine.getStateMachineStorage();
+Path parentPath = storage.findLatestSnapshot().getFile().getPath();
+// Since the snapshot threshold is set to 1, since there are
+// applyTransactions, we should see snapshots
+Assert.assertTrue(parentPath.getParent().toFile().listFiles().length > 0);
+FileInfo snapshot = storage.findLatestSnapshot().getFile();
+Assert.assertNotNull(snapshot);
+long containerID = omKeyLocationInfo.getContainerID();
+Pipeline pipeline = cluster.getStorageContainerLocationClient()
+.getContainerWithPipeline(containerID).getPipeline();
+XceiverClientSpi xceiverClient =
+xceiverClientManager.acquireClient(pipeline);
+CountDownLatch latch = new CountDownLatch(100);
+int count = 0;
+AtomicInteger failCount = new AtomicInteger(0);
+Runnable r1 = () -> {
+  try {
+ContainerProtos.ContainerCommandRequestProto.Builder request =
+ContainerProtos.ContainerCommandRequestProto.newBuilder();
+request.setDatanodeUuid(pipeline.getFirstNode().getUuidString());
+request.setCmdType(ContainerProtos.Type.CloseContainer);
+request.setContainerID(containerID);
+request.setCloseContainer(
+ContainerProtos.CloseContainerRequestProto.getDefaultInstance());
+xceiverClient.sendCommand(request.build());
+  } catch (IOException e) {
+failCount.incrementAndGet();
+  }
+};
+Runnable r2 = () -> {
+  try {
+xceiverClient.sendCommand(ContainerTestHelper
+.getWriteChunkRequest(pipeline, omKeyLocationInfo.getBlockID(),
+1024, new Random().nextInt()));
+latch.countDown();
+  } catch (IOException e) {
+latch.countDown();
+if (!(HddsClientUtils
+.checkForException(e) instanceof ContainerNotOpenException)) {
+  failCount.incrementAndGet();
+}
+  }
+};
+
+for (int i=0 ; i < 100; i++) {
+  count++;
+  new Thread(r2).start();
+}
+
+new Thread(r1).start();
+latch.await(600, TimeUnit.SECONDS);
+if (failCount.get() > 0) {
+  fail("testWriteStateMachineDataIdempotencyWithClosedContainer failed");
+}
+Assert.assertTrue(
+cluster.getHddsDatanodes().get(0).getDatanodeStateMachine()
+.getContainer().getContainerSet().getContainer(containerID)
+.getContainerState()
+== ContainerProtos.ContainerDataProto.State.CLOSED);
+Assert.assertTrue(stateMachine.isStateMachineHealthy());
+try {
+  stateMachine.takeSnapshot();
+} catch (IOException ioe) {
+  Assert.fail("Exception should not be thrown");
+}
+FileInfo latestSnapshot = 

[GitHub] [hadoop] mukul1987 commented on a change in pull request #1641: HDDS-2281. ContainerStateMachine#handleWriteChunk should ignore close container exception.

2019-10-18 Thread GitBox
mukul1987 commented on a change in pull request #1641: HDDS-2281. 
ContainerStateMachine#handleWriteChunk should ignore close container exception.
URL: https://github.com/apache/hadoop/pull/1641#discussion_r336425337
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachineFailures.java
 ##
 @@ -418,6 +424,108 @@ public void 
testApplyTransactionIdempotencyWithClosedContainer()
 Assert.assertFalse(snapshot.getPath().equals(latestSnapshot.getPath()));
   }
 
+  // The test injects multiple write chunk requests along with closed container
+  // request thereby inducing a situation where a writeStateMachine call
+  // gets executed when the closed container apply completes thereby
+  // failing writeStateMachine call. In any case, our stateMachine should
+  // not be marked unhealthy and pipeline should not fail if container gets
+  // closed here.
+  @Test
+  public void testWriteStateMachineDataIdempotencyWithClosedContainer()
+  throws Exception {
+OzoneOutputStream key =
+objectStore.getVolume(volumeName).getBucket(bucketName)
+.createKey("ratis-1", 1024, ReplicationType.RATIS,
+ReplicationFactor.ONE, new HashMap<>());
+// First write and flush creates a container in the datanode
+key.write("ratis".getBytes());
+key.flush();
+key.write("ratis".getBytes());
+KeyOutputStream groupOutputStream = (KeyOutputStream) 
key.getOutputStream();
+List<OmKeyLocationInfo> locationInfoList =
+groupOutputStream.getLocationInfoList();
+Assert.assertEquals(1, locationInfoList.size());
+OmKeyLocationInfo omKeyLocationInfo = locationInfoList.get(0);
+ContainerData containerData =
+cluster.getHddsDatanodes().get(0).getDatanodeStateMachine()
+.getContainer().getContainerSet()
+.getContainer(omKeyLocationInfo.getContainerID())
+.getContainerData();
+Assert.assertTrue(containerData instanceof KeyValueContainerData);
+key.close();
+ContainerStateMachine stateMachine =
+(ContainerStateMachine) ContainerTestHelper.getStateMachine(cluster);
+SimpleStateMachineStorage storage =
+(SimpleStateMachineStorage) stateMachine.getStateMachineStorage();
+Path parentPath = storage.findLatestSnapshot().getFile().getPath();
+// Since the snapshot threshold is set to 1, since there are
+// applyTransactions, we should see snapshots
+Assert.assertTrue(parentPath.getParent().toFile().listFiles().length > 0);
+FileInfo snapshot = storage.findLatestSnapshot().getFile();
+Assert.assertNotNull(snapshot);
+long containerID = omKeyLocationInfo.getContainerID();
+Pipeline pipeline = cluster.getStorageContainerLocationClient()
+.getContainerWithPipeline(containerID).getPipeline();
+XceiverClientSpi xceiverClient =
+xceiverClientManager.acquireClient(pipeline);
+CountDownLatch latch = new CountDownLatch(100);
+int count = 0;
+AtomicInteger failCount = new AtomicInteger(0);
+Runnable r1 = () -> {
+  try {
+ContainerProtos.ContainerCommandRequestProto.Builder request =
+ContainerProtos.ContainerCommandRequestProto.newBuilder();
+request.setDatanodeUuid(pipeline.getFirstNode().getUuidString());
+request.setCmdType(ContainerProtos.Type.CloseContainer);
+request.setContainerID(containerID);
+request.setCloseContainer(
+ContainerProtos.CloseContainerRequestProto.getDefaultInstance());
+xceiverClient.sendCommand(request.build());
+  } catch (IOException e) {
+failCount.incrementAndGet();
+  }
+};
+Runnable r2 = () -> {
+  try {
+xceiverClient.sendCommand(ContainerTestHelper
+.getWriteChunkRequest(pipeline, omKeyLocationInfo.getBlockID(),
+1024, new Random().nextInt()));
+latch.countDown();
+  } catch (IOException e) {
+latch.countDown();
+if (!(HddsClientUtils
+.checkForException(e) instanceof ContainerNotOpenException)) {
+  failCount.incrementAndGet();
+}
+  }
+};
+
+for (int i=0 ; i < 100; i++) {
+  count++;
+  new Thread(r2).start();
 
 Review comment:
   we are leaking threads here, lets join on them
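
   A minimal sketch of that suggestion, reusing the test's existing `r1`, `r2`,
   `count` and `latch` (assumes `java.util.List`/`java.util.ArrayList` imports;
   not a drop-in patch):
   ```java
   // Keep a reference to every spawned thread so it can be joined.
   List<Thread> writers = new ArrayList<>();
   for (int i = 0; i < 100; i++) {
     count++;
     Thread t = new Thread(r2);
     writers.add(t);
     t.start();
   }
   Thread closer = new Thread(r1);
   closer.start();
   latch.await(600, TimeUnit.SECONDS);
   closer.join();
   for (Thread t : writers) {
     t.join();          // no threads leak past the end of the test body
   }
   ```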


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1663: WIP: Better memory info parse patterns, so that only lines with memory slo…

2019-10-18 Thread GitBox
hadoop-yetus commented on issue #1663: WIP: Better memory info parse patterns, 
so that only lines with memory slo…
URL: https://github.com/apache/hadoop/pull/1663#issuecomment-543649637
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 51 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1099 | trunk passed |
   | +1 | compile | 42 | trunk passed |
   | +1 | checkstyle | 35 | trunk passed |
   | +1 | mvnsite | 45 | trunk passed |
   | +1 | shadedclient | 824 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 47 | trunk passed |
   | 0 | spotbugs | 101 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 100 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 40 | the patch passed |
   | +1 | compile | 36 | the patch passed |
   | +1 | javac | 36 | the patch passed |
   | -0 | checkstyle | 27 | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: 
The patch generated 1 new + 37 unchanged - 1 fixed = 38 total (was 38) |
   | +1 | mvnsite | 38 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 787 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 46 | the patch passed |
   | +1 | findbugs | 106 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 235 | hadoop-yarn-common in the patch passed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 3683 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1663/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1663 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e3486a2c0b4e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 54dc6b7 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1663/1/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1663/1/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1663/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1662: YARN-9913. In YARN ui2 attempt container tab, The Container's ElapsedTime of running Application is incorrect when the browser and the yarn ser

2019-10-18 Thread GitBox
hadoop-yetus commented on issue #1662: YARN-9913. In YARN ui2 attempt container 
tab, The Container's ElapsedTime of  running Application is incorrect when the 
browser and the yarn server are in different timezones.
URL: https://github.com/apache/hadoop/pull/1662#issuecomment-543631601
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1312 | trunk passed |
   | +1 | shadedclient | 982 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 12 | the patch passed |
   | -1 | jshint | 220 | The patch generated 1761 new + 0 unchanged - 0 fixed = 
1761 total (was 0) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 866 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3419 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1662/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1662 |
   | Optional Tests | dupname asflicense shadedclient jshint |
   | uname | Linux dad8fc6eeafa 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 54dc6b7 |
   | jshint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1662/1/artifact/out/diff-patch-jshint.txt
 |
   | Max. process+thread count | 306 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1662/1/console |
   | versions | git=2.7.4 maven=3.3.9 jshint=2.10.2 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jvimr opened a new pull request #1663: WIP: Better memory info parse patterns, so that only lines with memory slo…

2019-10-18 Thread GitBox
jvimr opened a new pull request #1663: WIP: Better memory info parse patterns, 
so that only lines with memory slo…
URL: https://github.com/apache/hadoop/pull/1663
 
 
   …ts are parsed and others (like VmFlags) are ignored
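
   A rough illustration of the idea (the pattern and class below are assumptions
   for this description, not the code in the patch): accept only
   `Name: <number> kB` lines from the memory info and skip lines such as
   `VmFlags:` that carry no size.
   ```java
   import java.util.regex.Matcher;
   import java.util.regex.Pattern;

   public class SmapsLineFilter {
     // Matches e.g. "Rss:          4096 kB" but not "VmFlags: rd ex mr mw me".
     private static final Pattern MEM_INFO_LINE =
         Pattern.compile("^([A-Z][A-Za-z_]+):\\s+(\\d+)\\s+kB\\s*$");

     public static void main(String[] args) {
       String[] lines = {"Rss:          4096 kB", "VmFlags: rd ex mr mw me"};
       for (String line : lines) {
         Matcher m = MEM_INFO_LINE.matcher(line);
         if (m.matches()) {
           System.out.println(m.group(1) + " = " + m.group(2) + " kB");
         }
         // Lines without a "<number> kB" slot are simply ignored.
       }
     }
   }
   ```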
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
    and set the pull request title so that it starts with the corresponding
    JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] cjn082030 opened a new pull request #1662: YARN-9913. In YARN ui2 attempt container tab, The Container's ElapsedTime of running Application is incorrect when the browser and the yarn

2019-10-18 Thread GitBox
cjn082030 opened a new pull request #1662: YARN-9913. In YARN ui2 attempt 
container tab, The Container's ElapsedTime of  running Application is incorrect 
when the browser and the yarn server are in different timezones.
URL: https://github.com/apache/hadoop/pull/1662
 
 
   ### What is the problem
   In the YARN UI2 attempt container tab, the container's elapsed time for a 
   running application is incorrect when the browser and the YARN server are 
   in different timezones.
   
   ### How to fix
   Use the elapsedTime value returned by the REST API as the container's 
   elapsed time, instead of computing it from Date.now() in the browser.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org