[jira] [Work logged] (HADOOP-17371) Bump Jetty to the latest version 9.4.35

2021-04-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17371?focusedWorklogId=580786&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580786
 ]

ASF GitHub Bot logged work on HADOOP-17371:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 05:58
Start Date: 12/Apr/21 05:58
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on pull request #2590:
URL: https://github.com/apache/hadoop/pull/2590#issuecomment-817506936


   PR #2879 to supersede this one


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580786)
Time Spent: 5h  (was: 4h 50m)

> Bump Jetty to the latest version 9.4.35
> ---
>
> Key: HADOOP-17371
> URL: https://issues.apache.org/jira/browse/HADOOP-17371
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> The Hadoop 3 branches are on 9.4.20. We should update to the latest version: 
> 9.4.34



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17371) Bump Jetty to the latest version 9.4.35

2021-04-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17371?focusedWorklogId=580787&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580787
 ]

ASF GitHub Bot logged work on HADOOP-17371:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 05:58
Start Date: 12/Apr/21 05:58
Worklog Time Spent: 10m 
  Work Description: jojochuang closed pull request #2590:
URL: https://github.com/apache/hadoop/pull/2590


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580787)
Time Spent: 5h 10m  (was: 5h)

> Bump Jetty to the latest version 9.4.35
> ---
>
> Key: HADOOP-17371
> URL: https://issues.apache.org/jira/browse/HADOOP-17371
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> The Hadoop 3 branches are on 9.4.20. We should update to the latest version: 
> 9.4.34



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang closed pull request #2590: [branch-3.2] Backport HADOOP-17371. Bump Jetty to the latest version 9.4.35.

2021-04-11 Thread GitBox


jojochuang closed pull request #2590:
URL: https://github.com/apache/hadoop/pull/2590


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on pull request #2590: [branch-3.2] Backport HADOOP-17371. Bump Jetty to the latest version 9.4.35.

2021-04-11 Thread GitBox


jojochuang commented on pull request #2590:
URL: https://github.com/apache/hadoop/pull/2590#issuecomment-817506936


   PR #2879 to supersede this one


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17614) Bump netty to the latest 4.1.61

2021-04-11 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-17614:
-
Fix Version/s: 3.1.5

> Bump netty to the latest 4.1.61
> ---
>
> Key: HADOOP-17614
> URL: https://issues.apache.org/jira/browse/HADOOP-17614
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> For more details: https://netty.io/news/2021/03/09/4-1-60-Final.html
> Actually, just yesterday a new version, 4.1.61, was released. 
> https://netty.io/news/2021/03/30/4-1-61-Final.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17614) Bump netty to the latest 4.1.61

2021-04-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17614?focusedWorklogId=580785&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580785
 ]

ASF GitHub Bot logged work on HADOOP-17614:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 05:55
Start Date: 12/Apr/21 05:55
Worklog Time Spent: 10m 
  Work Description: jojochuang merged pull request #2871:
URL: https://github.com/apache/hadoop/pull/2871


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580785)
Time Spent: 2.5h  (was: 2h 20m)

> Bump netty to the latest 4.1.61
> ---
>
> Key: HADOOP-17614
> URL: https://issues.apache.org/jira/browse/HADOOP-17614
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.2.3
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> For more details: https://netty.io/news/2021/03/09/4-1-60-Final.html
> Actually, just yesterday a new version, 4.1.61, was released. 
> https://netty.io/news/2021/03/30/4-1-61-Final.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang merged pull request #2871: HADOOP-17614. Bump netty to the latest 4.1.61.

2021-04-11 Thread GitBox


jojochuang merged pull request #2871:
URL: https://github.com/apache/hadoop/pull/2871


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17611) Distcp parallel file copy breaks the modification time

2021-04-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17611?focusedWorklogId=580780&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580780
 ]

ASF GitHub Bot logged work on HADOOP-17611:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 05:27
Start Date: 12/Apr/21 05:27
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #2892:
URL: https://github.com/apache/hadoop/pull/2892#discussion_r611335147



##########
File path: hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
##########
@@ -635,7 +635,10 @@ private void concatFileChunks(Configuration conf, Path sourceFile,
         ++i;
       }
     }
+    long firstChunkLastModifiedTs = dstfs.getFileStatus(firstChunkFile)
+        .getModificationTime();
     dstfs.concat(firstChunkFile, restChunkFiles);
+    dstfs.setTimes(firstChunkFile, firstChunkLastModifiedTs, -1);

Review comment:
   I think you should update mtime after rename() (below) instead. The HDFS 
rename updates mtime too. Not sure about other file system implementations, 
but it's safe to assume that is the case.
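
A minimal sketch of that ordering, using the same FileSystem calls as the diff above (targetFile stands in for the post-rename destination and is illustrative):

```
// Sketch only: capture the mtime once, then restore it after the last
// metadata-mutating call, since concat and (on HDFS) rename both bump
// the target's modification time.
long preservedMtime = dstfs.getFileStatus(firstChunkFile).getModificationTime();

dstfs.concat(firstChunkFile, restChunkFiles);  // updates mtime
dstfs.rename(firstChunkFile, targetFile);      // HDFS updates mtime again

// Restore last; the -1 leaves the access time unchanged.
dstfs.setTimes(targetFile, preservedMtime, -1);
```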




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580780)
Time Spent: 40m  (was: 0.5h)

> Distcp parallel file copy breaks the modification time
> --
>
> Key: HADOOP-17611
> URL: https://issues.apache.org/jira/browse/HADOOP-17611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Adam Maroti
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The commit HADOOP-11794. Enable distcp to copy blocks in parallel. 
> (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of 
> large files.
>  
> In CopyCommitter.java, inside concatFileChunks, Filesystem.concat is called, 
> which changes the modification time; therefore the modification times of files 
> copied by distcp will not match the source files. However, this only occurs 
> for files large enough that distcp copies them in split chunks.
> In concatFileChunks, extract the modification time before calling concat and 
> apply it to the concatenated result file after the concat (probably best 
> before, not after, the rename()).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17611) Distcp parallel file copy breaks the modification time

2021-04-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17611?focusedWorklogId=580779&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580779
 ]

ASF GitHub Bot logged work on HADOOP-17611:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 05:26
Start Date: 12/Apr/21 05:26
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #2892:
URL: https://github.com/apache/hadoop/pull/2892#discussion_r611335147



##########
File path: hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
##########
@@ -635,7 +635,10 @@ private void concatFileChunks(Configuration conf, Path sourceFile,
         ++i;
       }
     }
+    long firstChunkLastModifiedTs = dstfs.getFileStatus(firstChunkFile)
+        .getModificationTime();
     dstfs.concat(firstChunkFile, restChunkFiles);
+    dstfs.setTimes(firstChunkFile, firstChunkLastModifiedTs, -1);

Review comment:
   I think you should update mtime after rename(). The HDFS rename updates 
mtime too. Not sure about other file system implementations, but it's safe 
to assume that is the case.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580779)
Time Spent: 0.5h  (was: 20m)

> Distcp parallel file copy breaks the modification time
> --
>
> Key: HADOOP-17611
> URL: https://issues.apache.org/jira/browse/HADOOP-17611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Adam Maroti
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The commit HADOOP-11794. Enable distcp to copy blocks in parallel. 
> (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of 
> large files.
>  
> In CopyCommitter.java, inside concatFileChunks, Filesystem.concat is called, 
> which changes the modification time; therefore the modification times of files 
> copied by distcp will not match the source files. However, this only occurs 
> for files large enough that distcp copies them in split chunks.
> In concatFileChunks, extract the modification time before calling concat and 
> apply it to the concatenated result file after the concat (probably best 
> before, not after, the rename()).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on a change in pull request #2892: HADOOP-17611. Distcp parallel file copy should retain first chunk modifiedTime after concat

2021-04-11 Thread GitBox


jojochuang commented on a change in pull request #2892:
URL: https://github.com/apache/hadoop/pull/2892#discussion_r611335147



##########
File path: hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
##########
@@ -635,7 +635,10 @@ private void concatFileChunks(Configuration conf, Path sourceFile,
         ++i;
       }
     }
+    long firstChunkLastModifiedTs = dstfs.getFileStatus(firstChunkFile)
+        .getModificationTime();
     dstfs.concat(firstChunkFile, restChunkFiles);
+    dstfs.setTimes(firstChunkFile, firstChunkLastModifiedTs, -1);

Review comment:
   I think you should update mtime after rename() (below) instead. The HDFS 
rename updates mtime too. Not sure about other file system implementations, 
but it's safe to assume that is the case.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on a change in pull request #2892: HADOOP-17611. Distcp parallel file copy should retain first chunk modifiedTime after concat

2021-04-11 Thread GitBox


jojochuang commented on a change in pull request #2892:
URL: https://github.com/apache/hadoop/pull/2892#discussion_r611335147



##########
File path: hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
##########
@@ -635,7 +635,10 @@ private void concatFileChunks(Configuration conf, Path sourceFile,
         ++i;
       }
     }
+    long firstChunkLastModifiedTs = dstfs.getFileStatus(firstChunkFile)
+        .getModificationTime();
     dstfs.concat(firstChunkFile, restChunkFiles);
+    dstfs.setTimes(firstChunkFile, firstChunkLastModifiedTs, -1);

Review comment:
   I think you should update mtime after rename(). The HDFS rename updates 
mtime too. Not sure about other file system implementations, but it's safe 
to assume that is the case.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17618) ABFS: Partially obfuscate SAS object IDs in Logs

2021-04-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17618?focusedWorklogId=580777&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580777
 ]

ASF GitHub Bot logged work on HADOOP-17618:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 05:05
Start Date: 12/Apr/21 05:05
Worklog Time Spent: 10m 
  Work Description: bilaharith commented on a change in pull request #2845:
URL: https://github.com/apache/hadoop/pull/2845#discussion_r611329081



##########
File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
##########
@@ -227,11 +224,26 @@ public String getLogString() {
           .append(" m=")
           .append(method)
           .append(" u=")
-          .append(getSignatureMaskedEncodedUrl());
+          .append(getMaskedEncodedUrl());
 
     return sb.toString();
   }
 
+  public String getMaskedUrl() {
+    if (maskedUrl != null) {
+      return maskedUrl;
+    }
+    maskedUrl = UriUtils.getMaskedUrl(url.toString());
+    return maskedUrl;
+  }
+
+  public String getMaskedEncodedUrl() {
+    if (maskedEncodedUrl != null) {
+      return maskedEncodedUrl;
+    }
+    return UriUtils.encodedUrlStr(getMaskedUrl());

Review comment:
   Assign the value to this.maskedEncodedUrl first, then return that field, so 
the computed value is actually cached.
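
A minimal sketch of the suggested fix, with the field and helper names taken from the diff above:

```
public String getMaskedEncodedUrl() {
  if (this.maskedEncodedUrl == null) {
    // Compute once and cache in the field, mirroring getMaskedUrl().
    this.maskedEncodedUrl = UriUtils.encodedUrlStr(getMaskedUrl());
  }
  return this.maskedEncodedUrl;
}
```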




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580777)
Time Spent: 2.5h  (was: 2h 20m)

> ABFS: Partially obfuscate SAS object IDs in Logs
> 
>
> Key: HADOOP-17618
> URL: https://issues.apache.org/jira/browse/HADOOP-17618
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Sumangala Patki
>Assignee: Sumangala Patki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Delegation SAS tokens are created using various parameters for specifying 
> details such as permissions and validity. The requests are logged, along with 
> values of all the query parameters. This change will partially mask values 
> logged for the following object IDs representing the security principal: 
> skoid, saoid, suoid
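
As an illustration only of what "partially mask" could look like (this is not the actual UriUtils logic; the visible prefix/suffix lengths are assumptions):

```
// Hypothetical helper: keep the first and last four characters of an
// object ID (skoid/saoid/suoid) and replace the middle with a fixed mask.
static String partiallyMask(String objectId) {
  if (objectId == null || objectId.length() <= 8) {
    return "XXXX";  // too short to expose any part safely
  }
  return objectId.substring(0, 4) + "XXXX"
      + objectId.substring(objectId.length() - 4);
}
```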



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17618) ABFS: Partially obfuscate SAS object IDs in Logs

2021-04-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17618?focusedWorklogId=580776&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580776
 ]

ASF GitHub Bot logged work on HADOOP-17618:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 05:04
Start Date: 12/Apr/21 05:04
Worklog Time Spent: 10m 
  Work Description: bilaharith commented on a change in pull request #2845:
URL: https://github.com/apache/hadoop/pull/2845#discussion_r611329008



##########
File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
##########
@@ -227,11 +224,26 @@ public String getLogString() {
           .append(" m=")
           .append(method)
           .append(" u=")
-          .append(getSignatureMaskedEncodedUrl());
+          .append(getMaskedEncodedUrl());
 
     return sb.toString();
   }
 
+  public String getMaskedUrl() {
+    if (maskedUrl != null) {
+      return maskedUrl;

Review comment:
   Use the `this.` prefix here, for consistency.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580776)
Time Spent: 2h 20m  (was: 2h 10m)

> ABFS: Partially obfuscate SAS object IDs in Logs
> 
>
> Key: HADOOP-17618
> URL: https://issues.apache.org/jira/browse/HADOOP-17618
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Sumangala Patki
>Assignee: Sumangala Patki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Delegation SAS tokens are created using various parameters for specifying 
> details such as permissions and validity. The requests are logged, along with 
> values of all the query parameters. This change will partially mask values 
> logged for the following object IDs representing the security principal: 
> skoid, saoid, suoid



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bilaharith commented on a change in pull request #2845: HADOOP-17618. ABFS: Partially obfuscate SAS object IDs in Logs

2021-04-11 Thread GitBox


bilaharith commented on a change in pull request #2845:
URL: https://github.com/apache/hadoop/pull/2845#discussion_r611329081



##########
File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
##########
@@ -227,11 +224,26 @@ public String getLogString() {
           .append(" m=")
           .append(method)
           .append(" u=")
-          .append(getSignatureMaskedEncodedUrl());
+          .append(getMaskedEncodedUrl());
 
     return sb.toString();
   }
 
+  public String getMaskedUrl() {
+    if (maskedUrl != null) {
+      return maskedUrl;
+    }
+    maskedUrl = UriUtils.getMaskedUrl(url.toString());
+    return maskedUrl;
+  }
+
+  public String getMaskedEncodedUrl() {
+    if (maskedEncodedUrl != null) {
+      return maskedEncodedUrl;
+    }
+    return UriUtils.encodedUrlStr(getMaskedUrl());

Review comment:
   Assign the value to this.maskedEncodedUrl first, then return that field, so 
the computed value is actually cached.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bilaharith commented on a change in pull request #2845: HADOOP-17618. ABFS: Partially obfuscate SAS object IDs in Logs

2021-04-11 Thread GitBox


bilaharith commented on a change in pull request #2845:
URL: https://github.com/apache/hadoop/pull/2845#discussion_r611329008



##########
File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java
##########
@@ -227,11 +224,26 @@ public String getLogString() {
           .append(" m=")
           .append(method)
           .append(" u=")
-          .append(getSignatureMaskedEncodedUrl());
+          .append(getMaskedEncodedUrl());
 
     return sb.toString();
   }
 
+  public String getMaskedUrl() {
+    if (maskedUrl != null) {
+      return maskedUrl;

Review comment:
   Use the `this.` prefix here, for consistency.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17548) ABFS: Toggle Store Mkdirs request overwrite parameter

2021-04-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17548?focusedWorklogId=580775&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580775
 ]

ASF GitHub Bot logged work on HADOOP-17548:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 05:02
Start Date: 12/Apr/21 05:02
Worklog Time Spent: 10m 
  Work Description: sumangala-patki commented on pull request #2781:
URL: https://github.com/apache/hadoop/pull/2781#issuecomment-817481302


   TEST RESULTS
   
   HNS Account Location: East US 2
   NonHNS Account Location: East US 2, Central US
   
   ```
   HNS-OAuth
   
   [INFO] Tests run: 93, Failures: 0, Errors: 0, Skipped: 0
   [ERROR]   
ITestAzureBlobFileSystemFileStatus.testLastModifiedTime:140->Assert.assertTrue:42->Assert.fail:89
 lastModifiedTime should be before createEndTime
   [ERROR] Tests run: 513, Failures: 1, Errors: 0, Skipped: 70
   [ERROR] Errors: 
   [ERROR]   
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:78
 » TestTimedOut
   [ERROR]   
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:631
 » TestTimedOut
   [ERROR] Tests run: 261, Failures: 0, Errors: 2, Skipped: 50
   
   HNS-SharedKey
   
   [INFO] Tests run: 93, Failures: 0, Errors: 0, Skipped: 0
   [ERROR] Failures: 
   [ERROR]   
ITestAzureBlobFileSystemFileStatus.testLastModifiedTime:140->Assert.assertTrue:42->Assert.fail:89
 lastModifiedTime should be before createEndTime
   [ERROR] Tests run: 513, Failures: 1, Errors: 0, Skipped: 26
   [ERROR] Errors: 
   [ERROR]   
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:78
 » TestTimedOut
   [ERROR]   
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:631
 » TestTimedOut
   [ERROR]   
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:631
 » TestTimedOut 
   [ERROR] Tests run: 261, Failures: 0, Errors: 3, Skipped: 40
   
   NonHNS-SharedKey
   
   [INFO] Tests run: 93, Failures: 0, Errors: 0, Skipped: 0
   [ERROR] Failures: 
   [ERROR]   
ITestAzureBlobFileSystemFileStatus.testLastModifiedTime:140->Assert.assertTrue:42->Assert.fail:89
 lastModifiedTime should be before createEndTime
   [ERROR] Tests run: 513, Failures: 1, Errors: 0, Skipped: 248
   [ERROR] Errors: 
   [ERROR]   
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:78
 » TestTimedOut
   [ERROR]   
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:631
 » TestTimedOut
   [ERROR]   
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:631
 » TestTimedOut
   [ERROR] Tests run: 261, Failures: 0, Errors: 3, Skipped: 40
   ```
   
   Note: The timeouts in the above tests are commonly observed due to network 
issues and are not related to this PR. The LMT failure is caused by a difference 
in clock time between client and server (a transient error).
   Existing JIRAs for tracking the timeout failures in these tests: 
[AbstractContractDistCpTest](https://issues.apache.org/jira/browse/HADOOP-17628),
 [ITestAbfsReadWriteAndSeek](https://issues.apache.org/jira/browse/HADOOP-15702)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580775)
Time Spent: 1h 50m  (was: 1h 40m)

> ABFS: Toggle Store Mkdirs request overwrite parameter
> -
>
> Key: HADOOP-17548
> URL: https://issues.apache.org/jira/browse/HADOOP-17548
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Sumangala Patki
>Assignee: Sumangala Patki
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The call to mkdirs with overwrite set to true results in an additional call 
> to set properties (LMT update, etc) at the backend, which is not required for 
> the HDFS scenario. Moreover, mkdirs on an existing file path returns success. 
> This PR provides an option to set the overwrite parameter to false, and 
> ensures that mkdirs on a file throws an exception.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sumangala-patki commented on pull request #2781: HADOOP-17548. ABFS: Toggle Store Mkdirs request overwrite parameter (#2729)

2021-04-11 Thread GitBox


sumangala-patki commented on pull request #2781:
URL: https://github.com/apache/hadoop/pull/2781#issuecomment-817481302


   TEST RESULTS
   
   HNS Account Location: East US 2
   NonHNS Account Location: East US 2, Central US
   
   ```
   HNS-OAuth
   
   [INFO] Tests run: 93, Failures: 0, Errors: 0, Skipped: 0
   [ERROR]   
ITestAzureBlobFileSystemFileStatus.testLastModifiedTime:140->Assert.assertTrue:42->Assert.fail:89
 lastModifiedTime should be before createEndTime
   [ERROR] Tests run: 513, Failures: 1, Errors: 0, Skipped: 70
   [ERROR] Errors: 
   [ERROR]   
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:78
 » TestTimedOut
   [ERROR]   
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:631
 » TestTimedOut
   [ERROR] Tests run: 261, Failures: 0, Errors: 2, Skipped: 50
   
   HNS-SharedKey
   
   [INFO] Tests run: 93, Failures: 0, Errors: 0, Skipped: 0
   [ERROR] Failures: 
   [ERROR]   
ITestAzureBlobFileSystemFileStatus.testLastModifiedTime:140->Assert.assertTrue:42->Assert.fail:89
 lastModifiedTime should be before createEndTime
   [ERROR] Tests run: 513, Failures: 1, Errors: 0, Skipped: 26
   [ERROR] Errors: 
   [ERROR]   
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:78
 » TestTimedOut
   [ERROR]   
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:631
 » TestTimedOut
   [ERROR]   
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:631
 » TestTimedOut 
   [ERROR] Tests run: 261, Failures: 0, Errors: 3, Skipped: 40
   
   NonHNS-SharedKey
   
   [INFO] Tests run: 93, Failures: 0, Errors: 0, Skipped: 0
   [ERROR] Failures: 
   [ERROR]   
ITestAzureBlobFileSystemFileStatus.testLastModifiedTime:140->Assert.assertTrue:42->Assert.fail:89
 lastModifiedTime should be before createEndTime
   [ERROR] Tests run: 513, Failures: 1, Errors: 0, Skipped: 248
   [ERROR] Errors: 
   [ERROR]   
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:78
 » TestTimedOut
   [ERROR]   
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:631
 » TestTimedOut
   [ERROR]   
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:631
 » TestTimedOut
   [ERROR] Tests run: 261, Failures: 0, Errors: 3, Skipped: 40
   ```
   
   Note: The timeouts in the above tests are commonly observed due to network 
issues and are not related to this PR. The LMT failure is caused by a difference 
in clock time between client and server (a transient error).
   Existing JIRAs for tracking the timeout failures in these tests: 
[AbstractContractDistCpTest](https://issues.apache.org/jira/browse/HADOOP-17628),
 [ITestAbfsReadWriteAndSeek](https://issues.apache.org/jira/browse/HADOOP-15702)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17608) Fix TestKMS failure

2021-04-11 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17608.

Fix Version/s: 3.4.0
   3.3.1
   Resolution: Fixed

Merged #2880 into trunk and branch-3.3.

> Fix TestKMS failure
> ---
>
> Key: HADOOP-17608
> URL: https://issues.apache.org/jira/browse/HADOOP-17608
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: flaky-test, pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/460/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt]
> The following https tests are flaky:
>  * testStartStopHttpsPseudo
>  * testStartStopHttpsKerberos
>  * testDelegationTokensOpsHttpsPseudo
> {noformat}
> [ERROR] 
> testStartStopHttpsPseudo(org.apache.hadoop.crypto.key.kms.server.TestKMS)  
> Time elapsed: 1.354 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:553)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:534)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:258)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:235)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:230)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStop(TestKMS.java:534)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpsPseudo(TestKMS.java:634){noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17608) Fix TestKMS failure

2021-04-11 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17608:
---
Summary: Fix TestKMS failure  (was: TestKMS is flaky)

> Fix TestKMS failure
> ---
>
> Key: HADOOP-17608
> URL: https://issues.apache.org/jira/browse/HADOOP-17608
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: flaky-test, pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/460/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt]
> The following https tests are flaky:
>  * testStartStopHttpsPseudo
>  * testStartStopHttpsKerberos
>  * testDelegationTokensOpsHttpsPseudo
> {noformat}
> [ERROR] 
> testStartStopHttpsPseudo(org.apache.hadoop.crypto.key.kms.server.TestKMS)  
> Time elapsed: 1.354 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:553)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:534)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:258)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:235)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:230)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStop(TestKMS.java:534)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpsPseudo(TestKMS.java:634){noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17608) TestKMS is flaky

2021-04-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17608?focusedWorklogId=580763&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580763
 ]

ASF GitHub Bot logged work on HADOOP-17608:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 03:54
Start Date: 12/Apr/21 03:54
Worklog Time Spent: 10m 
  Work Description: aajisaka merged pull request #2880:
URL: https://github.com/apache/hadoop/pull/2880


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580763)
Time Spent: 3h  (was: 2h 50m)

> TestKMS is flaky
> 
>
> Key: HADOOP-17608
> URL: https://issues.apache.org/jira/browse/HADOOP-17608
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: flaky-test, pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/460/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt]
> The following https tests are flaky:
>  * testStartStopHttpsPseudo
>  * testStartStopHttpsKerberos
>  * testDelegationTokensOpsHttpsPseudo
> {noformat}
> [ERROR] 
> testStartStopHttpsPseudo(org.apache.hadoop.crypto.key.kms.server.TestKMS)  
> Time elapsed: 1.354 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:553)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:534)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:258)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:235)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:230)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStop(TestKMS.java:534)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpsPseudo(TestKMS.java:634){noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17632) Please upgrade the log4j dependency to log4j2

2021-04-11 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-17632.
--
Fix Version/s: (was: 3.4.0)
   (was: 3.3.0)
   Resolution: Duplicate

> Please upgrade the log4j dependency to log4j2
> -
>
> Key: HADOOP-17632
> URL: https://issues.apache.org/jira/browse/HADOOP-17632
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0
>Reporter: helen huang
>Priority: Major
>
> The log4j dependency being used by hadoop-common is currently version 1.2.17. 
> Our Fortify scan picked up a couple of issues with this dependency. Please 
> upgrade it to the latest version of the log4j2 dependencies:
> <dependency>
>   <groupId>org.apache.logging.log4j</groupId>
>   <artifactId>log4j-api</artifactId>
>   <version>2.14.1</version>
> </dependency>
> <dependency>
>   <groupId>org.apache.logging.log4j</groupId>
>   <artifactId>log4j-core</artifactId>
>   <version>2.14.1</version>
> </dependency>
>  
> The slf4j dependency will need to be updated as well after you upgrade log4j 
> to log4j2.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka merged pull request #2880: HADOOP-17608. Fix TestKMS failure

2021-04-11 Thread GitBox


aajisaka merged pull request #2880:
URL: https://github.com/apache/hadoop/pull/2880


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2021-04-11 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17319008#comment-17319008
 ] 

Wei-Chiu Chuang commented on HADOOP-16206:
--

Is it possible to break this up, i.e., into Hadoop Common, HDFS, and MR? That 
would be easier to review/commit.

> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17608) TestKMS is flaky

2021-04-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17608?focusedWorklogId=580757&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580757
 ]

ASF GitHub Bot logged work on HADOOP-17608:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 02:50
Start Date: 12/Apr/21 02:50
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on pull request #2880:
URL: https://github.com/apache/hadoop/pull/2880#issuecomment-817442729


   TestTimelineClient -> YARN-10568


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580757)
Time Spent: 2h 50m  (was: 2h 40m)

> TestKMS is flaky
> 
>
> Key: HADOOP-17608
> URL: https://issues.apache.org/jira/browse/HADOOP-17608
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: flaky-test, pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/460/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt]
> The following https tests are flaky:
>  * testStartStopHttpsPseudo
>  * testStartStopHttpsKerberos
>  * testDelegationTokensOpsHttpsPseudo
> {noformat}
> [ERROR] 
> testStartStopHttpsPseudo(org.apache.hadoop.crypto.key.kms.server.TestKMS)  
> Time elapsed: 1.354 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:553)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:534)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:258)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:235)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:230)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStop(TestKMS.java:534)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpsPseudo(TestKMS.java:634){noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #2880: HADOOP-17608. Fix TestKMS failure

2021-04-11 Thread GitBox


aajisaka commented on pull request #2880:
URL: https://github.com/apache/hadoop/pull/2880#issuecomment-817442729


   TestTimelineClient -> YARN-10568


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17608) TestKMS is flaky

2021-04-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17608?focusedWorklogId=580756&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580756
 ]

ASF GitHub Bot logged work on HADOOP-17608:
---

Author: ASF GitHub Bot
Created on: 12/Apr/21 02:44
Start Date: 12/Apr/21 02:44
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on pull request #2880:
URL: https://github.com/apache/hadoop/pull/2880#issuecomment-817440919


   Thank you @iwasakims!
   
   `SSL_MONITORING_THREAD_NAME` is different from the truststore reloader 
thread, so those tests pass. However, I think the tests are flaky for the 
same reason as #2828, and we should fix them.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580756)
Time Spent: 2h 40m  (was: 2.5h)

> TestKMS is flaky
> 
>
> Key: HADOOP-17608
> URL: https://issues.apache.org/jira/browse/HADOOP-17608
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: flaky-test, pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/460/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt]
> The following https tests are flaky:
>  * testStartStopHttpsPseudo
>  * testStartStopHttpsKerberos
>  * testDelegationTokensOpsHttpsPseudo
> {noformat}
> [ERROR] 
> testStartStopHttpsPseudo(org.apache.hadoop.crypto.key.kms.server.TestKMS)  
> Time elapsed: 1.354 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:553)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:534)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:258)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:235)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:230)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStop(TestKMS.java:534)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpsPseudo(TestKMS.java:634){noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #2880: HADOOP-17608. Fix TestKMS failure

2021-04-11 Thread GitBox


aajisaka commented on pull request #2880:
URL: https://github.com/apache/hadoop/pull/2880#issuecomment-817440919


   Thank you @iwasakims!
   
   `SSL_MONITORING_THREAD_NAME` is different from the truststore reloader 
thread, so those tests pass. However, I think the tests are flaky for the 
same reason as #2828, and we should fix them.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2860: Test Pre-Commits.

2021-04-11 Thread GitBox


hadoop-yetus commented on pull request #2860:
URL: https://github.com/apache/hadoop/pull/2860#issuecomment-817406762


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 34s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  18m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   4m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 43s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  15m 37s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  21m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |  19m  7s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 52s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2860/8/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 1 new + 143 unchanged - 0 fixed = 144 total (was 
143)  |
   | +1 :green_heart: |  hadolint  |   0m  3s |  |  No new issues.  |
   | +1 :green_heart: |  mvnsite  |   4m 36s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   3m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   4m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 39s |  |  hadoop-project has no data from 
spotbugs  |
   | +1 :green_heart: |  shadedclient  |  15m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 44s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 47s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 238m  0s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2860/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |  18m 43s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2860/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 19s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 471m 54s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.TestPersistBlocks |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpc |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | 

[GitHub] [hadoop] jojochuang commented on a change in pull request #2889: HDFS-15963. Unreleased volume references cause an infinite loop.

2021-04-11 Thread GitBox


jojochuang commented on a change in pull request #2889:
URL: https://github.com/apache/hadoop/pull/2889#discussion_r611265911



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskAsyncLazyPersistService.java
##########
@@ -153,16 +154,24 @@ synchronized boolean queryVolume(FsVolumeImpl volume) {
    * Execute the task sometime in the future, using ThreadPools.
    */
   synchronized void execute(String storageId, Runnable task) {
-    if (executors == null) {
-      throw new RuntimeException(
-          "AsyncLazyPersistService is already shutdown");
-    }
-    ThreadPoolExecutor executor = executors.get(storageId);
-    if (executor == null) {
-      throw new RuntimeException("Cannot find root storage volume with id " +
-          storageId + " for execution of task " + task);
-    } else {
-      executor.execute(task);
+    try {

Review comment:
   If the task is executed, the reference is closed because 
ReplicaLazyPersistTask#run() encloses it in a try-with-resources block.
   It is only when the task is not executed that we have to close it explicitly.

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
##########
@@ -432,6 +432,7 @@
       ris = new ReplicaInputStreams(
           blockIn, checksumIn, volumeRef, fileIoProvider);
     } catch (IOException ioe) {
+      IOUtils.cleanupWithLogger(null, volumeRef);

Review comment:
   this is a good catch.
   
   But even without exceptions, shouldn't we close the reference when the 
BlockSender is closed too?

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
##########
@@ -1805,4 +1806,38 @@ public void testNotifyNamenodeMissingOrNewBlock() throws Exception {
       cluster.shutdown();
     }
   }
+
+  @Test

Review comment:
   We should add a timeout (a class-wide timeout is preferred) here.
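
   For instance, a class-wide timeout can be added with a JUnit 4 rule (a 
sketch; the 100-second value is arbitrary):

```
// Requires org.junit.Rule and org.junit.rules.Timeout. One rule at the
// top of the test class applies the timeout to every test method.
@Rule
public Timeout globalTimeout = Timeout.seconds(100);
```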

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetAsyncDiskService.java
##########
@@ -167,18 +167,26 @@ synchronized long countPendingDeletions() {
    * Execute the task sometime in the future, using ThreadPools.
    */
   synchronized void execute(FsVolumeImpl volume, Runnable task) {
-    if (executors == null) {
-      throw new RuntimeException("AsyncDiskService is already shutdown");
-    }
-    if (volume == null) {
-      throw new RuntimeException("A null volume does not have a executor");
-    }
-    ThreadPoolExecutor executor = executors.get(volume.getStorageID());
-    if (executor == null) {
-      throw new RuntimeException("Cannot find volume " + volume
-          + " for execution of task " + task);
-    } else {
-      executor.execute(task);
+    try {

Review comment:
   If the task is executed, the reference is closed at the end of 
ReplicaFileDeleteTask#run(). This part of the code handles the case where 
the task is not executed, so we have to close the reference explicitly.
   
   It might be a good idea to change the cleanup code in 
ReplicaFileDeleteTask#run()
   `  IOUtils.cleanupWithLogger(null, volumeRef);`
   to the same style as ReplicaLazyPersistTask#run(),
   `try (FsVolumeReference ref = volumeRef) {`
   in case ReplicaFileDeleteTask#run() throws an exception partway through 
and never closes the reference.
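
   A minimal sketch of that style (assuming FsVolumeReference is Closeable, 
as the ReplicaLazyPersistTask usage implies, and that LOG is the class's 
existing logger):

```
@Override
public void run() {
  // try-with-resources releases the volume reference even if the delete
  // logic throws partway through.
  try (FsVolumeReference ref = this.volumeRef) {
    // ... delete the replica files as before ...
  } catch (IOException e) {
    LOG.warn("Error releasing volume reference", e);  // close() may throw
  }
}
```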

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
##########
@@ -562,4 +565,57 @@ void writeBlock(ExtendedBlock block, BlockConstructionStage stage,
         checksum, CachingStrategy.newDefaultStrategy(), false, false,
         null, null, new String[0]);
   }
+
+  @Test
+  public void testReleaseVolumeRefIfExceptionThrown() throws IOException {

Review comment:
   It would be great if we could rewrite this test as a true unit test: a 
BlockSender with mocks. However, given the dependency on the DataNode class, I 
recognize this is not trivial.
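   As a rough, self-contained illustration of the mock-based direction 
(Mockito assumed; the simulated failure below stands in for a BlockSender 
constructor throwing after it has taken the reference):
   
   ```java
   import static org.mockito.Mockito.*;
   import java.io.IOException;
   
   FsVolumeReference volumeRef = mock(FsVolumeReference.class);
   try {
     // stand-in for "new BlockSender(...)" failing mid-construction
     throw new IOException("simulated stream failure");
   } catch (IOException e) {
     IOUtils.cleanupWithLogger(null, volumeRef);  // the cleanup under test
   }
   verify(volumeRef).close();  // the reference was released exactly once
   ```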

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
##
@@ -562,4 +565,57 @@ void writeBlock(ExtendedBlock block, 
BlockConstructionStage stage,
 checksum, CachingStrategy.newDefaultStrategy(), false, false,
 null, null, new String[0]);
   }
+
+  @Test
+  public void testReleaseVolumeRefIfExceptionThrown() throws IOException {
+Path file = new Path("dataprotocol.dat");
+int numDataNodes = 1;
+
+Configuration conf = new HdfsConfiguration();
+conf.setInt(DFSConfigKeys.DFS_REPLICATION_KEY, numDataNodes);
+MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(
+numDataNodes).build();
+try {
+  cluster.waitActive();
+  datanode = cluster.getFileSystem().getDataNodeStats(
+  DatanodeReportType.LIVE)[0];
+  dnAddr = 

[jira] [Work logged] (HADOOP-17611) Distcp parallel file copy breaks the modification time

2021-04-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17611?focusedWorklogId=580714=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580714
 ]

ASF GitHub Bot logged work on HADOOP-17611:
---

Author: ASF GitHub Bot
Created on: 11/Apr/21 19:23
Start Date: 11/Apr/21 19:23
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2892:
URL: https://github.com/apache/hadoop/pull/2892#issuecomment-817359032


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m  3s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  13m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 42s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  92m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2892/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2892 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 6bb31b0d0a0d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6c083276ad81798b7ab2cdc760753de60b67283e |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2892/1/testReport/ |
   | Max. process+thread count | 542 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2892/1/console |
   | versions | git=2.25.1 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2892: HADOOP-17611. Distcp parallel file copy should retain first chunk modifiedTime after concat

2021-04-11 Thread GitBox


hadoop-yetus commented on pull request #2892:
URL: https://github.com/apache/hadoop/pull/2892#issuecomment-817359032


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m  3s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  13m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 42s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  92m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2892/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2892 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 6bb31b0d0a0d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6c083276ad81798b7ab2cdc760753de60b67283e |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2892/1/testReport/ |
   | Max. process+thread count | 542 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2892/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Work started] (HADOOP-17611) Distcp parallel file copy breaks the modification time

2021-04-11 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17611 started by Viraj Jasani.
-
> Distcp parallel file copy breaks the modification time
> --
>
> Key: HADOOP-17611
> URL: https://issues.apache.org/jira/browse/HADOOP-17611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Adam Maroti
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The commit HADOOP-11794. Enable distcp to copy blocks in parallel. 
> (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of 
> large files.
>  
> In CopyCommitter.java, inside concatFileChunks, Filesystem.concat is called, 
> which changes the modification time; therefore the modification times of files 
> copied by distcp will not match the source files. However, this only occurs 
> for large enough files, which distcp copies by splitting them into chunks.
> In concatFileChunks, before calling concat, extract the modification time and 
> apply it to the concatenated result file after the concat (probably best 
> -after- before the rename()).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17611) Distcp parallel file copy breaks the modification time

2021-04-11 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17611:
--
Status: Patch Available  (was: In Progress)

> Distcp parallel file copy breaks the modification time
> --
>
> Key: HADOOP-17611
> URL: https://issues.apache.org/jira/browse/HADOOP-17611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Adam Maroti
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The commit HADOOP-11794. Enable distcp to copy blocks in parallel. 
> (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of 
> large files.
>  
> In CopyCommitter.java, inside concatFileChunks, Filesystem.concat is called, 
> which changes the modification time; therefore the modification times of files 
> copied by distcp will not match the source files. However, this only occurs 
> for large enough files, which distcp copies by splitting them into chunks.
> In concatFileChunks, before calling concat, extract the modification time and 
> apply it to the concatenated result file after the concat (probably best 
> -after- before the rename()).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17611) Distcp parallel file copy breaks the modification time

2021-04-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17611:

Labels: pull-request-available  (was: )

> Distcp parallel file copy breaks the modification time
> --
>
> Key: HADOOP-17611
> URL: https://issues.apache.org/jira/browse/HADOOP-17611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Adam Maroti
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The commit HADOOP-11794. Enable distcp to copy blocks in parallel. 
> (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of 
> large files.
>  
> In CopyCommitter.java, inside concatFileChunks, Filesystem.concat is called, 
> which changes the modification time; therefore the modification times of files 
> copied by distcp will not match the source files. However, this only occurs 
> for large enough files, which distcp copies by splitting them into chunks.
> In concatFileChunks, before calling concat, extract the modification time and 
> apply it to the concatenated result file after the concat (probably best 
> -after- before the rename()).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17611) Distcp parallel file copy breaks the modification time

2021-04-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17611?focusedWorklogId=580707=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580707
 ]

ASF GitHub Bot logged work on HADOOP-17611:
---

Author: ASF GitHub Bot
Created on: 11/Apr/21 17:50
Start Date: 11/Apr/21 17:50
Worklog Time Spent: 10m 
  Work Description: virajjasani opened a new pull request #2892:
URL: https://github.com/apache/hadoop/pull/2892


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580707)
Remaining Estimate: 0h
Time Spent: 10m

> Distcp parallel file copy breaks the modification time
> --
>
> Key: HADOOP-17611
> URL: https://issues.apache.org/jira/browse/HADOOP-17611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Adam Maroti
>Assignee: Viraj Jasani
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The commit HADOOP-11794. Enable distcp to copy blocks in parallel. 
> (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of 
> large files.
>  
> In CopyCommitter.java, inside concatFileChunks, Filesystem.concat is called, 
> which changes the modification time; therefore the modification times of files 
> copied by distcp will not match the source files. However, this only occurs 
> for large enough files, which distcp copies by splitting them into chunks.
> In concatFileChunks, before calling concat, extract the modification time and 
> apply it to the concatenated result file after the concat (probably best 
> -after- before the rename()).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani opened a new pull request #2892: HADOOP-17611. Distcp parallel file copy should retain first chunk modifiedTime after concat

2021-04-11 Thread GitBox


virajjasani opened a new pull request #2892:
URL: https://github.com/apache/hadoop/pull/2892


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-17611) Distcp parallel file copy breaks the modification time

2021-04-11 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HADOOP-17611:
-

Assignee: Viraj Jasani

> Distcp parallel file copy breaks the modification time
> --
>
> Key: HADOOP-17611
> URL: https://issues.apache.org/jira/browse/HADOOP-17611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Adam Maroti
>Assignee: Viraj Jasani
>Priority: Major
>
> The commit HADOOP-11794. Enable distcp to copy blocks in parallel. 
> (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of 
> large files.
>  
> In CopyCommitter.java, inside concatFileChunks, Filesystem.concat is called, 
> which changes the modification time; therefore the modification times of files 
> copied by distcp will not match the source files. However, this only occurs 
> for large enough files, which distcp copies by splitting them into chunks.
> In concatFileChunks, before calling concat, extract the modification time and 
> apply it to the concatenated result file after the concat (probably best 
> -after- before the rename()).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2860: Test Pre-Commits.

2021-04-11 Thread GitBox


hadoop-yetus commented on pull request #2860:
URL: https://github.com/apache/hadoop/pull/2860#issuecomment-817337091


   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2860/8/console in 
case of problems.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17611) Distcp parallel file copy breaks the modification time

2021-04-11 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17318848#comment-17318848
 ] 

Viraj Jasani edited comment on HADOOP-17611 at 4/11/21, 4:00 PM:
-

Ah, my bad. Thanks, it updates both the access time and the modification time. 
I was searching for direct usages of INode#setModificationTime(long 
modificationTime) but missed all the usages in FSDirAttrOp, which updates all 
sorts of attributes.

Thanks [~amaroti]


was (Author: vjasani):
Ah, my bad. Thanks, it updates both the access time and the modification time.

> Distcp parallel file copy breaks the modification time
> --
>
> Key: HADOOP-17611
> URL: https://issues.apache.org/jira/browse/HADOOP-17611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Adam Maroti
>Priority: Major
>
> The commit HADOOP-11794. Enable distcp to copy blocks in parallel. 
> (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of 
> large files.
>  
> In CopyCommitter.java, inside concatFileChunks, Filesystem.concat is called, 
> which changes the modification time; therefore the modification times of files 
> copied by distcp will not match the source files. However, this only occurs 
> for large enough files, which distcp copies by splitting them into chunks.
> In concatFileChunks, before calling concat, extract the modification time and 
> apply it to the concatenated result file after the concat (probably best 
> -after- before the rename()).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17611) Distcp parallel file copy breaks the modification time

2021-04-11 Thread Adam Maroti (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17318851#comment-17318851
 ] 

Adam Maroti commented on HADOOP-17611:
--

Yes, and it is also possible to use it to change just one or the other (or, 
obviously, both the access time and the modification time simultaneously).
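A small usage sketch: per the FileSystem#setTimes javadoc, -1 means "do not 
change this value", so either timestamp can be set independently:

{code:java}
fs.setTimes(path, newMtime, -1);  // change only the modification time
fs.setTimes(path, -1, newAtime);  // change only the access time
{code}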

Viraj Jasani (Jira) wrote (at: 11 Apr 2021, Sun



> Distcp parallel file copy breaks the modification time
> --
>
> Key: HADOOP-17611
> URL: https://issues.apache.org/jira/browse/HADOOP-17611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Adam Maroti
>Priority: Major
>
> The commit HADOOP-11794. Enable distcp to copy blocks in parallel. 
> (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of 
> large files.
>  
> In CopyCommitter.java, inside concatFileChunks, Filesystem.concat is called, 
> which changes the modification time; therefore the modification times of files 
> copied by distcp will not match the source files. However, this only occurs 
> for large enough files, which distcp copies by splitting them into chunks.
> In concatFileChunks, before calling concat, extract the modification time and 
> apply it to the concatenated result file after the concat (probably best 
> -after- before the rename()).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17611) Distcp parallel file copy breaks the modification time

2021-04-11 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17318848#comment-17318848
 ] 

Viraj Jasani commented on HADOOP-17611:
---

Ah, my bad. Thanks, it updates both the access time and the modification time.

> Distcp parallel file copy breaks the modification time
> --
>
> Key: HADOOP-17611
> URL: https://issues.apache.org/jira/browse/HADOOP-17611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Adam Maroti
>Priority: Major
>
> The commit HADOOP-11794. Enable distcp to copy blocks in parallel. 
> (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of 
> large files.
>  
> In CopyCommitter.java, inside concatFileChunks, Filesystem.concat is called, 
> which changes the modification time; therefore the modification times of files 
> copied by distcp will not match the source files. However, this only occurs 
> for large enough files, which distcp copies by splitting them into chunks.
> In concatFileChunks, before calling concat, extract the modification time and 
> apply it to the concatenated result file after the concat (probably best 
> -after- before the rename()).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17611) Distcp parallel file copy breaks the modification time

2021-04-11 Thread Adam Maroti (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17318843#comment-17318843
 ] 

Adam Maroti commented on HADOOP-17611:
--

It already has an API for that: FileSystem.setTimes(Path, long, long)
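A minimal sketch of the proposed fix using that API (variable names here are 
illustrative, not the actual patch):

{code:java}
// Capture the first chunk's timestamps before concat bumps them,
// then restore them on the concatenated result.
FileStatus before = fs.getFileStatus(firstChunk);
fs.concat(firstChunk, otherChunks);  // concat updates the mtime
fs.setTimes(firstChunk, before.getModificationTime(), before.getAccessTime());
{code}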

Viraj Jasani (Jira) wrote (at: 11 Apr 2021, Sun



> Distcp parallel file copy breaks the modification time
> --
>
> Key: HADOOP-17611
> URL: https://issues.apache.org/jira/browse/HADOOP-17611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Adam Maroti
>Priority: Major
>
> The commit HADOOP-11794. Enable distcp to copy blocks in parallel. 
> (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of 
> large files.
>  
> In CopyCommitter.java, inside concatFileChunks, Filesystem.concat is called, 
> which changes the modification time; therefore the modification times of files 
> copied by distcp will not match the source files. However, this only occurs 
> for large enough files, which distcp copies by splitting them into chunks.
> In concatFileChunks, before calling concat, extract the modification time and 
> apply it to the concatenated result file after the concat (probably best 
> -after- before the rename()).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17611) Distcp parallel file copy breaks the modification time

2021-04-11 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17318836#comment-17318836
 ] 

Viraj Jasani edited comment on HADOOP-17611 at 4/11/21, 3:39 PM:
-

It is interesting that we could retain the modificationTime of the target file 
by updating it after the Filesystem.concat() operation; however, I am not sure 
whether HDFS really provides (or should provide) an API to update a file's 
modificationTime (which internally updates the modificationTime in the INode).

[~ayushtkn] [~liuml07] [~tasanuma] [~weichiu] thoughts?


was (Author: vjasani):
It is interesting that we could retain the modificationTime of the target file 
by updating it after the Filesystem.concat() operation; however, I am not sure 
whether HDFS really provides (or should provide) an API to update a file's 
modificationTime at the INode level.

[~ayushtkn] [~liuml07] [~tasanuma] [~weichiu] thoughts?

> Distcp parallel file copy breaks the modification time
> --
>
> Key: HADOOP-17611
> URL: https://issues.apache.org/jira/browse/HADOOP-17611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Adam Maroti
>Priority: Major
>
> The commit HADOOP-11794. Enable distcp to copy blocks in parallel. 
> (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of 
> large files.
>  
> In CopyCommitter.java, inside concatFileChunks, Filesystem.concat is called, 
> which changes the modification time; therefore the modification times of files 
> copied by distcp will not match the source files. However, this only occurs 
> for large enough files, which distcp copies by splitting them into chunks.
> In concatFileChunks, before calling concat, extract the modification time and 
> apply it to the concatenated result file after the concat (probably best 
> -after- before the rename()).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17611) Distcp parallel file copy breaks the modification time

2021-04-11 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17318836#comment-17318836
 ] 

Viraj Jasani commented on HADOOP-17611:
---

It is interesting that we could retain the modificationTime of the target file 
by updating it after the Filesystem.concat() operation; however, I am not sure 
whether HDFS really provides (or should provide) an API to update a file's 
modificationTime at the INode level.

[~ayushtkn] [~liuml07] [~tasanuma] [~weichiu] thoughts?

> Distcp parallel file copy breaks the modification time
> --
>
> Key: HADOOP-17611
> URL: https://issues.apache.org/jira/browse/HADOOP-17611
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Adam Maroti
>Priority: Major
>
> The commit HADOOP-11794. Enable distcp to copy blocks in parallel. 
> (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of 
> large files.
>  
> In CopyCommitter.java, inside concatFileChunks, Filesystem.concat is called, 
> which changes the modification time; therefore the modification times of files 
> copied by distcp will not match the source files. However, this only occurs 
> for large enough files, which distcp copies by splitting them into chunks.
> In concatFileChunks, before calling concat, extract the modification time and 
> apply it to the concatenated result file after the concat (probably best 
> -after- before the rename()).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17633) Please upgrade json-smart dependency to the latest version

2021-04-11 Thread helen huang (Jira)
helen huang created HADOOP-17633:


 Summary: Please upgrade json-smart dependency to the latest version
 Key: HADOOP-17633
 URL: https://issues.apache.org/jira/browse/HADOOP-17633
 Project: Hadoop Common
  Issue Type: Improvement
  Components: auth
Affects Versions: 3.2.2, 3.2.1, 3.3.0, 3.4.0
Reporter: helen huang
 Fix For: 3.4.0, 3.3.0


Please upgrade the json-smart dependency to the latest version available.

Currently hadoop-auth is using version 2.3. A Fortify scan picked up a 
security issue with this version. Please upgrade to the latest version.

Thanks!

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] gurshafriri commented on pull request #2608: YARN-10555. missing access check before getAppAttempts

2021-04-11 Thread GitBox


gurshafriri commented on pull request #2608:
URL: https://github.com/apache/hadoop/pull/2608#issuecomment-817311708


   Do you plan to merge this fix, and do you see this as a valid potential 
security issue? 
   If it is, we (at [snyk](https://snyk.io)) would like to add it to our 
vulnerability DB.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17632) Please upgrade the log4j dependency to log4j2

2021-04-11 Thread helen huang (Jira)
helen huang created HADOOP-17632:


 Summary: Please upgrade the log4j dependency to log4j2
 Key: HADOOP-17632
 URL: https://issues.apache.org/jira/browse/HADOOP-17632
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common
Affects Versions: 3.3.0
Reporter: helen huang
 Fix For: 3.4.0, 3.3.0


The log4j dependency used by hadoop-common is currently version 1.2.17. 
Our Fortify scan picked up a couple of issues with this dependency. Please 
upgrade it to the latest log4j2 dependencies:

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-api</artifactId>
  <version>2.14.1</version>
</dependency>

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-core</artifactId>
  <version>2.14.1</version>
</dependency>

The slf4j dependency will need to be updated as well after you upgrade log4j to 
log4j2.

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2889: HDFS-15963. Unreleased volume references cause an infinite loop.

2021-04-11 Thread GitBox


hadoop-yetus commented on pull request #2889:
URL: https://github.com/apache/hadoop/pull/2889#issuecomment-817304716


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 16s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 56s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2889/4/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 231 unchanged 
- 0 fixed = 233 total (was 231)  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 25s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 25s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 382m 34s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2889/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 480m 26s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
   |   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.server.datanode.TestBlockScanner |
   |   | hadoop.hdfs.TestDFSShell |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.TestStateAlignmentContextWithHA |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   |   | hadoop.hdfs.TestHDFSFileSystemContract |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
   |   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.TestPersistBlocks |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2889/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2889 |
   | Optional Tests | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2889: HDFS-15963. Unreleased volume references cause an infinite loop.

2021-04-11 Thread GitBox


hadoop-yetus commented on pull request #2889:
URL: https://github.com/apache/hadoop/pull/2889#issuecomment-817302324


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 36s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 54s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2889/3/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 231 unchanged 
- 0 fixed = 233 total (was 231)  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 373m  6s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2889/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  2s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 471m  5s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.server.datanode.TestBlockScanner |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.TestStateAlignmentContextWithHA |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   |   | hadoop.hdfs.TestHDFSFileSystemContract |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.TestPersistBlocks |
   |   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
   |   | hadoop.hdfs.server.datanode.TestIncrementalBrVariations |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2889/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2889 |
 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2889: HDFS-15963. Unreleased volume references cause an infinite loop.

2021-04-11 Thread GitBox


hadoop-yetus commented on pull request #2889:
URL: https://github.com/apache/hadoop/pull/2889#issuecomment-817297218


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 55s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2889/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 231 unchanged 
- 0 fixed = 233 total (was 231)  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 345m 38s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2889/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 440m 25s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   |   | hadoop.hdfs.server.datanode.TestBlockScanner |
   |   | hadoop.hdfs.TestStateAlignmentContextWithHA |
   |   | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
   |   | hadoop.hdfs.server.mover.TestMover |
   |   | hadoop.hdfs.TestPersistBlocks |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2889/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2889 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2889: HDFS-15963. Unreleased volume references cause an infinite loop.

2021-04-11 Thread GitBox


hadoop-yetus commented on pull request #2889:
URL: https://github.com/apache/hadoop/pull/2889#issuecomment-817290354


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m  4s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 53s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2889/5/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 243 unchanged 
- 0 fixed = 245 total (was 243)  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 233m 27s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2889/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 320m 42s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2889/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2889 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 9aa1ae06644e 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fa37c6a2a87974b3850fa1d3c009ed2d075fc5a7 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
  

[jira] [Commented] (HADOOP-9642) Configuration to resolve environment variables via ${env.VARIABLE} references

2021-04-11 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-9642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17318733#comment-17318733
 ] 

Steve Loughran commented on HADOOP-9642:


Got a follow-on to this: HADOOP-17631

If env var access is restricted, I want the resolution of ${env.VAR:-FALLBACK} 
to go to evaluation of the fallback, rather than just returning the string 
unexpanded. I think this makes sense in the concept of "fallback" -just treat 
the var as unset; and it allows you to put env variable refs into 
core-defaults (HADOOP-17386) without worrying about breaking things.

Can anyone see any security implications from such a change? I can't. We'd be 
treating all vars as resolving to null, so there's no info leakage about 
whether a var is set/unset. 
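A sketch of the intended behaviour (the property name and variable are made up 
for illustration):

{code:java}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
conf.set("fs.example.endpoint", "${env.MY_VAR:-https://default.example.org}");

// Today: if access to MY_VAR is restricted, get() can return the reference
// unexpanded. With the proposed change it would fall back, as if unset:
String v = conf.get("fs.example.endpoint");
// v == "https://default.example.org" when MY_VAR cannot be read
{code}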

> Configuration to resolve environment variables via ${env.VARIABLE} references
> -
>
> Key: HADOOP-9642
> URL: https://issues.apache.org/jira/browse/HADOOP-9642
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: conf, scripts
>Affects Versions: 2.1.0-beta, 3.0.0-alpha1
>Reporter: Steve Loughran
>Assignee: Kengo Seki
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-9642.001.patch, HADOOP-9642.002.patch
>
>
> We should be able to get env variables from Configuration files, rather than 
> just system properties. I propose using the traditional {{env}} prefix 
> {{${env.PATH}}} to make it immediately clear to people reading a conf file 
> that it's an env variable -and to avoid any confusion with system properties 
> and existing configuration properties.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2860: Test Pre-Commits.

2021-04-11 Thread GitBox


hadoop-yetus commented on pull request #2860:
URL: https://github.com/apache/hadoop/pull/2860#issuecomment-817270989


   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2860/7/console in 
case of problems.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org