[GitHub] [hadoop] Jing9 commented on a change in pull request #2738: HDFS-15842. HDFS mover to emit metrics.

2021-03-07 Thread GitBox


Jing9 commented on a change in pull request #2738:
URL: https://github.com/apache/hadoop/pull/2738#discussion_r589216233



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
##
@@ -649,6 +664,11 @@ static int run(Map> namenodes, 
Configuration conf)
 Map> excludedPinnedBlocks = new HashMap<>();
 LOG.info("namenodes = " + namenodes);
 
+DefaultMetricsSystem.initialize("Mover");

Review comment:
   Does Balancer have similar metrics? If not, shall we use this PR or 
create a new one to do that?
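
   For reference, a minimal sketch of how a metrics source could be registered once `DefaultMetricsSystem.initialize("Mover")` has run — the `MoverMetrics` class, its field, and its names below are hypothetical illustrations, not the code in this PR:

   ```java
   import org.apache.hadoop.metrics2.annotation.Metric;
   import org.apache.hadoop.metrics2.annotation.Metrics;
   import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
   import org.apache.hadoop.metrics2.lib.MutableCounterLong;

   // Hypothetical metrics source, for illustration only; the PR's actual
   // Mover metrics class may use different fields and names.
   @Metrics(about = "Mover metrics", context = "dfs")
   class MoverMetrics {

     @Metric("Bytes the Mover has scheduled for movement")
     private MutableCounterLong bytesScheduled;

     static MoverMetrics create() {
       // Assumes DefaultMetricsSystem.initialize("Mover") was already called.
       return DefaultMetricsSystem.instance().register(
           "MoverMetrics", "Metrics for the HDFS Mover", new MoverMetrics());
     }

     void incrBytesScheduled(long delta) {
       bytesScheduled.incr(delta);
     }
   }
   ```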

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/package-info.java
##
@@ -0,0 +1,27 @@
+/**

Review comment:
   Do we need to add this file?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17548) ABFS: Config for Mkdir overwrite

2021-03-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17548?focusedWorklogId=562159&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-562159
 ]

ASF GitHub Bot logged work on HADOOP-17548:
---

Author: ASF GitHub Bot
Created on: 08/Mar/21 07:27
Start Date: 08/Mar/21 07:27
Worklog Time Spent: 10m 
  Work Description: sumangala-patki edited a comment on pull request #2729:
URL: https://github.com/apache/hadoop/pull/2729#issuecomment-791305493


   TEST RESULTS
   
   HNS Account Location: East US 2
   NonHNS Account Location: East US 2, Central US
   Overwrite=true
   
   ```
   HNS OAuth
   
   [INFO] Tests run: 93, Failures: 0, Errors: 0, Skipped: 0
   [WARNING] Tests run: 513, Failures: 0, Errors: 0, Skipped: 70
   [WARNING] Tests run: 257, Failures: 0, Errors: 0, Skipped: 48
   
   HNS SharedKey
   
   [INFO] Tests run: 93, Failures: 0, Errors: 0, Skipped: 0
   [WARNING] Tests run: 513, Failures: 0, Errors: 0, Skipped: 26
   [WARNING] Tests run: 257, Failures: 0, Errors: 0, Skipped: 40
   
   Non-HNS SharedKey
   
   [INFO] Tests run: 93, Failures: 0, Errors: 0, Skipped: 0
   [WARNING] Tests run: 504, Failures: 0, Errors: 0, Skipped: 250
   [WARNING] Tests run: 257, Failures: 0, Errors: 0, Skipped: 40
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 562159)
Time Spent: 40m  (was: 0.5h)

> ABFS: Config for Mkdir overwrite
> 
>
> Key: HADOOP-17548
> URL: https://issues.apache.org/jira/browse/HADOOP-17548
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.1
>Reporter: Sumangala Patki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The call to mkdirs with overwrite set to true results in an additional call 
> to set properties (LMT update, etc) at the backend, which is not required for 
> the HDFS scenario. Moreover, mkdirs on an existing file path returns success. 
> This PR provides an option to set the overwrite parameter to false, and 
> ensures that mkdirs on a file throws an exception.
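
To illustrate the behavior change being described (a sketch only — the configuration key below is a placeholder, not the actual option introduced by this PR):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch of the semantics discussed above: with the (hypothetical) overwrite
// option disabled, mkdirs on a path that is already a file should fail instead
// of silently returning success.
public class MkdirsOnFileExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setBoolean("fs.azure.mkdirs.overwrite", false); // placeholder key
    try (FileSystem fs = FileSystem.get(conf)) {
      Path file = new Path("/test/afile");
      fs.create(file).close();          // create a plain file
      try {
        fs.mkdirs(file);                // expected to throw once the fix is in
      } catch (java.io.IOException e) {
        System.out.println("mkdirs on a file path was rejected: " + e);
      }
    }
  }
}
```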



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sumangala-patki edited a comment on pull request #2729: HADOOP-17548. ABFS: Toggle Store Mkdirs request overwrite parameter

2021-03-07 Thread GitBox


sumangala-patki edited a comment on pull request #2729:
URL: https://github.com/apache/hadoop/pull/2729#issuecomment-791305493


   TEST RESULTS
   
   HNS Account Location: East US 2
   NonHNS Account Location: East US 2, Central US
   Overwrite=true
   
   ```
   HNS OAuth
   
   [INFO] Tests run: 93, Failures: 0, Errors: 0, Skipped: 0
   [WARNING] Tests run: 513, Failures: 0, Errors: 0, Skipped: 70
   [WARNING] Tests run: 257, Failures: 0, Errors: 0, Skipped: 48
   
   HNS SharedKey
   
   [INFO] Tests run: 93, Failures: 0, Errors: 0, Skipped: 0
   [WARNING] Tests run: 513, Failures: 0, Errors: 0, Skipped: 26
   [WARNING] Tests run: 257, Failures: 0, Errors: 0, Skipped: 40
   
   Non-HNS SharedKey
   
   [INFO] Tests run: 93, Failures: 0, Errors: 0, Skipped: 0
   [WARNING] Tests run: 504, Failures: 0, Errors: 0, Skipped: 250
   [WARNING] Tests run: 257, Failures: 0, Errors: 0, Skipped: 40
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jianghuazhu commented on pull request #2741: HDFS-15855.Solve the problem of incorrect EC progress when loading FsImage.

2021-03-07 Thread GitBox


jianghuazhu commented on pull request #2741:
URL: https://github.com/apache/hadoop/pull/2741#issuecomment-792510458


   @jojochuang , thanks for your comment.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15880) WASB doesn't honor fs.trash.interval and this fails to auto purge trash folder

2021-03-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15880?focusedWorklogId=562148&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-562148
 ]

ASF GitHub Bot logged work on HADOOP-15880:
---

Author: ASF GitHub Bot
Created on: 08/Mar/21 06:35
Start Date: 08/Mar/21 06:35
Worklog Time Spent: 10m 
  Work Description: lamber-ken closed pull request #2750:
URL: https://github.com/apache/hadoop/pull/2750


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 562148)
Time Spent: 20m  (was: 10m)

> WASB doesn't honor fs.trash.interval and this fails to auto purge trash folder
> --
>
> Key: HADOOP-15880
> URL: https://issues.apache.org/jira/browse/HADOOP-15880
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, fs/azure
>Affects Versions: 2.7.3
> Environment: Any HDInsigth cluster pointing to WASB. 
>Reporter: Sunil Kumar Chakrapani
>Priority: Minor
>  Labels: WASB, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> when "fs.trash.interval" is set to a value,  trash for the local hdfs got 
> cleared where as the trash folder on WASB doesn't get deleted and the files 
> get piled up on WASB store..
> WASB doesn't pick up  fs.trash.interval value and this fails to auto purge 
> trash folder on WASB store.
>  
> *Issue : WASB doesn't honor fs.trash.interval and this fails to auto purge 
> trash folder*
> *Steps to reproduce Scenario:*
> *Delete any file stored on HDFS*
> hdfs dfs -D "fs.default.name=hdfs://mycluster/" -rm /hivestore.txt
> 18/10/23 06:18:05 INFO fs.TrashPolicyDefault: Moved: 
> 'hdfs://mycluster/hivestore.txt' to trash at: 
> hdfs://mycluster/user/sshuser/.Trash/Current/hivestore.txt
> *When deleted the file is moved to trash folder* 
> hdfs dfs -rm wasb:///hivestore.txt
> 18/10/23 06:19:13 INFO fs.TrashPolicyDefault: Moved: 
> 'wasb://kcspark-2018-10-18t17-07-40-5...@kcdnsproxy.blob.core.windows.net/hivestore.txt'
>  to trash at: 
> wasb://kcspark-2018-10-18t17-07-40-5...@kcdnsproxy.blob.core.windows.net/user/sshuser/.Trash/Current/hivestore.txt
> *Reduced the fs.trash.interval from 360 to 1 and restarted all related 
> services.*
> *Trash for the local hdfs gets cleared honoring the "fs.trash.interval" 
> value.*
> hdfs dfs -D "fs.default.name=hdfs://mycluster/" -ls 
> hdfs://mycluster/user/sshuser/.Trash/Current/
> ls: File hdfs://mycluster/user/sshuser/.Trash/Current does not exist.
> *Whereas the trash for WASB doesn't get cleared.*
> hdfs dfs -ls 
> wasb://kcspark-2018-10-18t17-07-40-5...@kcdnsproxy.blob.core.windows.net/user/sshuser/.Trash/Current/
> Found 1 items
> -rw-r--r-- 1 sshuser supergroup 1084 2018-10-23 06:19 
> wasb://kcspark-2018-10-18t17-07-40-5...@kcdnsproxy.blob.core.windows.net/user/sshuser/.Trash/Current/hivestore.txt
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] lamber-ken closed pull request #2750: HADOOP-15880. Reduce redundant end logsegment rpc

2021-03-07 Thread GitBox


lamber-ken closed pull request #2750:
URL: https://github.com/apache/hadoop/pull/2750


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut commented on pull request #2748: HDFS-15879. Exclude slow nodes when choose targets for blocks

2021-03-07 Thread GitBox


tomscut commented on pull request #2748:
URL: https://github.com/apache/hadoop/pull/2748#issuecomment-792498672


   Hi @Hexiaoqiao , could you please help review the code? Thank you.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17532) Yarn Job execution get failed when LZ4 Compression Codec is used

2021-03-07 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17297091#comment-17297091
 ] 

Hadoop QA commented on HADOOP-17532:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
31s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
32s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
41m 35s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green}{color} | {color:green} The patch has no ill-formed 
XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 46s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
|| || || || {color:brown} Other Tests {color} || ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green}{color} | {color:green} hadoop-kafka in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense 

[GitHub] [hadoop] tomscut commented on pull request #2752: HDFS-15883. Add a metric BlockReportQueueFullCount

2021-03-07 Thread GitBox


tomscut commented on pull request #2752:
URL: https://github.com/apache/hadoop/pull/2752#issuecomment-792494713


   Thanks @Hexiaoqiao for your advice. This metric 
org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics#blockOpsQueued 
can reflect the length of the block report queue.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17532) Yarn Job execution get failed when LZ4 Compression Codec is used

2021-03-07 Thread Bhavik Patel (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17297070#comment-17297070
 ] 

Bhavik Patel commented on HADOOP-17532:
---

[~csun] / [~viirya] I have attached the *.002.patch, which only excludes the 
older version (1.2.0) of the jar from the Kafka client dependency.

> Yarn Job execution get failed when LZ4 Compression Codec is used
> 
>
> Key: HADOOP-17532
> URL: https://issues.apache.org/jira/browse/HADOOP-17532
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bhavik Patel
>Priority: Major
> Attachments: HADOOP-17532.001.patch, HADOOP-17532.002.patch, LZ4.png, 
> lz4-test.jpg
>
>
> When we try to compress a file using the LZ4 compression codec, the 
> YARN job fails with the error message:
> {code:java}
> net.jpountz.lz4.LZ4Compressor.compress(Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)V
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17532) Yarn Job execution get failed when LZ4 Compression Codec is used

2021-03-07 Thread Bhavik Patel (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bhavik Patel updated HADOOP-17532:
--
Attachment: HADOOP-17532.002.patch

> Yarn Job execution get failed when LZ4 Compression Codec is used
> 
>
> Key: HADOOP-17532
> URL: https://issues.apache.org/jira/browse/HADOOP-17532
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bhavik Patel
>Priority: Major
> Attachments: HADOOP-17532.001.patch, HADOOP-17532.002.patch, LZ4.png, 
> lz4-test.jpg
>
>
> When we try to compress a file using the LZ4 compression codec, the 
> YARN job fails with the error message:
> {code:java}
> net.jpountz.lz4.LZ4Compressor.compress(Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)V
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao commented on pull request #2752: HDFS-15883. Add a metric BlockReportQueueFullCount

2021-03-07 Thread GitBox


Hexiaoqiao commented on pull request #2752:
URL: https://github.com/apache/hadoop/pull/2752#issuecomment-792454800


   Thanks @tomscut for your work. IIUC, there is already a metric for the number of 
blockReports and blockReceivedAndDeleted operations queued; please refer to 
`org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics#blockOpsQueued`. 
Is it helpful for you?
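
   For clarity, a hedged sketch of the distinction being discussed — a gauge that tracks the current queue length (what `blockOpsQueued` reports) versus a counter for how often the queue was full (what HDFS-15883 proposes). The class, fields, and update sites below are illustrative, not the NameNode's actual code:

   ```java
   import java.util.concurrent.ArrayBlockingQueue;
   import java.util.concurrent.BlockingQueue;
   import org.apache.hadoop.metrics2.annotation.Metric;
   import org.apache.hadoop.metrics2.annotation.Metrics;
   import org.apache.hadoop.metrics2.lib.MutableCounterLong;
   import org.apache.hadoop.metrics2.lib.MutableGaugeInt;

   // Hypothetical sketch only: contrasts a queue-length gauge with a
   // queue-full counter for block report operations.
   @Metrics(about = "Block report queue metrics", context = "dfs")
   class BlockReportQueueMetrics {

     @Metric("Current number of queued block report operations")
     private MutableGaugeInt blockOpsQueued;

     @Metric("Number of times the block report queue was full")
     private MutableCounterLong blockReportQueueFullCount;

     private final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(1024);

     boolean enqueue(Runnable op) {
       boolean accepted = queue.offer(op);
       if (!accepted) {
         blockReportQueueFullCount.incr();   // queue was full at this moment
       }
       blockOpsQueued.set(queue.size());     // current backlog
       return accepted;
     }
   }
   ```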



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sungpeo commented on a change in pull request #1942: MAPREDUCE-7270. TestHistoryViewerPrinter could be failed when the locale isn't English.

2021-03-07 Thread GitBox


sungpeo commented on a change in pull request #1942:
URL: https://github.com/apache/hadoop/pull/1942#discussion_r589162197



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestHistoryViewerPrinter.java
##
@@ -43,6 +46,19 @@
 
   private final String LINE_SEPARATOR = System.lineSeparator();
 
+  private static Locale DEFAULT_LOCALE;

Review comment:
   @liuml07 
   I think the static DEFAULT_LOCALE could be stale during other test suites or 
settings, because DEFAULT_LOCALE will be set before all of the tests.
   
   What do you think?
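
   For reference, a minimal sketch of the pattern under discussion (saving the JVM default locale before the suite and restoring it afterwards); names here are illustrative, not necessarily the exact code in this PR:

   ```java
   import java.util.Locale;
   import org.junit.AfterClass;
   import org.junit.BeforeClass;

   // Minimal sketch: capture the JVM default locale before the suite runs,
   // force a fixed locale for the tests, and restore the original afterwards.
   public class LocaleSensitiveTest {
     private static Locale defaultLocale;

     @BeforeClass
     public static void forceEnglishLocale() {
       defaultLocale = Locale.getDefault();   // remember whatever was set before
       Locale.setDefault(Locale.ENGLISH);     // tests assume English formatting
     }

     @AfterClass
     public static void restoreLocale() {
       Locale.setDefault(defaultLocale);      // avoid leaking into other suites
     }
   }
   ```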





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] qizhu-lucas commented on pull request #2744: HDFS-15874: Extend TopMetrics to support callerContext aggregation.

2021-03-07 Thread GitBox


qizhu-lucas commented on pull request #2744:
URL: https://github.com/apache/hadoop/pull/2744#issuecomment-792447878


   @Hexiaoqiao  @ayushtkn @jojochuang 
   Could you please help review this?
   Thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ferhui commented on pull request #2746: HDFS-15875. Check whether file is being truncated before truncate

2021-03-07 Thread GitBox


ferhui commented on pull request #2746:
URL: https://github.com/apache/hadoop/pull/2746#issuecomment-792434235


   @ayushtkn Thanks for the review! Will commit the fix soon!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut commented on pull request #2739: HDFS-15870. Remove unused configuration dfs.namenode.stripe.min

2021-03-07 Thread GitBox


tomscut commented on pull request #2739:
URL: https://github.com/apache/hadoop/pull/2739#issuecomment-792432649


   > @tomscut Sorry, we cannot modify commit logs once we pushed them.
   
   OK, thank you for your answer.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut commented on pull request #2748: HDFS-15879. Exclude slow nodes when choose targets for blocks

2021-03-07 Thread GitBox


tomscut commented on pull request #2748:
URL: https://github.com/apache/hadoop/pull/2748#issuecomment-792431167


   Failed junit tests:
   hadoop.hdfs.server.namenode.TestFsck 
   
   This failed unit test was unrelated to the change, and it worked fine locally.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on pull request #2741: HDFS-15855.Solve the problem of incorrect EC progress when loading FsImage.

2021-03-07 Thread GitBox


jojochuang commented on pull request #2741:
URL: https://github.com/apache/hadoop/pull/2741#issuecomment-792431132


   According to Jenkins the only test that failed was TestBalancer, which is 
unrelated to this patch.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #2745: Test YETUS-1102 (Add an option to comment to GitHub PR)

2021-03-07 Thread GitBox


aajisaka commented on pull request #2745:
URL: https://github.com/apache/hadoop/pull/2745#issuecomment-792415272


   YETUS-1102 is not merged into the Yetus main branch. I'll push this.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tasanuma commented on pull request #2739: HDFS-15870. Remove unused configuration dfs.namenode.stripe.min

2021-03-07 Thread GitBox


tasanuma commented on pull request #2739:
URL: https://github.com/apache/hadoop/pull/2739#issuecomment-792414503


   @tomscut Sorry, we cannot modify commit logs once we pushed them.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut commented on pull request #2743: HDFS-15873. Add namenode address in logs for block report

2021-03-07 Thread GitBox


tomscut commented on pull request #2743:
URL: https://github.com/apache/hadoop/pull/2743#issuecomment-792414276


   Failed junit tests:
   hadoop.hdfs.server.namenode.ha.TestPipelinesFailover 
   
   Those failed unit tests were unrelated to the change, and they worked fine 
locally.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut commented on pull request #2752: HDFS-15883. Add a metric BlockReportQueueFullCount

2021-03-07 Thread GitBox


tomscut commented on pull request #2752:
URL: https://github.com/apache/hadoop/pull/2752#issuecomment-792406701


   Failed junit tests:
   hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl 
   hadoop.hdfs.server.balancer.TestBalancer 
   
   Those failed unit tests were unrelated to the change, and they worked fine 
locally.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17568) Mapred/YARN job fails due to kms-dt can't be found in cache with LoadBalancingKMSClientProvider + Kerberos

2021-03-07 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17297017#comment-17297017
 ] 

Akira Ajisaka commented on HADOOP-17568:


I could run MapReduce jobs successfully with multiple KMS instances in Hadoop 
3.3.0. What values do you set for the following parameters in your kms-site?

* hadoop.kms.authentication.signer.secret.provider
* hadoop.kms.authentication.signer.secret.provider.zookeeper.path
* hadoop.kms.authentication.signer.secret.provider.zookeeper.connection.string
* hadoop.kms.authentication.signer.secret.provider.zookeeper.auth.type

Note that if the hadoop.kms.authentication.signer.secret.provider.auth.type is 
kerberos, 
hadoop.kms.authentication.signer.secret.provider.zookeeper.kerberos.keytab and 
hadoop.kms.authentication.signer.secret.provider.zookeeper.kerberos.principal 
are required.
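
For reference, the same settings one would normally put in kms-site.xml, shown here as a Configuration sketch purely to keep the property names in one place; the ZooKeeper connection string, path, keytab, and principal values are placeholders, not recommendations:

```java
import org.apache.hadoop.conf.Configuration;

// Sketch only: property names are those listed above, values are placeholders.
public class KmsZkSignerSecretExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("hadoop.kms.authentication.signer.secret.provider", "zookeeper");
    conf.set("hadoop.kms.authentication.signer.secret.provider.zookeeper.path",
        "/kms/signer-secret");
    conf.set("hadoop.kms.authentication.signer.secret.provider.zookeeper.connection.string",
        "zk1:2181,zk2:2181,zk3:2181");
    conf.set("hadoop.kms.authentication.signer.secret.provider.zookeeper.auth.type",
        "kerberos");
    // Required when the ZooKeeper auth type is kerberos:
    conf.set("hadoop.kms.authentication.signer.secret.provider.zookeeper.kerberos.keytab",
        "/etc/security/keytabs/kms.keytab");
    conf.set("hadoop.kms.authentication.signer.secret.provider.zookeeper.kerberos.principal",
        "kms/_HOST@EXAMPLE.COM");
  }
}
```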

> Mapred/YARN job fails due to kms-dt can't be found in cache with 
> LoadBalancingKMSClientProvider + Kerberos
> --
>
> Key: HADOOP-17568
> URL: https://issues.apache.org/jira/browse/HADOOP-17568
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, security
>Affects Versions: 3.2.2
>Reporter: Zbigniew Kostrzewa
>Priority: Major
>
> I deployed Hadoop 3.2.2 cluster with KMS in HA using 
> LoadBalancingKMSClientProvider with Kerberos authentication. KMS instances 
> are configured with ZooKeeper for storing the shared secret.
> I have created an encryption key and an encryption zone in `/test` directory 
> and executed `randomtextwriter` from mapreduce examples passing it a 
> sub-directory in the encryption zone:
> {code:java}
> hadoop jar hadoop-mapreduce-examples-3.2.2.jar randomtextwriter 
> /test/randomtextwriter
> {code}
> Unfortunately the job keeps failing with errors like:
> {code:java}
> java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.token.SecretManager$InvalidToken: token (kms-dt 
> owner=packer, renewer=packer, realUser=, issueDate=1615146155993, 
> maxDate=1615750955993, sequenceNumber=1, masterKeyId=2) can't be found in 
> cache
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:363)
>   at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
>   at 
> org.apache.hadoop.hdfs.HdfsKMSUtil.decryptEncryptedDataEncryptionKey(HdfsKMSUtil.java:212)
>   at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:972)
>   at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:952)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:536)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:530)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:544)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:471)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1125)
>   at 
> org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1168)
>   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:285)
>   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:542)
>   at 
> org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getSequenceWriter(SequenceFileOutputFormat.java:64)
>   at 
> org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getRecordWriter(SequenceFileOutputFormat.java:75)
>   at 
> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:659)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:779)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> Caused by: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.token.SecretManager$InvalidToken: token (kms-dt 
> owner=packer, renewer=packer, realUser=, issueDate=1615146155993, 
> maxDate=1615750955993, sequenceNumber=1, masterKeyId=2) can't be found in 
> cache
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

[jira] [Created] (HADOOP-17569) Building native code fails on Fedora 33

2021-03-07 Thread Kengo Seki (Jira)
Kengo Seki created HADOOP-17569:
---

 Summary: Building native code fails on Fedora 33
 Key: HADOOP-17569
 URL: https://issues.apache.org/jira/browse/HADOOP-17569
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, common
Reporter: Kengo Seki


I tried to build native code on Fedora 33, in which glibc 2.32 is installed by 
default, but it failed with the following error.
{code:java}
$ cat /etc/redhat-release 
Fedora release 33 (Thirty Three)
$ sudo dnf info --installed glibc
Installed Packages
Name : glibc
Version  : 2.32
Release  : 1.fc33
Architecture : x86_64
Size : 17 M
Source   : glibc-2.32-1.fc33.src.rpm
Repository   : @System
From repo: anaconda
Summary  : The GNU libc libraries
URL  : http://www.gnu.org/software/glibc/
License  : LGPLv2+ and LGPLv2+ with exceptions and GPLv2+ and GPLv2+ with 
exceptions and BSD and Inner-Net and ISC and Public Domain and GFDL
Description  : The glibc package contains standard libraries which are used by
 : multiple programs on the system. In order to save disk space and
 : memory, as well as to make upgrading easier, common system code 
is
 : kept in one place and shared between programs. This particular 
package
 : contains the most important sets of shared libraries: the 
standard C
 : library and the standard math library. Without these two 
libraries, a
 : Linux system will not function.

$ mvn clean compile -Pnative

...

[INFO] Running make -j 1 VERBOSE=1
[WARNING] /usr/bin/cmake 
-S/home/vagrant/hadoop/hadoop-common-project/hadoop-common/src 
-B/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native 
--check-build-system CMakeFiles/Makefile.cmake 0
[WARNING] /usr/bin/cmake -E cmake_progress_start 
/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native/CMakeFiles
 
/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native//CMakeFiles/progress.marks
[WARNING] make  -f CMakeFiles/Makefile2 all
[WARNING] make[1]: Entering directory 
'/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native'
[WARNING] make  -f CMakeFiles/hadoop_static.dir/build.make 
CMakeFiles/hadoop_static.dir/depend
[WARNING] make[2]: Entering directory 
'/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native'
[WARNING] cd 
/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native && 
/usr/bin/cmake -E cmake_depends "Unix Makefiles" 
/home/vagrant/hadoop/hadoop-common-project/hadoop-common/src 
/home/vagrant/hadoop/hadoop-common-project/hadoop-common/src 
/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native 
/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native 
/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native/CMakeFiles/hadoop_static.dir/DependInfo.cmake
 --color=
[WARNING] Dependee 
"/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native/CMakeFiles/hadoop_static.dir/DependInfo.cmake"
 is newer than depender 
"/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native/CMakeFiles/hadoop_static.dir/depend.internal".
[WARNING] Dependee 
"/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native/CMakeFiles/CMakeDirectoryInformation.cmake"
 is newer than depender 
"/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native/CMakeFiles/hadoop_static.dir/depend.internal".
[WARNING] Scanning dependencies of target hadoop_static
[WARNING] make[2]: Leaving directory 
'/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native'
[WARNING] make  -f CMakeFiles/hadoop_static.dir/build.make 
CMakeFiles/hadoop_static.dir/build
[WARNING] make[2]: Entering directory 
'/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native'
[WARNING] [  2%] Building C object 
CMakeFiles/hadoop_static.dir/main/native/src/exception.c.o
[WARNING] /usr/bin/cc  
-I/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native/javah 
-I/home/vagrant/hadoop/hadoop-common-project/hadoop-common/src/main/native/src 
-I/home/vagrant/hadoop/hadoop-common-project/hadoop-common/src 
-I/home/vagrant/hadoop/hadoop-common-project/hadoop-common/src/src 
-I/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native 
-I/usr/lib/jvm/java-1.8.0/include -I/usr/lib/jvm/java-1.8.0/include/linux 
-I/home/vagrant/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util
 -g -O2 -Wall -pthread -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE -std=gnu99 -o 
CMakeFiles/hadoop_static.dir/main/native/src/exception.c.o -c 
/home/vagrant/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/exception.c
[WARNING] make[2]: Leaving directory 
'/home/vagrant/hadoop/hadoop-common-project/hadoop-common/target/native'
[WARNING] make[1]: Leaving directory 

[jira] [Created] (HADOOP-17568) Mapred/YARN job fails due to kms-dt can't be found in cache with LoadBalancingKMSClientProvider + Kerberos

2021-03-07 Thread Zbigniew Kostrzewa (Jira)
Zbigniew Kostrzewa created HADOOP-17568:
---

 Summary: Mapred/YARN job fails due to kms-dt can't be found in 
cache with LoadBalancingKMSClientProvider + Kerberos
 Key: HADOOP-17568
 URL: https://issues.apache.org/jira/browse/HADOOP-17568
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms, security
Affects Versions: 3.2.2
Reporter: Zbigniew Kostrzewa


I deployed Hadoop 3.2.2 cluster with KMS in HA using 
LoadBalancingKMSClientProvider with Kerberos authentication. KMS instances are 
configured with ZooKeeper for storing the shared secret.

I have created an encryption key and an encryption zone in `/test` directory 
and executed `randomtextwriter` from mapreduce examples passing it a 
sub-directory in the encryption zone:
{code:java}
hadoop jar hadoop-mapreduce-examples-3.2.2.jar randomtextwriter 
/test/randomtextwriter
{code}
Unfortunately the job keeps failing with errors like:
{code:java}
java.io.IOException: 
org.apache.hadoop.security.authentication.client.AuthenticationException: 
org.apache.hadoop.security.token.SecretManager$InvalidToken: token (kms-dt 
owner=packer, renewer=packer, realUser=, issueDate=1615146155993, 
maxDate=1615750955993, sequenceNumber=1, masterKeyId=2) can't be found in cache
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:363)
at 
org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
at 
org.apache.hadoop.hdfs.HdfsKMSUtil.decryptEncryptedDataEncryptionKey(HdfsKMSUtil.java:212)
at 
org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:972)
at 
org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:952)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:536)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:530)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:544)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:471)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1125)
at 
org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1168)
at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:285)
at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:542)
at 
org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getSequenceWriter(SequenceFileOutputFormat.java:64)
at 
org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getRecordWriter(SequenceFileOutputFormat.java:75)
at 
org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:659)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:779)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Caused by: 
org.apache.hadoop.security.authentication.client.AuthenticationException: 
org.apache.hadoop.security.token.SecretManager$InvalidToken: token (kms-dt 
owner=packer, renewer=packer, realUser=, issueDate=1615146155993, 
maxDate=1615750955993, sequenceNumber=1, masterKeyId=2) can't be found in cache
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:154)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:592)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:540)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:833)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:356)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:352)
at 

[GitHub] [hadoop] lamber-ken edited a comment on pull request #2751: HDFS-15882. Fix incorrectly initializing RandomAccessFile based on configuration options

2021-03-07 Thread GitBox


lamber-ken edited a comment on pull request #2751:
URL: https://github.com/apache/hadoop/pull/2751#issuecomment-792305475


   I think the test failures are not related to the patch, and the failed tests 
succeeded on my local computer.
   
   
![image](https://user-images.githubusercontent.com/20113411/110247557-ca959a00-7fa7-11eb-9443-578c0596589e.png)
   
   
![image](https://user-images.githubusercontent.com/20113411/110246922-95d41380-7fa4-11eb-91f0-fd70c07a3f50.png)
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] lamber-ken commented on pull request #2751: HDFS-15882. Fix incorrectly initializing RandomAccessFile based on configuration options

2021-03-07 Thread GitBox


lamber-ken commented on pull request #2751:
URL: https://github.com/apache/hadoop/pull/2751#issuecomment-792305475


   I think the test failures are not related to the patch.
   
![image](https://user-images.githubusercontent.com/20113411/110246922-95d41380-7fa4-11eb-91f0-fd70c07a3f50.png)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut edited a comment on pull request #2739: HDFS-15870. Remove unused configuration dfs.namenode.stripe.min

2021-03-07 Thread GitBox


tomscut edited a comment on pull request #2739:
URL: https://github.com/apache/hadoop/pull/2739#issuecomment-792167093


   Hi @tasanuma , I found the reason here: 
[cannot-see-contributions-after-commit-code](https://github.community/t/cannot-see-contributions-after-commit-code/166210/3).
   
   I set the wrong email address for 1 pull request. Can I correct it? 
Looking forward to your reply. Thanks a lot.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut opened a new pull request #2752: HDFS-15883. Add a metric BlockReportQueueFullCount

2021-03-07 Thread GitBox


tomscut opened a new pull request #2752:
URL: https://github.com/apache/hadoop/pull/2752


   JIRA: [HDFS-15883](https://issues.apache.org/jira/browse/HDFS-15883)
   
   Add a metric that reflects the number of times the block report queue is full



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] lamber-ken commented on pull request #2751: HDFS-15882. Fix incorrectly initializing RandomAccessFile based on configuration options

2021-03-07 Thread GitBox


lamber-ken commented on pull request #2751:
URL: https://github.com/apache/hadoop/pull/2751#issuecomment-792259640


   Thanks @Hexiaoqiao @leosunli 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] leosunli commented on pull request #2751: HDFS-15882. Fix incorrectly initializing RandomAccessFile based on configuration options

2021-03-07 Thread GitBox


leosunli commented on pull request #2751:
URL: https://github.com/apache/hadoop/pull/2751#issuecomment-792256266


   LGTM



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15882) Upgrade maven-shade-plugin from 2.4.3 to 3.2.0

2021-03-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15882?focusedWorklogId=561915&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-561915
 ]

ASF GitHub Bot logged work on HADOOP-15882:
---

Author: ASF GitHub Bot
Created on: 07/Mar/21 08:04
Start Date: 07/Mar/21 08:04
Worklog Time Spent: 10m 
  Work Description: lamber-ken commented on pull request #2751:
URL: https://github.com/apache/hadoop/pull/2751#issuecomment-792235595


   hi @aajisaka, please take a look when you're free, thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 561915)
Time Spent: 20m  (was: 10m)

> Upgrade maven-shade-plugin from 2.4.3 to 3.2.0
> --
>
> Key: HADOOP-15882
> URL: https://issues.apache.org/jira/browse/HADOOP-15882
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.0, 3.0.4, 3.3.0, 3.1.2
>
> Attachments: HADOOP-15882.1.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> While working on HADOOP-15815, we have faced a shaded-client error. Please 
> see [~bharatviswa]'s comment 
> [here|https://issues.apache.org/jira/browse/HADOOP-15815?focusedCommentId=16662718&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16662718].
> MSHADE-242 and MSHADE-258 are needed to fix it. Let's upgrade 
> maven-shade-plugin to 3.1.0 or later.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] lamber-ken commented on pull request #2751: HADOOP-15882. Fix incorrectly initializing RandomAccessFile based on configuration options

2021-03-07 Thread GitBox


lamber-ken commented on pull request #2751:
URL: https://github.com/apache/hadoop/pull/2751#issuecomment-792235595


   hi @aajisaka, please take a look when you're free, thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15882) Upgrade maven-shade-plugin from 2.4.3 to 3.2.0

2021-03-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-15882:

Labels: pull-request-available  (was: )

> Upgrade maven-shade-plugin from 2.4.3 to 3.2.0
> --
>
> Key: HADOOP-15882
> URL: https://issues.apache.org/jira/browse/HADOOP-15882
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.0, 3.0.4, 3.3.0, 3.1.2
>
> Attachments: HADOOP-15882.1.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> While working on HADOOP-15815, we have faced a shaded-client error. Please 
> see [~bharatviswa]'s comment 
> [here|https://issues.apache.org/jira/browse/HADOOP-15815?focusedCommentId=16662718&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16662718].
> MSHADE-242 and MSHADE-258 are needed to fix it. Let's upgrade 
> maven-shade-plugin to 3.1.0 or later.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15882) Upgrade maven-shade-plugin from 2.4.3 to 3.2.0

2021-03-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15882?focusedWorklogId=561914&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-561914
 ]

ASF GitHub Bot logged work on HADOOP-15882:
---

Author: ASF GitHub Bot
Created on: 07/Mar/21 08:02
Start Date: 07/Mar/21 08:02
Worklog Time Spent: 10m 
  Work Description: lamber-ken opened a new pull request #2751:
URL: https://github.com/apache/hadoop/pull/2751


   ## ISSUE
   https://issues.apache.org/jira/browse/HDFS-15882
   
   ## NOTICE
   
   - `rw` Open for reading and writing.  If the file does not already exist 
then an attempt will be made to create it.
   - `rws` Require that every update to the file's content or metadata be 
written synchronously to the underlying storage device. 
   
   From the literal meaning of this variable `shouldSyncWritesAndSkipFsync`, we 
should use `rws` when shouldSyncWritesAndSkipFsync is false.
   
   We use SATA disks to store the JournalNode's data. Whether the 
`shouldSyncWritesAndSkipFsync` variable is true or false makes no difference to 
RPC performance; this is caused by initializing the RandomAccessFile incorrectly.
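   
   As a sketch of the mechanism being discussed (not this patch's actual change — which mode should correspond to which flag value is exactly what the issue debates):
   
   ```java
   import java.io.File;
   import java.io.IOException;
   import java.io.RandomAccessFile;

   // Illustrative only: shows how the mode string changes RandomAccessFile
   // behavior. This follows the literal reading of the flag name; the mapping
   // the JournalNode should actually use is what HDFS-15882 is about.
   class EditLogFileOpener {

     static RandomAccessFile open(File file, boolean shouldSyncWritesAndSkipFsync)
         throws IOException {
       // "rw"  : writes may be buffered; an explicit fsync/FileChannel.force()
       //         is needed to guarantee durability.
       // "rws" : every update to content and metadata is written synchronously
       //         to the underlying storage device.
       String mode = shouldSyncWritesAndSkipFsync ? "rws" : "rw";
       return new RandomAccessFile(file, mode);
     }
   }
   ```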
   
   
   
   
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 561914)
Remaining Estimate: 0h
Time Spent: 10m

> Upgrade maven-shade-plugin from 2.4.3 to 3.2.0
> --
>
> Key: HADOOP-15882
> URL: https://issues.apache.org/jira/browse/HADOOP-15882
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.3.0, 3.1.2
>
> Attachments: HADOOP-15882.1.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> While working on HADOOP-15815, we have faced a shaded-client error. Please 
> see [~bharatviswa]'s comment 
> [here|https://issues.apache.org/jira/browse/HADOOP-15815?focusedCommentId=16662718&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16662718].
> MSHADE-242 and MSHADE-258 are needed to fix it. Let's upgrade 
> maven-shade-plugin to 3.1.0 or later.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] lamber-ken opened a new pull request #2751: HADOOP-15882. Fix incorrectly initializing RandomAccessFile based on configuration options

2021-03-07 Thread GitBox


lamber-ken opened a new pull request #2751:
URL: https://github.com/apache/hadoop/pull/2751


   ## ISSUE
   https://issues.apache.org/jira/browse/HDFS-15882
   
   ## NOTICE
   
   - `rw` Open for reading and writing.  If the file does not already exist 
then an attempt will be made to create it.
   - `rws` Require that every update to the file's content or metadata be 
written synchronously to the underlying storage device. 
   
   From the literal meaning of this variable `shouldSyncWritesAndSkipFsync`, we 
should use `rws` when shouldSyncWritesAndSkipFsync is false.
   
   We use SATA disks to store the JournalNode's data. Whether the 
`shouldSyncWritesAndSkipFsync` variable is true or false makes no difference to 
RPC performance; this is caused by initializing the RandomAccessFile incorrectly.
   
   
   
   
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org