[jira] [Created] (HDFS-16590) Fix Junit Test Deprecated assertThat

2022-05-23 Thread fanshilun (Jira)
fanshilun created HDFS-16590:


 Summary: Fix Junit Test Deprecated assertThat
 Key: HDFS-16590
 URL: https://issues.apache.org/jira/browse/HDFS-16590
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: fanshilun
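
For context, a minimal sketch of the migration this summary likely refers to 
(the issue has no description, so the exact scope is an assumption): JUnit 
4.13 deprecated org.junit.Assert#assertThat in favor of Hamcrest's 
org.hamcrest.MatcherAssert#assertThat, which is a drop-in replacement.
{code:java}
import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;

import org.junit.Test;

public class AssertThatMigrationExample {
  @Test
  public void testMigratedAssertThat() {
    // Before (deprecated since JUnit 4.13):
    //   org.junit.Assert.assertThat(1 + 1, is(2));
    // After, using Hamcrest's MatcherAssert directly:
    assertThat(1 + 1, is(2));
  }
}
{code}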






--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16590) Fix Junit Test Deprecated assertThat

2022-05-24 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17541416#comment-17541416
 ] 

fanshilun commented on HDFS-16590:
--

Hi [~ste...@apache.org], thank you very much!

> Fix Junit Test Deprecated assertThat
> 
>
> Key: HDFS-16590
> URL: https://issues.apache.org/jira/browse/HDFS-16590
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16597) Improve RouterRpcServer#reload With Lambda

2022-05-26 Thread fanshilun (Jira)
fanshilun created HDFS-16597:


 Summary: Improve RouterRpcServer#reload With Lambda
 Key: HDFS-16597
 URL: https://issues.apache.org/jira/browse/HDFS-16597
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: rbf
Affects Versions: 3.4.0
Reporter: fanshilun
Assignee: fanshilun






--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16597) Improve RouterRpcServer#reload With Lambda

2022-05-26 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16597:
-
Description: 
When reading the code, I found that RouterRpcServer#reload uses a verbose 
anonymous Callable to submit its task:

 RouterRpcServer#reload
{code:java}
public ListenableFuture<DatanodeInfo[]> reload(
    final DatanodeReportType type, DatanodeInfo[] oldValue)
    throws Exception {
  return executorService.submit(new Callable<DatanodeInfo[]>() {
    @Override
    public DatanodeInfo[] call() throws Exception {
      return load(type);
    }
  });
} {code}
This would be cleaner as a lambda.
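
For illustration, a minimal sketch of the lambda form (Callable is a 
functional interface, so the anonymous class collapses to a single 
expression; this assumes executorService is a Guava ListeningExecutorService, 
which is what makes submit() return a ListenableFuture):
{code:java}
public ListenableFuture<DatanodeInfo[]> reload(
    final DatanodeReportType type, DatanodeInfo[] oldValue)
    throws Exception {
  // The compiler infers Callable<DatanodeInfo[]> as the lambda's target
  // type from submit()'s parameter, so no cast or @Override is needed.
  return executorService.submit(() -> load(type));
} {code}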

> Improve RouterRpcServer#reload With Lambda
> --
>
> Key: HDFS-16597
> URL: https://issues.apache.org/jira/browse/HDFS-16597
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>
> When reading the code, I found that RouterRpcServer#reload uses a verbose 
> anonymous Callable to submit its task:
>  RouterRpcServer#reload
> {code:java}
> public ListenableFuture<DatanodeInfo[]> reload(
>     final DatanodeReportType type, DatanodeInfo[] oldValue)
>     throws Exception {
>   return executorService.submit(new Callable<DatanodeInfo[]>() {
>     @Override
>     public DatanodeInfo[] call() throws Exception {
>       return load(type);
>     }
>   });
> } {code}
> This would be cleaner as a lambda.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16597) Improve RouterRpcServer#reload With Lambda

2022-05-26 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16597:
-
Issue Type: Improvement  (was: Bug)

> Improve RouterRpcServer#reload With Lambda
> --
>
> Key: HDFS-16597
> URL: https://issues.apache.org/jira/browse/HDFS-16597
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When reading the code, I found that RouterRpcServer#reload uses a verbose 
> anonymous Callable to submit its task:
>  RouterRpcServer#reload
> {code:java}
> public ListenableFuture<DatanodeInfo[]> reload(
>     final DatanodeReportType type, DatanodeInfo[] oldValue)
>     throws Exception {
>   return executorService.submit(new Callable<DatanodeInfo[]>() {
>     @Override
>     public DatanodeInfo[] call() throws Exception {
>       return load(type);
>     }
>   });
> } {code}
> This would be cleaner as a lambda.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16599) Fix typo in RouterRpcClient

2022-05-27 Thread fanshilun (Jira)
fanshilun created HDFS-16599:


 Summary: Fix typo in RouterRpcClient
 Key: HDFS-16599
 URL: https://issues.apache.org/jira/browse/HDFS-16599
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.4.0
Reporter: fanshilun
Assignee: fanshilun






--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16597) RBF: Improve RouterRpcServer#reload With Lambda

2022-05-27 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17543409#comment-17543409
 ] 

fanshilun commented on HDFS-16597:
--

Hi [~ayushtkn] [~elgoiri], thank you very much!

> RBF: Improve RouterRpcServer#reload With Lambda
> ---
>
> Key: HDFS-16597
> URL: https://issues.apache.org/jira/browse/HDFS-16597
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When reading the code, I found that RouterRpcServer#reload uses a verbose 
> anonymous Callable to submit its task:
>  RouterRpcServer#reload
> {code:java}
> public ListenableFuture<DatanodeInfo[]> reload(
>     final DatanodeReportType type, DatanodeInfo[] oldValue)
>     throws Exception {
>   return executorService.submit(new Callable<DatanodeInfo[]>() {
>     @Override
>     public DatanodeInfo[] call() throws Exception {
>       return load(type);
>     }
>   });
> } {code}
> This would be cleaner as a lambda.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-16597) RBF: Improve RouterRpcServer#reload With Lambda

2022-05-27 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun resolved HDFS-16597.
--
Resolution: Fixed

> RBF: Improve RouterRpcServer#reload With Lambda
> ---
>
> Key: HDFS-16597
> URL: https://issues.apache.org/jira/browse/HDFS-16597
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When reading the code, I found that RouterRpcServer#reload uses a verbose 
> anonymous Callable to submit its task:
>  RouterRpcServer#reload
> {code:java}
> public ListenableFuture<DatanodeInfo[]> reload(
>     final DatanodeReportType type, DatanodeInfo[] oldValue)
>     throws Exception {
>   return executorService.submit(new Callable<DatanodeInfo[]>() {
>     @Override
>     public DatanodeInfo[] call() throws Exception {
>       return load(type);
>     }
>   });
> } {code}
> This would be cleaner as a lambda.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-16599) Fix typo in RouterRpcClient

2022-05-27 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16599 started by fanshilun.

> Fix typo in RouterRpcClient
> ---
>
> Key: HDFS-16599
> URL: https://issues.apache.org/jira/browse/HDFS-16599
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16599) Fix typo in hadoop-hdfs-rbf module

2022-05-27 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16599:
-
Summary: Fix typo in hadoop-hdfs-rbf module  (was: Fix typo in 
RouterRpcClient)

> Fix typo in hadoop-hdfs-rbf module
> -
>
> Key: HDFS-16599
> URL: https://issues.apache.org/jira/browse/HDFS-16599
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15646) Track failing tests in HDFS

2022-05-28 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17543477#comment-17543477
 ] 

fanshilun commented on HDFS-15646:
--

JUnit test failures are a big source of trouble for developers, as in the 
following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
failed.
{code}
Error Details:
Some writers didn't complete in expected runtime! Current writer state:
[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find the opposite:

[pr-4339|https://github.com/apache/hadoop/pull/4339]

Its test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
passed.

All Tests:
|[Test name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.


> Track failing tests in HDFS
> ---
>
> Key: HDFS-15646
> URL: https://issues.apache.org/jira/browse/HDFS-15646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ahmed Hussein
>Priority: Blocker
>
> There are several unit tests that have been consistently failing on Yetus 
> for a long period of time.
>  The list keeps growing, and it is driving the repository into an unstable 
> state. Qbt reports more than *40 failing unit tests* on average.
> Personally, over the last week, with every submitted patch, I have had to 
> spend considerable time looking at the same stack traces to double-check 
> whether or not the patch contributes to those failures.
> I found out that the majority of those tests had been failing for quite 
> some time, but +no Jiras were filed+.
> The main problem with those consistent failures is that they have side 
> effects on the runtime of the other JUnit tests by sucking up resources 
> such as memory and ports.
> {{StripedFile}} and {{EC}} tests in particular always show up in the list 
> of bad tests.
>  I looked at those tests and they certainly need some improvements (i.e., 
> HDFS-15459). Is anyone interested in those test cases? Can we just turn 
> them off?
> I would like to give a heads-up that we need more collaboration to enforce 
> the stability of the code base.
>  * For all developers: please {color:#ff}file a Jira once you see a failing 
> test, whether or not it is related to your patch{color}. This gives a 
> heads-up to other developers about the potential failures. Please do not 
> stop at commenting on your patch "_+this is unrelated to my work+_".
>  * Volunteer to dedicate more time to fixing flaky tests.
>  * Periodically, make sure that the list of failing tests does not exceed a 
> certain number. We have Qbt reports to monitor that, but there is no 
> follow-up on their status.
>  * We should consider aggressive strategies, such as blocking any merges 
> until the code is brought back to stability.
>  * We need a clear and well-defined process for addressing Yetus issues: 
> configuration, investigating running out of memory, slowness, etc.
>  * Turn off the unit tests within modules that are not being actively used 
> in the community (e.g., EC, stripedFiles).
>  
> CC: [~aajisaka], [~elgoiri], [~kihwal], [~daryn], [~weichiu]
> Do you guys have any thoughts on the current status of HDFS?
>  
> +The following is a quick list of failing JUnit tests from the Qbt reports:+
>  
> !https://ci-hadoop.apache.org/static/0ead8630/images/16x16/document_add.png! 
> [org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSProviderCaching|https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/testReport/org.apache.hadoop.crypto.key.kms.server/TestKMS/testKMSProviderCaching/] 
> 1.5 sec [1|https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/299/]
> !https://ci-hadoop.apache.org/static/0ead8630/images/16x16/document_add.png!  
>

[jira] [Comment Edited] (HDFS-15646) Track failing tests in HDFS

2022-05-28 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17543477#comment-17543477
 ] 

fanshilun edited comment on HDFS-15646 at 5/28/22 2:39 PM:
---

JUnit test failures are a big source of trouble for developers, as in the 
following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
failed.
{code}
Error Details:
Some writers didn't complete in expected runtime! Current writer state:
[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find the opposite:

[pr-4339|https://github.com/apache/hadoop/pull/4339]

Its test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
passed.

All Tests:
|[Test name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.



was (Author: slfan1989):
JUnit test failures are a big source of trouble for developers, as in the 
following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
failed.
{code}
Error Details:
Some writers didn't complete in expected runtime! Current writer state:
[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find the opposite:

[pr-4339|https://github.com/apache/hadoop/pull/4339]

Its test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
passed.

All Tests:
|[Test name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.


> Track failing tests in HDFS
> ---
>
> Key: HDFS-15646
> URL: https://issues.apache.org/jira/browse/HDFS-15646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ahmed Hussein
>Priority: Blocker
>
> There are several unit tests that have been consistently failing on Yetus 
> for a long period of time.
>  The list keeps growing, and it is driving the repository into an unstable 
> state. Qbt reports more than *40 failing unit tests* on average.
> Personally, over the last week, with every submitted patch, I have had to 
> spend considerable time looking at the same stack traces to double-check 
> whether or not the patch contributes to those failures.
> I found out that the majority of those tests had been failing for quite 
> some time, but +no Jiras were filed+.
> The main problem with those consistent failures is that they have side 
> effects on the runtime of the other JUnit tests by sucking up resources 
> such as memory and ports.
> {{StripedFile}} and {{EC}} tests in particular always show up in the list 
> of bad tests.
>  I looked at those tests and they certainly need some improvements (i.e., 
> HDFS-15459). Is anyone interested in those test cases?

[jira] [Comment Edited] (HDFS-15646) Track failing tests in HDFS

2022-05-28 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17543477#comment-17543477
 ] 

fanshilun edited comment on HDFS-15646 at 5/28/22 2:40 PM:
---

JUnit test failures are a big source of trouble for developers, as in the 
following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
failed.
{code}
Error Details:
Some writers didn't complete in expected runtime! Current writer state:
[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find the opposite:

[pr-4339|https://github.com/apache/hadoop/pull/4339]

Its test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
passed.

All Tests:
|[Test name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.



was (Author: slfan1989):
JUnit test failures are a big source of trouble for developers, as in the 
following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
failed.
{code}
Error Details:
Some writers didn't complete in expected runtime! Current writer state:
[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find the opposite:

[pr-4339|https://github.com/apache/hadoop/pull/4339]

Its test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
passed.

All Tests:
|[Test name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.


> Track failing tests in HDFS
> ---
>
> Key: HDFS-15646
> URL: https://issues.apache.org/jira/browse/HDFS-15646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ahmed Hussein
>Priority: Blocker
>
> There are several unit tests that have been consistently failing on Yetus 
> for a long period of time.
>  The list keeps growing, and it is driving the repository into an unstable 
> state. Qbt reports more than *40 failing unit tests* on average.
> Personally, over the last week, with every submitted patch, I have had to 
> spend considerable time looking at the same stack traces to double-check 
> whether or not the patch contributes to those failures.
> I found out that the majority of those tests had been failing for quite 
> some time, but +no Jiras were filed+.
> The main problem with those consistent failures is that they have side 
> effects on the runtime of the other JUnit tests by sucking up resources 
> such as memory and ports.
> {{StripedFile}} and {{EC}} tests in particular always show up in the list 
> of bad tests.
>  I looked at those tests and they certainly need some improvements (i.e., 
> HDFS-15459). Is anyone interested in those test cases?

[jira] [Comment Edited] (HDFS-15646) Track failing tests in HDFS

2022-05-28 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17543477#comment-17543477
 ] 

fanshilun edited comment on HDFS-15646 at 5/28/22 2:42 PM:
---

JUnit test failures are a big source of trouble for developers, as in the 
following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
failed.
{code}
Error Details:
Some writers didn't complete in expected runtime! Current writer state:
[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find the opposite:

[pr-4339|https://github.com/apache/hadoop/pull/4339]

Its test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
passed.

All Tests:
|[Test name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.
{panel:title=My title}
Caused by: org.apache.maven.surefire.booter.SurefireBooterForkException: The 
forked VM terminated without properly saying goodbye. VM crash or System.exit 
called? Command was /bin/sh -c cd 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs
 && {color:#FF}/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java 
-Xmx2048m{color} -XX:+HeapDumpOnOutOfMemoryError 
-DminiClusterDedicatedDirs=true -jar 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter3248290060089244263.jar
 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire
 2022-05-27T03-04-44_807-jvmRun2 surefire7444566443427356222tmp 
surefire_6685537493283452808462tmp Error occurred in starting fork, check 
output in log Process Exit Code: 1 Crashed tests:{panel}


was (Author: slfan1989):
JUnit test failures are a big source of trouble for developers, as in the 
following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
failed.
{code}
Error Details:
Some writers didn't complete in expected runtime! Current writer state:
[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find the opposite:

[pr-4339|https://github.com/apache/hadoop/pull/4339]

Its test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
passed.

All Tests:
|[Test name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.


> Track failing tests in HDFS
> ---
>
> Key: HDFS-15646
> URL: https://issues.apache.org/jira/browse/HDFS-15646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ahmed Hussein
>Priority: Blocker
>
> There are several unit tests that have been consistently failing on Yetus 
> for a long period of time.

[jira] [Comment Edited] (HDFS-15646) Track failing tests in HDFS

2022-05-28 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17543477#comment-17543477
 ] 

fanshilun edited comment on HDFS-15646 at 5/28/22 2:46 PM:
---

JUnit test failures are a big source of trouble for developers, as in the 
following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
failed.
{code}
Error Details:
Some writers didn't complete in expected runtime! Current writer state:
[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find the opposite:

[pr-4339|https://github.com/apache/hadoop/pull/4339]

Its test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
passed.

All Tests:
|[Test name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.

Is it possible that the memory allocated to the build is too small? Can the 
memory used by some of the builds be increased?
{panel:title=patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt}
Caused by: org.apache.maven.surefire.booter.SurefireBooterForkException: The 
forked VM terminated without properly saying goodbye. VM crash or System.exit 
called? Command was /bin/sh -c cd 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs
 && {color:#ff}/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java 
-Xmx2048m{color} -XX:+HeapDumpOnOutOfMemoryError 
-DminiClusterDedicatedDirs=true -jar 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter3248290060089244263.jar
 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire
 2022-05-27T03-04-44_807-jvmRun2 surefire7444566443427356222tmp 
surefire_6685537493283452808462tmp Error occurred in starting fork, check 
output in log Process Exit Code: 1 Crashed tests:
{panel}
[~elgoiri] [~aajisaka] [~ayushtkn] [~stev...@iseran.com] I hope you can help 
solve the related problems or offer some suggestions.



was (Author: slfan1989):
JUnit test failures are a big source of trouble for developers, as in the 
following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
failed.
{code}
Error Details:
Some writers didn't complete in expected runtime! Current writer state:
[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find the opposite:

[pr-4339|https://github.com/apache/hadoop/pull/4339]

Its test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
passed.

All Tests:
|[Test name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.
{panel:title=My title}
Caused by: org.apache.maven.surefire.booter.SurefireBooterFo

[jira] [Comment Edited] (HDFS-15646) Track failing tests in HDFS

2022-05-28 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17543477#comment-17543477
 ] 

fanshilun edited comment on HDFS-15646 at 5/29/22 1:04 AM:
---

JUnit test failures are a big source of trouble for developers, as in the 
following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
failed.
{code}
Error Details:
Some writers didn't complete in expected runtime! Current writer state:
[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find the opposite:

[https://github.com/apache/hadoop/pull/4339]

Its test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
passed.

All Tests:
|[Test name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.

Is it possible that the memory allocated to the build is too small? Can the 
memory used by some of the builds be increased?
{panel:title=patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt}
Caused by: org.apache.maven.surefire.booter.SurefireBooterForkException: The 
forked VM terminated without properly saying goodbye. VM crash or System.exit 
called? Command was /bin/sh -c cd 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs
 && {color:#ff}/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java 
-Xmx2048m{color} -XX:+HeapDumpOnOutOfMemoryError 
-DminiClusterDedicatedDirs=true -jar 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter3248290060089244263.jar
 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire
 2022-05-27T03-04-44_807-jvmRun2 surefire7444566443427356222tmp 
surefire_6685537493283452808462tmp Error occurred in starting fork, check 
output in log Process Exit Code: 1 Crashed tests:
{panel}
[~elgoiri] [~aajisaka] [~ayushtkn] [~stev...@iseran.com] I hope you can help 
solve the related problems or offer some suggestions.



was (Author: slfan1989):
JUnit test failures are a big source of trouble for developers, as in the 
following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
failed.
{code}
Error Details:
Some writers didn't complete in expected runtime! Current writer state:
[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find the opposite:

[pr-4339|https://github.com/apache/hadoop/pull/4339]

Its test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
passed.

All Tests:
|[Test name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.

Is it possible that the memory allocated to the build is too small? Can the 
memory used by some of the builds be increased?

[jira] [Comment Edited] (HDFS-15646) Track failing tests in HDFS

2022-05-28 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17543477#comment-17543477
 ] 

fanshilun edited comment on HDFS-15646 at 5/29/22 1:05 AM:
---

JUnit test failures are a big source of trouble for developers, as in the 
following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
failed.
{code}
Error Details:
Some writers didn't complete in expected runtime! Current writer state:
[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find the opposite:

[https://github.com/apache/hadoop/pull/4339]

Its test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
passed.

All Tests:
|[Test name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the compile environment.

Is it possible that the memory allocated to the build is too small? Can the 
memory used by some of the builds be increased?
{panel:title=patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt}
Caused by: org.apache.maven.surefire.booter.SurefireBooterForkException: The 
forked VM terminated without properly saying goodbye. VM crash or System.exit 
called? Command was /bin/sh -c cd 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs
 && {color:#ff}/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java 
-Xmx2048m{color} -XX:+HeapDumpOnOutOfMemoryError 
-DminiClusterDedicatedDirs=true -jar 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter3248290060089244263.jar
 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4349/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/target/surefire
 2022-05-27T03-04-44_807-jvmRun2 surefire7444566443427356222tmp 
surefire_6685537493283452808462tmp Error occurred in starting fork, check 
output in log Process Exit Code: 1 Crashed tests:
{panel}


was (Author: slfan1989):
JUnit test failures are a big source of trouble for developers, as in the 
following example:

[pr-4349|https://github.com/apache/hadoop/pull/4349]
The test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
failed.
{code}
Error Details:
Some writers didn't complete in expected runtime! Current writer state:
[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 41
  done: false
, Circular Writer:
  directory: /test-1
  target length: 50
  current item: 39
  done: false
, Circular Writer:
  directory: /test-2
  target length: 50
  current item: 39
  done: false
] expected:<0> but was:<3> {code}
But I was surprised to find the opposite:

[https://github.com/apache/hadoop/pull/4339]

Its test report shows that 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites 
passed.

All Tests:
|[Test name|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Duration|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|[Status|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/#]|
|[testCircularLinkedListWrites|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4339/4/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestSeveralNameNodes/testCircularLinkedListWrites]|1 min 48 sec|Passed|

I now suspect that the JUnit test failure is caused by the Docker environment.

Is it possible that the memory allocated to the build is too small? Can the 
memory used by some of the builds be increased?
{panel:title=patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt}
Caused by: org.apache.maven.surefire.booter.Surefir

[jira] [Commented] (HDFS-13245) RBF: State store DBMS implementation

2022-05-28 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17543558#comment-17543558
 ] 

fanshilun commented on HDFS-13245:
--

I think this Jira can be closed; the related functionality has already been 
implemented in YARN-3663.

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Baolong Mao
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch, 
> HDFS-13245.009.patch, HDFS-13245.010.patch, HDFS-13245.011.patch, 
> HDFS-13245.012.patch
>
>
> Add a DBMS implementation for the State Store.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13245) RBF: State store DBMS implementation

2022-05-28 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-13245:
-
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> RBF: State store DBMS implementation
> 
>
> Key: HDFS-13245
> URL: https://issues.apache.org/jira/browse/HDFS-13245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Baolong Mao
>Assignee: Yiran Wu
>Priority: Major
> Attachments: HDFS-13245.001.patch, HDFS-13245.002.patch, 
> HDFS-13245.003.patch, HDFS-13245.004.patch, HDFS-13245.005.patch, 
> HDFS-13245.006.patch, HDFS-13245.007.patch, HDFS-13245.008.patch, 
> HDFS-13245.009.patch, HDFS-13245.010.patch, HDFS-13245.011.patch, 
> HDFS-13245.012.patch
>
>
> Add a DBMS implementation for the State Store.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16603) Improve Datanode HttpServer With Netty recommended method

2022-05-28 Thread fanshilun (Jira)
fanshilun created HDFS-16603:


 Summary: Improve Datanode HttpServer With Netty recommended method
 Key: HDFS-16603
 URL: https://issues.apache.org/jira/browse/HDFS-16603
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: fanshilun
Assignee: fanshilun
 Fix For: 3.4.0






--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-16603) Improve Datanode HttpServer With Netty recommended method

2022-05-28 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16603 started by fanshilun.

> Improve Datanode HttpServer With Netty recommended method
> -
>
> Key: HDFS-16603
> URL: https://issues.apache.org/jira/browse/HDFS-16603
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
> Fix For: 3.4.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16603) Improve DatanodeHttpServer With Netty recommended method

2022-05-28 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16603:
-
Summary: Improve DatanodeHttpServer With Netty recommended method  (was: 
Improve Datanode HttpServer With Netty recommended method)

> Improve DatanodeHttpServer With Netty recommended method
> 
>
> Key: HDFS-16603
> URL: https://issues.apache.org/jira/browse/HDFS-16603
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
> Fix For: 3.4.0
>
>
> When reading the code, I found that some of the methods in use are 
> deprecated due to the upgrade of the Netty components.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16603) Improve Datanode HttpServer With Netty recommended method

2022-05-28 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16603:
-
Description: When reading the code, I found that some of the methods in use 
are deprecated due to the upgrade of the Netty components.

> Improve Datanode HttpServer With Netty recommended method
> -
>
> Key: HDFS-16603
> URL: https://issues.apache.org/jira/browse/HDFS-16603
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
> Fix For: 3.4.0
>
>
> When reading the code, I found that some of the methods in use are 
> deprecated due to the upgrade of the Netty components.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16603) Improve DatanodeHttpServer With Netty recommended method

2022-05-28 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16603:
-
Description: 
When reading the code, I found that some of the methods in use are deprecated 
due to the upgrade of the Netty components.

{color:#172b4d}*1. DatanodeHttpServer#Constructor*{color}
{code:java}
// Both options are deprecated in Netty; WRITE_BUFFER_WATER_MARK is the
// recommended replacement.
@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK =
    valueOf("WRITE_BUFFER_HIGH_WATER_MARK");

@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK =
    valueOf("WRITE_BUFFER_LOW_WATER_MARK");

// Usage in the DatanodeHttpServer constructor:
this.httpServer.childOption(
    ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
    conf.getInt(
        DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
        DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));

this.httpServer.childOption(
    ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
    conf.getInt(
        DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
        DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));
{code}
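For illustration, a minimal sketch of the non-deprecated form (assuming 
Netty 4.1's WriteBufferWaterMark; this is a sketch, not necessarily the 
actual patch):
{code:java}
// WRITE_BUFFER_WATER_MARK replaces both deprecated options; the
// WriteBufferWaterMark constructor takes (low, high) in that order.
this.httpServer.childOption(
    ChannelOption.WRITE_BUFFER_WATER_MARK,
    new WriteBufferWaterMark(
        conf.getInt(DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
            DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT),
        conf.getInt(DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
            DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT))); {code}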
*2. Duplicate code*
{code:java}
ChannelFuture f = httpServer.bind(infoAddr);
try {
  f.syncUninterruptibly();
} catch (Throwable e) {
  if (e instanceof BindException) {
    throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
        infoAddr.getPort(), (SocketException) e);
  } else {
    throw e;
  }
}
httpAddress = (InetSocketAddress) f.channel().localAddress(); {code}
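A minimal sketch of how the duplicated bind-and-wrap logic could be factored 
into a helper (the method name bindAndGetLocalAddress and its exact signature 
are assumptions for illustration, not the actual change):
{code:java}
// Hypothetical helper: binds the bootstrap, waits for the bind to complete,
// wraps a BindException with host:port context, and returns the bound local
// address. Both the HTTP and HTTPS setup paths could then call it.
private static InetSocketAddress bindAndGetLocalAddress(
    ServerBootstrap server, InetSocketAddress infoAddr) throws IOException {
  ChannelFuture f = server.bind(infoAddr);
  try {
    f.syncUninterruptibly();
  } catch (Throwable e) {
    if (e instanceof BindException) {
      throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
          infoAddr.getPort(), (SocketException) e);
    }
    throw e;
  }
  return (InetSocketAddress) f.channel().localAddress();
} {code}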

  was:
When reading the code, I found that some of the methods in use are deprecated 
due to the upgrade of the Netty components.

{color:#172b4d}*1. DatanodeHttpServer#Constructor*{color}
{code:java}
// Both options are deprecated in Netty; WRITE_BUFFER_WATER_MARK is the
// recommended replacement.
@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK =
    valueOf("WRITE_BUFFER_HIGH_WATER_MARK");

@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK =
    valueOf("WRITE_BUFFER_LOW_WATER_MARK");

// Usage in the DatanodeHttpServer constructor:
this.httpServer.childOption(
    ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
    conf.getInt(
        DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
        DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));

this.httpServer.childOption(
    ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
    conf.getInt(
        DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
        DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));
{code}
*2. Duplicate code*
{code:java}
ChannelFuture f = httpServer.bind(infoAddr);
try {
  f.syncUninterruptibly();
} catch (Throwable e) {
  if (e instanceof BindException) {
    throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
        infoAddr.getPort(), (SocketException) e);
  } else {
    throw e;
  }
}
httpAddress = (InetSocketAddress) f.channel().localAddress(); {code}


> Improve DatanodeHttpServer With Netty recommended method
> 
>
> Key: HDFS-16603
> URL: https://issues.apache.org/jira/browse/HDFS-16603
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
> Fix For: 3.4.0
>
>
> When reading the code, I found that some of the methods in use are 
> deprecated due to the upgrade of the Netty components.
> {color:#172b4d}*1. DatanodeHttpServer#Constructor*{color}
> {code:java}
> // Both options are deprecated in Netty; WRITE_BUFFER_WATER_MARK is the
> // recommended replacement.
> @Deprecated
> public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK =
>     valueOf("WRITE_BUFFER_HIGH_WATER_MARK");
>
> @Deprecated
> public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK =
>     valueOf("WRITE_BUFFER_LOW_WATER_MARK");
>
> // Usage in the DatanodeHttpServer constructor:
> this.httpServer.childOption(
>     ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
>     conf.getInt(
>         DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
>         DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));
>
> this.httpServer.childOption(
>     ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
>     conf.getInt(
>         DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
>         DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));
> {code}
> *2. Duplicate code*
> {code:java}
> ChannelFuture f = httpServer.bind(infoAddr);
> try {
>   f.syncUninterruptibly();
> } catch (Throwable e) {
>   if (e instanceof BindException) {
>     throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
>         infoAddr.getPort(), (SocketException) e);
>   } else {
>     throw e;
>   }
> }
> httpAddress = (InetSocketAddress) f.channel().localAddress(); {code}

[jira] [Updated] (HDFS-16603) Improve DatanodeHttpServer With Netty recommended method

2022-05-28 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16603:
-
Description: 
When reading the code, I found that some of the methods in use are deprecated 
due to the upgrade of the Netty components.

{color:#172b4d}*1. DatanodeHttpServer#Constructor*{color}
{code:java}
// Both options are deprecated in Netty; WRITE_BUFFER_WATER_MARK is the
// recommended replacement.
@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK =
    valueOf("WRITE_BUFFER_HIGH_WATER_MARK");

@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK =
    valueOf("WRITE_BUFFER_LOW_WATER_MARK");

// Usage in the DatanodeHttpServer constructor:
this.httpServer.childOption(
    ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
    conf.getInt(
        DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
        DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));

this.httpServer.childOption(
    ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
    conf.getInt(
        DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
        DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));
{code}
*2. Duplicate code*
{code:java}
ChannelFuture f = httpServer.bind(infoAddr);
try {
  f.syncUninterruptibly();
} catch (Throwable e) {
  if (e instanceof BindException) {
    throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
        infoAddr.getPort(), (SocketException) e);
  } else {
    throw e;
  }
}
httpAddress = (InetSocketAddress) f.channel().localAddress(); {code}

  was:When reading the code, I found that some usage methods are outdated due 
to the upgrade of netty components.


> Improve DatanodeHttpServer With Netty recommended method
> 
>
> Key: HDFS-16603
> URL: https://issues.apache.org/jira/browse/HDFS-16603
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
> Fix For: 3.4.0
>
>
> When reading the code, I found that several of the APIs used here have been 
> deprecated since the Netty upgrade.
> {color:#172b4d}*1.DatanodeHttpServer#Constructor*{color}
>  
> {code:java}
> @Deprecated
> public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK = 
> valueOf("WRITE_BUFFER_HIGH_WATER_MARK"); 
> Deprecated. Use WRITE_BUFFER_WATER_MARK
> @Deprecated
> public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK = 
> valueOf("WRITE_BUFFER_LOW_WATER_MARK");
> Deprecated. Use WRITE_BUFFER_WATER_MARK
> -
> this.httpServer.childOption(
>           ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
>           conf.getInt(
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));
> this.httpServer.childOption(
>           ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
>           conf.getInt(
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));
> {code}
>  
> *2.Duplicate code* 
> {code:java}
> ChannelFuture f = httpServer.bind(infoAddr);
>       try {
>         f.syncUninterruptibly();
>       } catch (Throwable e) {
>         if (e instanceof BindException) {
>           throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
>               infoAddr.getPort(), (SocketException) e);
>         } else {
>           throw e;
>         }
>       }
>       httpAddress = (InetSocketAddress) f.channel().localAddress(); {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16603) Improve DatanodeHttpServer With Netty recommended method

2022-05-28 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16603:
-
Description: 
When reading the code, I found that several of the APIs used here have been 
deprecated since the Netty upgrade.

{color:#172b4d}*1.DatanodeHttpServer#Constructor*{color}
{code:java}
@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK = 
valueOf("WRITE_BUFFER_HIGH_WATER_MARK"); 
Deprecated. Use WRITE_BUFFER_WATER_MARK

@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK = 
valueOf("WRITE_BUFFER_LOW_WATER_MARK");
Deprecated. Use WRITE_BUFFER_WATER_MARK
-

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));

{code}
*2.Duplicate code* 
{code:java}
ChannelFuture f = httpServer.bind(infoAddr);
try {
 f.syncUninterruptibly();
} catch (Throwable e) {
  if (e instanceof BindException) {
   throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
   infoAddr.getPort(), (SocketException) e);
 } else {
   throw e;
 }
}
httpAddress = (InetSocketAddress) f.channel().localAddress();
LOG.info("Listening HTTP traffic on " + httpAddress);{code}
*3.io.netty.bootstrap.ChannelFactory Deprecated*
{code:java}
/** @deprecated */
@Deprecated
public interface ChannelFactory<T> {
    T newChannel();
}{code}
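A sketch of the recommended direction (illustrative only): instead of 
implementing the deprecated io.netty.bootstrap.ChannelFactory, pass the 
channel type, or the non-deprecated io.netty.channel.ChannelFactory, to the 
bootstrap:
{code:java}
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.socket.nio.NioServerSocketChannel;

ServerBootstrap b = new ServerBootstrap();
// Option 1: let the bootstrap create the channel reflectively.
b.channel(NioServerSocketChannel.class);
// Option 2 (alternative to Option 1 -- a bootstrap accepts only one factory):
// b.channelFactory(NioServerSocketChannel::new);
{code}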

  was:
When reading the code, I found that some usage methods are outdated due to the 
upgrade of netty components.

{color:#172b4d}*1.DatanodeHttpServer#Constructor*{color}
{code:java}
@Deprecated
public static final ChannelOption WRITE_BUFFER_HIGH_WATER_MARK = 
valueOf("WRITE_BUFFER_HIGH_WATER_MARK"); 
Deprecated. Use WRITE_BUFFER_WATER_MARK

@Deprecated
public static final ChannelOption WRITE_BUFFER_LOW_WATER_MARK = 
valueOf("WRITE_BUFFER_LOW_WATER_MARK");
Deprecated. Use WRITE_BUFFER_WATER_MARK
-

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));

{code}
*2.Duplicate code* 
{code:java}
ChannelFuture f = httpServer.bind(infoAddr);
      try {
        f.syncUninterruptibly();
      } catch (Throwable e) {
        if (e instanceof BindException) {
          throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
              infoAddr.getPort(), (SocketException) e);
        } else {
          throw e;
        }
      }
      httpAddress = (InetSocketAddress) f.channel().localAddress(); {code}


> Improve DatanodeHttpServer With Netty recommended method
> 
>
> Key: HDFS-16603
> URL: https://issues.apache.org/jira/browse/HDFS-16603
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
> Fix For: 3.4.0
>
>
> When reading the code, I found that several of the APIs used here have been 
> deprecated since the Netty upgrade.
> {color:#172b4d}*1.DatanodeHttpServer#Constructor*{color}
> {code:java}
> @Deprecated
> public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK = 
> valueOf("WRITE_BUFFER_HIGH_WATER_MARK"); 
> Deprecated. Use WRITE_BUFFER_WATER_MARK
> @Deprecated
> public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK = 
> valueOf("WRITE_BUFFER_LOW_WATER_MARK");
> Deprecated. Use WRITE_BUFFER_WATER_MARK
> -
> this.httpServer.childOption(
>           ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
>           conf.getInt(
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));
> this.httpServer.childOption(
>           ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
>           conf.getInt(
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));
> {code}
> *2.Duplicate code* 
> {code:java}
> ChannelFuture f = httpServer.bind(infoAddr);
> try {
>  f.syncUninterruptibly();
> } catch (Throwable e) {
>   if (e instanceof BindException) {
>    throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
>    infoAddr.getPort(), (SocketException) e);
>  } else {
>  

[jira] [Updated] (HDFS-16603) Improve DatanodeHttpServer With Netty recommended method

2022-05-28 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16603:
-
Description: 
When reading the code, I found that several of the APIs used here have been 
deprecated since the Netty upgrade.

{color:#172b4d}*1.DatanodeHttpServer#Constructor*{color}
{code:java}
@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK = 
valueOf("WRITE_BUFFER_HIGH_WATER_MARK"); 
Deprecated. Use WRITE_BUFFER_WATER_MARK

@Deprecated
public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK = 
valueOf("WRITE_BUFFER_LOW_WATER_MARK");
Deprecated. Use WRITE_BUFFER_WATER_MARK
-

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));

{code}
*2.Duplicate code* 
{code:java}
ChannelFuture f = httpServer.bind(infoAddr);
try {
 f.syncUninterruptibly();
} catch (Throwable e) {
  if (e instanceof BindException) {
   throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
   infoAddr.getPort(), (SocketException) e);
 } else {
   throw e;
 }
}
httpAddress = (InetSocketAddress) f.channel().localAddress();
LOG.info("Listening HTTP traffic on " + httpAddress);{code}
*3.io.netty.bootstrap.ChannelFactory Deprecated*

*use io.netty.channel.ChannelFactory instead.*
{code:java}
/** @deprecated */
@Deprecated
public interface ChannelFactory<T> {
    T newChannel();
}{code}

  was:
When reading the code, I found that some usage methods are outdated due to the 
upgrade of netty components.

{color:#172b4d}*1.DatanodeHttpServer#Constructor*{color}
{code:java}
@Deprecated
public static final ChannelOption WRITE_BUFFER_HIGH_WATER_MARK = 
valueOf("WRITE_BUFFER_HIGH_WATER_MARK"); 
Deprecated. Use WRITE_BUFFER_WATER_MARK

@Deprecated
public static final ChannelOption WRITE_BUFFER_LOW_WATER_MARK = 
valueOf("WRITE_BUFFER_LOW_WATER_MARK");
Deprecated. Use WRITE_BUFFER_WATER_MARK
-

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));

this.httpServer.childOption(
          ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
          conf.getInt(
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
              DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));

{code}
*2.Duplicate code* 
{code:java}
ChannelFuture f = httpServer.bind(infoAddr);
try {
 f.syncUninterruptibly();
} catch (Throwable e) {
  if (e instanceof BindException) {
   throw NetUtils.wrapException(null, 0, infoAddr.getHostName(),
   infoAddr.getPort(), (SocketException) e);
 } else {
   throw e;
 }
}
httpAddress = (InetSocketAddress) f.channel().localAddress();
LOG.info("Listening HTTP traffic on " + httpAddress);{code}
*3.io.netty.bootstrap.ChannelFactory Deprecated*
{code:java}
/** @deprecated */
@Deprecated
public interface ChannelFactory {
    T newChannel();
}{code}


> Improve DatanodeHttpServer With Netty recommended method
> 
>
> Key: HDFS-16603
> URL: https://issues.apache.org/jira/browse/HDFS-16603
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
> Fix For: 3.4.0
>
>
> When reading the code, I found that several of the APIs used here have been 
> deprecated since the Netty upgrade.
> {color:#172b4d}*1.DatanodeHttpServer#Constructor*{color}
> {code:java}
> @Deprecated
> public static final ChannelOption<Integer> WRITE_BUFFER_HIGH_WATER_MARK = 
> valueOf("WRITE_BUFFER_HIGH_WATER_MARK"); 
> Deprecated. Use WRITE_BUFFER_WATER_MARK
> @Deprecated
> public static final ChannelOption<Integer> WRITE_BUFFER_LOW_WATER_MARK = 
> valueOf("WRITE_BUFFER_LOW_WATER_MARK");
> Deprecated. Use WRITE_BUFFER_WATER_MARK
> -
> this.httpServer.childOption(
>           ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK,
>           conf.getInt(
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK,
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_HIGH_WATERMARK_DEFAULT));
> this.httpServer.childOption(
>           ChannelOption.WRITE_BUFFER_LOW_WATER_MARK,
>           conf.getInt(
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK,
>               DFSConfigKeys.DFS_WEBHDFS_NETTY_LOW_WATERMARK_DEFAULT));
> {code}
> *2.Duplicate code* 
> {code:java}
> ChannelFuture f = httpServer.bind(infoAddr);
> try {
>  f.syncUninterruptibly();
> } catch (Thr

[jira] [Created] (HDFS-16605) Improve Code With Lambda in hadoop-hdfs-rbf modle

2022-05-29 Thread fanshilun (Jira)
fanshilun created HDFS-16605:


 Summary: Improve Code With Lambda in hadoop-hdfs-rbf modle
 Key: HDFS-16605
 URL: https://issues.apache.org/jira/browse/HDFS-16605
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: rbf
Affects Versions: 3.4.0
Reporter: fanshilun
Assignee: fanshilun






--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16605) Improve Code With Lambda in hadoop-hdfs-rbf module

2022-05-29 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16605:
-
Summary: Improve Code With Lambda in hadoop-hdfs-rbf module  (was: Improve 
Code With Lambda in hadoop-hdfs-rbf modle)

> Improve Code With Lambda in hadoop-hdfs-rbf module
> --
>
> Key: HDFS-16605
> URL: https://issues.apache.org/jira/browse/HDFS-16605
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16609) Fix some Junit Tests that often report timeouts

2022-05-30 Thread fanshilun (Jira)
fanshilun created HDFS-16609:


 Summary: Fix some Junit Tests that often report timeouts
 Key: HDFS-16609
 URL: https://issues.apache.org/jira/browse/HDFS-16609
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: fanshilun
Assignee: fanshilun
 Fix For: 3.4.0






--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16609) Fix Flaky Junit Tests that often report timeouts

2022-05-30 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16609:
-
Summary: Fix Flaky Junit Tests that often report timeouts  (was: Fix some 
Junit Tests that often report timeouts)

> Fix Flaky Junit Tests that often report timeouts
> -
>
> Key: HDFS-16609
> URL: https://issues.apache.org/jira/browse/HDFS-16609
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
> Fix For: 3.4.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16609) Fix Flaky Junit Tests that often report timeouts

2022-05-30 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16609:
-
Description: While working on HDFS-16590, the JUnit tests often failed. One 
recurring class of failures is timeouts, and these can be avoided by 
increasing the timeout values.

> Fix Flaky Junit Tests that often report timeouts
> -
>
> Key: HDFS-16609
> URL: https://issues.apache.org/jira/browse/HDFS-16609
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
> Fix For: 3.4.0
>
>
> While working on HDFS-16590, the JUnit tests often failed. One recurring 
> class of failures is timeouts, and these can be avoided by increasing the 
> timeout values.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-16609) Fix Flaky Junit Tests that often report timeouts

2022-05-30 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16609 started by fanshilun.

> Fix Flaky Junit Tests that often report timeouts
> -
>
> Key: HDFS-16609
> URL: https://issues.apache.org/jira/browse/HDFS-16609
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
> Fix For: 3.4.0
>
>
> While working on HDFS-16590, the JUnit tests often failed. One recurring 
> class of failures is timeouts, and these can be avoided by increasing the 
> timeout values.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16609) Fix Flaky Junit Tests that often report timeouts

2022-05-30 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16609:
-
Description: 
While working on HDFS-16590, the JUnit tests often failed. One recurring 
class of failures is timeouts, and these can be avoided by increasing the 
timeout values.

The modified method is as follows:

 

 

  was:When I was dealing with HDFS-16590 JIRA, Junit Tests often reported 
errors, I found that one type of problem is TimeOut problem, these problems can 
be avoided by adjusting TimeOut time.


> Fix Flaky Junit Tests that often report timeouts
> -
>
> Key: HDFS-16609
> URL: https://issues.apache.org/jira/browse/HDFS-16609
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
> Fix For: 3.4.0
>
>
> While working on HDFS-16590, the JUnit tests often failed. One recurring 
> class of failures is timeouts, and these can be avoided by increasing the 
> timeout values.
> The modified method is as follows:
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16609) Fix Flaky Junit Tests that often report timeouts

2022-05-30 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16609:
-
Description: 
While working on HDFS-16590, the JUnit tests often failed. One recurring 
class of failures is timeouts, and these can be avoided by increasing the 
timeout values.

The modified method is as follows:

org.apache.hadoop.hdfs.TestFileCreation#testServerDefaultsWithMinimalCaching
{code:java}
[ERROR] 
testServerDefaultsWithMinimalCaching(org.apache.hadoop.hdfs.TestFileCreation)  
Time elapsed: 7.136 s  <<< ERROR!
java.util.concurrent.TimeoutException: 
Timed out waiting for condition. 
Thread diagnostics: {code}
 

  was:
When I was dealing with HDFS-16590 JIRA, Junit Tests often reported errors, I 
found that one type of problem is TimeOut problem, these problems can be 
avoided by adjusting TimeOut time.

The modified method is as follows:

 

 


> Fix Flaky Junit Tests that often report timeouts
> -
>
> Key: HDFS-16609
> URL: https://issues.apache.org/jira/browse/HDFS-16609
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
> Fix For: 3.4.0
>
>
> While working on HDFS-16590, the JUnit tests often failed. One recurring 
> class of failures is timeouts, and these can be avoided by increasing the 
> timeout values.
> The modified method is as follows:
> org.apache.hadoop.hdfs.TestFileCreation#testServerDefaultsWithMinimalCaching
> {code:java}
> [ERROR] 
> testServerDefaultsWithMinimalCaching(org.apache.hadoop.hdfs.TestFileCreation) 
>  Time elapsed: 7.136 s  <<< ERROR!
> java.util.concurrent.TimeoutException: 
> Timed out waiting for condition. 
> Thread diagnostics: {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16609) Fix Flaky Junit Tests that often report timeouts

2022-05-30 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16609:
-
Description: 
While working on HDFS-16590, the JUnit tests often failed. One recurring 
class of failures is timeouts, and these can be avoided by increasing the 
timeout values.

The modified method is as follows:

1.org.apache.hadoop.hdfs.TestFileCreation#testServerDefaultsWithMinimalCaching
{code:java}
[ERROR] 
testServerDefaultsWithMinimalCaching(org.apache.hadoop.hdfs.TestFileCreation)  
Time elapsed: 7.136 s  <<< ERROR!
java.util.concurrent.TimeoutException: 
Timed out waiting for condition. 
Thread diagnostics: 

[WARNING] 
org.apache.hadoop.hdfs.TestFileCreation.testServerDefaultsWithMinimalCaching(org.apache.hadoop.hdfs.TestFileCreation)
[ERROR]   Run 1: TestFileCreation.testServerDefaultsWithMinimalCaching:277 
Timeout Timed out ...
[INFO]   Run 2: PASS{code}
2.org.apache.hadoop.hdfs.TestDFSShell#testFilePermissions
{code:java}
[ERROR] testFilePermissions(org.apache.hadoop.hdfs.TestDFSShell)  Time elapsed: 
30.022 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 3 
milliseconds
at java.lang.Thread.dumpThreads(Native Method)
at java.lang.Thread.getStackTrace(Thread.java:1549)
at 
org.junit.internal.runners.statements.FailOnTimeout.createTimeoutException(FailOnTimeout.java:182)
at 
org.junit.internal.runners.statements.FailOnTimeout.getResult(FailOnTimeout.java:177)
at 
org.junit.internal.runners.statements.FailOnTimeout.evaluate(FailOnTimeout.java:128)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)

[WARNING] 
org.apache.hadoop.hdfs.TestDFSShell.testFilePermissions(org.apache.hadoop.hdfs.TestDFSShell)
[ERROR]   Run 1: TestDFSShell.testFilePermissions TestTimedOut test timed out 
after 3 mil...
[INFO]   Run 2: PASS {code}
3.org.apache.hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier#testSPSWhenFileHasExcessRedundancyBlocks
{code:java}
[ERROR] 
testSPSWhenFileHasExcessRedundancyBlocks(org.apache.hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier)
  Time elapsed: 67.904 s  <<< ERROR!
java.util.concurrent.TimeoutException: 
Timed out waiting for condition. 

[WARNING] 
org.apache.hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier.testSPSWhenFileHasExcessRedundancyBlocks(org.apache.hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier)
[ERROR]   Run 1: 
TestExternalStoragePolicySatisfier.testSPSWhenFileHasExcessRedundancyBlocks:1379
 Timeout
[ERROR]   Run 2: 
TestExternalStoragePolicySatisfier.testSPSWhenFileHasExcessRedundancyBlocks:1379
 Timeout
[INFO]   Run 3: PASS {code}
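For illustration, the two usual knobs, sketched under the assumption that the 
affected tests use JUnit 4 timeouts and GenericTestUtils.waitFor (all values 
here are hypothetical, not the ones in the patch):
{code:java}
import java.util.function.Supplier;
import org.apache.hadoop.test.GenericTestUtils;
import org.junit.Test;

public class TimeoutTuningExample {

  // Knob 1: raise the per-test timeout annotation.
  @Test(timeout = 90000) // previously e.g. 30000 ms
  public void testSomethingSlow() throws Exception {
    // original test body unchanged
  }

  // Knob 2: give the condition poller a larger deadline.
  public void waitForCondition() throws Exception {
    Supplier<Boolean> check = () -> true; // hypothetical condition
    GenericTestUtils.waitFor(check, 100 /* poll ms */, 60000 /* deadline ms */);
  }
}
{code}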

  was:
When I was dealing with HDFS-16590 JIRA, Junit Tests often reported errors, I 
found that one type of problem is TimeOut problem, these problems can be 
avoided by adjusting TimeOut time.

The modified method is as follows:

org.apache.hadoop.hdfs.TestFileCreation#testServerDefaultsWithMinimalCaching
{code:java}
[ERROR] 
testServerDefaultsWithMinimalCaching(org.apache.hadoop.hdfs.TestFileCreation)  
Time elapsed: 7.136 s  <<< ERROR!
java.util.concurrent.TimeoutException: 
Timed out waiting for condition. 
Thread diagnostics: {code}
 


> Fix Flaky Junit Tests that often report timeouts
> -
>
> Key: HDFS-16609
> URL: https://issues.apache.org/jira/browse/HDFS-16609
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
> Fix For: 3.4.0
>
>
> While working on HDFS-16590, the JUnit tests often failed. One recurring 
> class of failures is timeouts, and these can be avoided by increasing the 
> timeout values.
> The modified method is as follows:
> 1.org.apache.hadoop.hdfs.TestFileCreation#testServerDefaultsWithMinimalCaching
> {code:java}
> [ERROR] 
> testServerDefaultsWithMinimalCaching(org.apache.hadoop.hdfs.TestFileCreation) 
>  Time elapsed: 7.136 s  <<< ERROR!
> java.util.concurrent.TimeoutException: 
> Timed out waiting for condition. 
> Thread diagnostics: 
> [WARNING] 
> org.apache.hadoop.hdfs.TestFileCreation.testServerDefaultsWithMinimalCaching(org.apache.hadoop.hdfs.TestFileCreation)
> [ERROR]   Run 1: TestFileCreation.testServerDefaultsWithMinimalCaching:277 
> Timeout Timed out ...
> [INFO]   Run 2: PASS{code}
> 2.org.apache.hadoop.hdfs.TestDFSShell#testFilePermissions
> {code:java}
> [ERROR] testFilePermissions(org.apache.hadoop.hdfs.TestDFSShell)  Time 
> elapsed: 30.022 s  <<< ERROR!
>

[jira] [Created] (HDFS-16611) improve TestSeveralNameNodes#testCircularLinkedListWrites Params

2022-05-31 Thread fanshilun (Jira)
fanshilun created HDFS-16611:


 Summary: improve TestSeveralNameNodes#testCircularLinkedListWrites Params
 Key: HDFS-16611
 URL: https://issues.apache.org/jira/browse/HDFS-16611
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Affects Versions: 3.4.0
Reporter: fanshilun
Assignee: fanshilun






--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16611) improve TestSeveralNameNodes#testCircularLinkedListWrites Params

2022-05-31 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16611:
-
Description: 
While working on HDFS-16590, the JUnit tests often failed; the following 
error messages appeared frequently in:

org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes#testCircularLinkedListWrites
{code:java}
1st run
[ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 114.252 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 43
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 42
 done: false
] expected:<0> but was:<3>

2nd run

[ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 110.349 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 50
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 49
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 49
 done: false
] expected:<0> but was:<3>
at org.junit.Assert.fail(Assert.java:89)



3rd run
[ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 109.364 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 46
 done: false
] expected:<0> but was:<3>
at org.junit.Assert.fail(Assert.java:89)


{code}
 

 

> improve TestSeveralNameNodes#testCircularLinkedListWrites Params
> ---
>
> Key: HDFS-16611
> URL: https://issues.apache.org/jira/browse/HDFS-16611
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>
> While working on HDFS-16590, the JUnit tests often failed; the following 
> error messages appeared frequently in:
> org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes#testCircularLinkedListWrites
> {code:java}
> 1st run
> [ERROR] 
> testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
>   Time elapsed: 114.252 s  <<< FAILURE!
> java.lang.AssertionError: 
> Some writers didn't complete in expected runtime! Current writer 
> state:[Circular Writer:
>directory: /test-0
>target length: 50
>current item: 43
>done: false
> , Circular Writer:
>directory: /test-1
>target length: 50
>current item: 47
>done: false
> , Circular Writer:
>directory: /test-2
>target length: 50
>current item: 42
>done: false
> ] expected:<0> but was:<3>
> 
> 2nd run
> [ERROR] 
> testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
>   Time elapsed: 110.349 s  <<< FAILURE!
> java.lang.AssertionError: 
> Some writers didn't complete in expected runtime! Current writer 
> state:[Circular Writer:
>directory: /test-0
>target length: 50
>current item: 50
>done: false
> , Circular Writer:
>directory: /test-1
>target length: 50
>current item: 49
>done: false
> , Circular Writer:
>directory: /test-2
>target length: 50
>current item: 49
>done: false
> ] expected:<0> but was:<3>
>   at org.junit.Assert.fail(Assert.java:89)
> 
> 3rd run
> [ERROR] 
> testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
>   Time elapsed: 109.364 s  <<< FAILURE!
> java.lang.AssertionError: 
> Some writers didn't complete in expected runtime! Current writer 
> state:[Circular Writer:
>directory: /test-0
>target length: 50
>current item: 47
>done: false
> , Circular Writer:
>director

[jira] [Updated] (HDFS-16611) improve TestSeveralNameNodes#testCircularLinkedListWrites Params

2022-05-31 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16611:
-
Description: 
While working on HDFS-16590, the JUnit tests often failed; the following 
error messages appeared frequently in:

org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes#testCircularLinkedListWrites

1st run
{code:java}
1st run
[ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 114.252 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 43
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 42
 done: false
] expected:<0> but was:<3>
{code}

2nd run
{code:java}
 [ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 110.349 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 50
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 49
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 49
 done: false
] expected:<0> but was:<3>
{code}
 
3rd run
{code:java}
[ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 109.364 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 46
 done: false
] expected:<0> but was:<3>
{code}

  was:
When I was dealing with HDFS-16590 JIRA, Junit Tests often reported errors,  I 
found that the following error messages often appear

org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes#

testCircularLinkedListWrites
{code:java}
1st run
[ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 114.252 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 43
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 42
 done: false
] expected:<0> but was:<3>

2nd run

[ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 110.349 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 50
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 49
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 49
 done: false
] expected:<0> but was:<3>
at org.junit.Assert.fail(Assert.java:89)



3rd run
[ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 109.364 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 46
 done: false
] expected:<0> but was:<3>
at org.junit.Assert.fail(Assert.java:89)


{code}
 

 


> improve TestSeveralNameNodes#testCircularLinkedListWrites Params
> ---
>
> Key: HDFS-16611
> URL: https://issues.apache.org/jira/browse/HDFS-16611
> Project: Hadoop HDFS
>  Issue

[jira] [Updated] (HDFS-16611) improve TestSeveralNameNodes#testCircularLinkedListWrites Params

2022-05-31 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16611:
-
Description: 
While working on HDFS-16590, the JUnit tests often failed; the following 
error messages appeared frequently in:

org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes#testCircularLinkedListWrites
 * 1st run

{code:java}
1st run
[ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 114.252 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 43
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 42
 done: false
] expected:<0> but was:<3>
{code}
 * 2nd run

{code:java}
 [ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 110.349 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 50
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 49
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 49
 done: false
] expected:<0> but was:<3>
{code}
 * 3rd run

{code:java}
[ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 109.364 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 46
 done: false
] expected:<0> but was:<3>
{code}

  was:
When I was dealing with HDFS-16590 JIRA, Junit Tests often reported errors,  I 
found that the following error messages often appear

org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes#

testCircularLinkedListWrites

1st run
{code:java}
1st run
[ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 114.252 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 43
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 42
 done: false
] expected:<0> but was:<3>
{code}

2st run
{code:java}
 [ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 110.349 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 50
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 49
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 49
 done: false
] expected:<0> but was:<3>
{code}
 
3rd run
{code:java}
[ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 109.364 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 46
 done: false
] expected:<0> but was:<3>
{code}


> improve TestSeveralNameNodes#testCircularLinkedListWrites Params
> ---
>
> Key: HDFS-16611
> URL: https://issues.apache.org/jira/browse/HDFS-16611
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects

[jira] [Work started] (HDFS-16611) improve TestSeveralNameNodes#testCircularLinkedListWrites Params

2022-05-31 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16611 started by fanshilun.

> improve TestSeveralNameNodes#testCircularLinkedListWrites Params
> ---
>
> Key: HDFS-16611
> URL: https://issues.apache.org/jira/browse/HDFS-16611
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>
> While working on HDFS-16590, the JUnit tests often failed; the following 
> error messages appeared frequently in:
> org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes#testCircularLinkedListWrites
>  * 1st run
> {code:java}
> 1st run
> [ERROR] 
> testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
>   Time elapsed: 114.252 s  <<< FAILURE!
> java.lang.AssertionError: 
> Some writers didn't complete in expected runtime! Current writer 
> state:[Circular Writer:
>directory: /test-0
>target length: 50
>current item: 43
>done: false
> , Circular Writer:
>directory: /test-1
>target length: 50
>current item: 47
>done: false
> , Circular Writer:
>directory: /test-2
>target length: 50
>current item: 42
>done: false
> ] expected:<0> but was:<3>
> {code}
>  * 2nd run
> {code:java}
>  [ERROR] 
> testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
>   Time elapsed: 110.349 s  <<< FAILURE!
> java.lang.AssertionError: 
> Some writers didn't complete in expected runtime! Current writer 
> state:[Circular Writer:
>directory: /test-0
>target length: 50
>current item: 50
>done: false
> , Circular Writer:
>directory: /test-1
>target length: 50
>current item: 49
>done: false
> , Circular Writer:
>directory: /test-2
>target length: 50
>current item: 49
>done: false
> ] expected:<0> but was:<3>
> {code}
>  * 3rd run
> {code:java}
> [ERROR] 
> testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
>   Time elapsed: 109.364 s  <<< FAILURE!
> java.lang.AssertionError: 
> Some writers didn't complete in expected runtime! Current writer 
> state:[Circular Writer:
>directory: /test-0
>target length: 50
>current item: 47
>done: false
> , Circular Writer:
>directory: /test-1
>target length: 50
>current item: 47
>done: false
> , Circular Writer:
>directory: /test-2
>target length: 50
>current item: 46
>done: false
> ] expected:<0> but was:<3>
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16611) improve TestSeveralNameNodes#testCircularLinkedListWrites Params

2022-05-31 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16611:
-
Description: 
While working on HDFS-16590, the JUnit tests often failed; the following 
error messages appeared frequently in:

org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes#testCircularLinkedListWrites

This test comes very close to succeeding: in all 3 runs the current item is 
approximately equal to the target length. I think we can reduce LIST_LENGTH 
and extend RUNTIME, which should effectively increase the success rate of 
this test (a sketch of the change follows the run logs below).

Reducing LIST_LENGTH does not change the purpose of the test, and it still 
exercises circular writes across an NN failover.
 * 1st run

{code:java}
1st run
[ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 114.252 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 43
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 42
 done: false
] expected:<0> but was:<3>
{code}
 * 2nd run

{code:java}
 [ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 110.349 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 50
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 49
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 49
 done: false
] expected:<0> but was:<3>
{code}
 * 3rd run

{code:java}
[ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 109.364 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 46
 done: false
] expected:<0> but was:<3>
{code}
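A minimal sketch of the tuning described above, assuming the existing 
LIST_LENGTH and RUNTIME constants in TestSeveralNameNodes (the concrete 
values are illustrative, not the committed change):
{code:java}
// TestSeveralNameNodes -- illustrative tuning only:
private static final int LIST_LENGTH = 30;           // shorter list, was 50
private static final long RUNTIME = 3 * 60 * 1000;   // longer runtime, in ms
{code}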

  was:
When I was dealing with HDFS-16590 JIRA, Junit Tests often reported errors,  I 
found that the following error messages often appear

org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes#

testCircularLinkedListWrites
 * 1st run

{code:java}
1st run
[ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 114.252 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 43
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 42
 done: false
] expected:<0> but was:<3>
{code}
 * 2st run

{code:java}
 [ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 110.349 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 50
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 49
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 49
 done: false
] expected:<0> but was:<3>
{code}
 * 3rd run

{code:java}
[ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 109.364 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 curren

[jira] [Created] (HDFS-16612) improve import * In HDFS Project

2022-05-31 Thread fanshilun (Jira)
fanshilun created HDFS-16612:


 Summary: improve import * In HDFS Project
 Key: HDFS-16612
 URL: https://issues.apache.org/jira/browse/HDFS-16612
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.4.0
Reporter: fanshilun
Assignee: fanshilun






--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-16612) improve import * In HDFS Project

2022-05-31 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16612 started by fanshilun.

> improve import * In HDFS Project
> ---
>
> Key: HDFS-16612
> URL: https://issues.apache.org/jira/browse/HDFS-16612
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16611) improve TestSeveralNameNodes#testCircularLinkedListWrites Params

2022-05-31 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16611:
-
Description: 
While working on HDFS-16590, the JUnit tests often failed; the following 
error messages appeared frequently in:

org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes#testCircularLinkedListWrites

This test comes very close to succeeding: in all 3 runs the current item is 
approximately equal to the target length. I think we can reduce LIST_LENGTH 
and extend RUNTIME, which should effectively increase the success rate of 
this test.

Reducing LIST_LENGTH does not change the purpose of the test, and it still 
exercises circular writes across an NN failover.
 * 1st run

{code:java}
1st run
[ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 114.252 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 43
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 42
 done: false
] expected:<0> but was:<3>
{code}
 * 2nd run

{code:java}
 [ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 110.349 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 50
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 49
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 49
 done: false
] expected:<0> but was:<3>
{code}
 * 3rd run

{code:java}
[ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 109.364 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 46
 done: false
] expected:<0> but was:<3>
{code}

  was:
When I was dealing with HDFS-16590 JIRA, Junit Tests often reported errors,  I 
found that the following error messages often appear

org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes#

testCircularLinkedListWrites

This method runs very close to success. It can be found that the current item 
is approximately equal to the target length in 3 runs. I think it can reduce 
the length of LIST_LENGTH and prolong the RUNTIME time, which can effectively 
increase the success rate of this Test.

Reducing LIST_LENGTH does not change the running purpose of Text, and it can 
also test Circular Writes in the case of NN failover.
 * 1st run

{code:java}
1st run
[ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 114.252 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 43
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 47
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 42
 done: false
] expected:<0> but was:<3>
{code}
 * 2st run

{code:java}
 [ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 110.349 s  <<< FAILURE!
java.lang.AssertionError: 
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
 directory: /test-0
 target length: 50
 current item: 50
 done: false
, Circular Writer:
 directory: /test-1
 target length: 50
 current item: 49
 done: false
, Circular Writer:
 directory: /test-2
 target length: 50
 current item: 49
 done: false
] expected:<0> but was:<3>
{code}
 * 3rd run

{code:java}
[ERROR] 
testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
  Time elapsed: 109.364 s  <<< FAILURE!
java.lang.Assertion

[jira] [Resolved] (HDFS-16612) improve import * In HDFS Project

2022-06-01 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun resolved HDFS-16612.
--
Resolution: Not A Problem

> improve import * In HDFS Project
> ---
>
> Key: HDFS-16612
> URL: https://issues.apache.org/jira/browse/HDFS-16612
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16619) improve HttpHeaders.Values And HttpHeaders.Names With recommended Class

2022-06-05 Thread fanshilun (Jira)
fanshilun created HDFS-16619:


 Summary: improve HttpHeaders.Values And HttpHeaders.Names With recommended Class
 Key: HDFS-16619
 URL: https://issues.apache.org/jira/browse/HDFS-16619
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.4.0
Reporter: fanshilun
Assignee: fanshilun
 Fix For: 3.4.0






--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16619) improve HttpHeaders.Values And HttpHeaders.Names With recommended Class

2022-06-05 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16619:
-
Description: HttpHeaders.Values and HttpHeaders.Names are deprecated; use 
HttpHeaderValues and HttpHeaderNames instead.
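A minimal sketch of the replacement (illustrative usage, not the committed 
patch; the recommended constants are AsciiString-based):
{code:java}
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;

// before (deprecated):
//   resp.headers().set(HttpHeaders.Names.CONNECTION, HttpHeaders.Values.CLOSE);
DefaultFullHttpResponse resp = new DefaultFullHttpResponse(
    HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
// after (recommended):
resp.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE);
{code}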

> improve HttpHeaders.Values And HttpHeaders.Names With recommended Class
> --
>
> Key: HDFS-16619
> URL: https://issues.apache.org/jira/browse/HDFS-16619
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
> Fix For: 3.4.0
>
>
> HttpHeaders.Values and HttpHeaders.Names are deprecated; use 
> HttpHeaderValues and HttpHeaderNames instead.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-16611) improve TestSeveralNameNodes#testCircularLinkedListWrites Params

2022-06-06 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun resolved HDFS-16611.
--
Resolution: Won't Fix

> improve TestSeveralNameNodes#testCircularLinkedListWrites Params
> ---
>
> Key: HDFS-16611
> URL: https://issues.apache.org/jira/browse/HDFS-16611
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> While working on HDFS-16590, the JUnit tests often failed; the following 
> error messages appeared frequently in:
> org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes#testCircularLinkedListWrites
> This test comes very close to succeeding: in all 3 runs the current item is 
> approximately equal to the target length. I think we can reduce LIST_LENGTH 
> and extend RUNTIME, which should effectively increase the success rate of 
> this test.
> Reducing LIST_LENGTH does not change the purpose of the test, and it still 
> exercises circular writes across an NN failover.
>  * 1st run
> {code:java}
> 1st run
> [ERROR] 
> testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
>   Time elapsed: 114.252 s  <<< FAILURE!
> java.lang.AssertionError: 
> Some writers didn't complete in expected runtime! Current writer 
> state:[Circular Writer:
>directory: /test-0
>target length: 50
>current item: 43
>done: false
> , Circular Writer:
>directory: /test-1
>target length: 50
>current item: 47
>done: false
> , Circular Writer:
>directory: /test-2
>target length: 50
>current item: 42
>done: false
> ] expected:<0> but was:<3>
> {code}
>  * 2nd run
> {code:java}
>  [ERROR] 
> testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
>   Time elapsed: 110.349 s  <<< FAILURE!
> java.lang.AssertionError: 
> Some writers didn't complete in expected runtime! Current writer 
> state:[Circular Writer:
>directory: /test-0
>target length: 50
>current item: 50
>done: false
> , Circular Writer:
>directory: /test-1
>target length: 50
>current item: 49
>done: false
> , Circular Writer:
>directory: /test-2
>target length: 50
>current item: 49
>done: false
> ] expected:<0> but was:<3>
> {code}
>  * 3rd run
> {code:java}
> [ERROR] 
> testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
>   Time elapsed: 109.364 s  <<< FAILURE!
> java.lang.AssertionError: 
> Some writers didn't complete in expected runtime! Current writer 
> state:[Circular Writer:
>directory: /test-0
>target length: 50
>current item: 47
>done: false
> , Circular Writer:
>directory: /test-1
>target length: 50
>current item: 47
>done: false
> , Circular Writer:
>directory: /test-2
>target length: 50
>current item: 46
>done: false
> ] expected:<0> but was:<3>
> {code}
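
A minimal sketch of the proposed parameter change (hypothetical values; 
LIST_LENGTH and RUNTIME are the constants named in the description above):
{code:java}
// Hypothetical tuning in TestSeveralNameNodes: a shorter list plus a longer
// runtime budget gives each circular writer more headroom across failovers.
private static final int LIST_LENGTH = 30;          // was 50
private static final long RUNTIME = 2 * 60 * 1000;  // runtime budget in ms
{code}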



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HDFS-16611) improve TestSeveralNameNodes#testCircularLinkedListWrites Params

2022-06-06 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16611 stopped by fanshilun.

> improve TestSeveralNameNodes#testCircularLinkedListWrites Params
> ---
>
> Key: HDFS-16611
> URL: https://issues.apache.org/jira/browse/HDFS-16611
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> While working on HDFS-16590, the JUnit tests often reported errors, and the 
> following failure appeared repeatedly:
> org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes#
> testCircularLinkedListWrites
> The method comes very close to succeeding: in all 3 runs the current item is 
> approximately equal to the target length. I think reducing LIST_LENGTH and 
> prolonging RUNTIME can effectively increase the success rate of this test. 
> Reducing LIST_LENGTH does not change the purpose of the test; it still 
> exercises circular writes across an NN failover.
>  * 1st run
> {code:java}
> 1st run
> [ERROR] 
> testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
>   Time elapsed: 114.252 s  <<< FAILURE!
> java.lang.AssertionError: 
> Some writers didn't complete in expected runtime! Current writer 
> state:[Circular Writer:
>directory: /test-0
>target length: 50
>current item: 43
>done: false
> , Circular Writer:
>directory: /test-1
>target length: 50
>current item: 47
>done: false
> , Circular Writer:
>directory: /test-2
>target length: 50
>current item: 42
>done: false
> ] expected:<0> but was:<3>
> {code}
>  * 2nd run
> {code:java}
>  [ERROR] 
> testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
>   Time elapsed: 110.349 s  <<< FAILURE!
> java.lang.AssertionError: 
> Some writers didn't complete in expected runtime! Current writer 
> state:[Circular Writer:
>directory: /test-0
>target length: 50
>current item: 50
>done: false
> , Circular Writer:
>directory: /test-1
>target length: 50
>current item: 49
>done: false
> , Circular Writer:
>directory: /test-2
>target length: 50
>current item: 49
>done: false
> ] expected:<0> but was:<3>
> {code}
>  * 3rd run
> {code:java}
> [ERROR] 
> testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
>   Time elapsed: 109.364 s  <<< FAILURE!
> java.lang.AssertionError: 
> Some writers didn't complete in expected runtime! Current writer 
> state:[Circular Writer:
>directory: /test-0
>target length: 50
>current item: 47
>done: false
> , Circular Writer:
>directory: /test-1
>target length: 50
>current item: 47
>done: false
> , Circular Writer:
>directory: /test-2
>target length: 50
>current item: 46
>done: false
> ] expected:<0> but was:<3>
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16624) Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#org.apache.hadoop.hdfs.tools.TestDFSAdmin ERROR

2022-06-06 Thread fanshilun (Jira)
fanshilun created HDFS-16624:


 Summary: Fix 
org.apache.hadoop.hdfs.tools.TestDFSAdmin#org.apache.hadoop.hdfs.tools.TestDFSAdmin
 ERROR
 Key: HDFS-16624
 URL: https://issues.apache.org/jira/browse/HDFS-16624
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.4.0
Reporter: fanshilun
Assignee: fanshilun






--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16624) Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR

2022-06-06 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16624:
-
Summary: Fix 
org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR  (was: 
Fix 
org.apache.hadoop.hdfs.tools.TestDFSAdmin#org.apache.hadoop.hdfs.tools.TestDFSAdmin
 ERROR)

> Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR
> 
>
> Key: HDFS-16624
> URL: https://issues.apache.org/jira/browse/HDFS-16624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
> Attachments: testAllDatanodesReconfig.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16624) Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR

2022-06-06 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16624:
-
Attachment: testAllDatanodesReconfig.png

> Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR
> 
>
> Key: HDFS-16624
> URL: https://issues.apache.org/jira/browse/HDFS-16624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
> Attachments: testAllDatanodesReconfig.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16624) Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR

2022-06-06 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16624:
-
Description: 
During the JUnit test run for HDFS-16619, the following error message appeared:

expected:<[SUCCESS: Changed property dfs.datanode.peer.stats.enabled]> but 
was:<[ From: "false"]>

After debugging, it was found that the assertion reads the wrong element of 
the captured output, outs.get(x).

Please refer to the attachment for the debugging screenshot

!testAllDatanodesReconfig.png!

> Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR
> 
>
> Key: HDFS-16624
> URL: https://issues.apache.org/jira/browse/HDFS-16624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
> Attachments: testAllDatanodesReconfig.png
>
>
> During the JUnit test run for HDFS-16619, the following error message appeared:
> expected:<[SUCCESS: Changed property dfs.datanode.peer.stats.enabled]> but 
> was:<[ From: "false"]>
> After debugging, it was found that the assertion reads the wrong element of 
> the captured output, outs.get(x).
> Please refer to the attachment for the debugging screenshot
> !testAllDatanodesReconfig.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16624) Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR

2022-06-06 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16624:
-
Description: 
During the JUnit test run for HDFS-16619, the following error message appeared:

expected:<[SUCCESS: Changed property dfs.datanode.peer.stats.enabled]> but 
was:<[ From: "false"]>

After debugging, it was found that the assertion at line 1208 reads the wrong 
element of the captured output, outs.get(2); the index should be 1.

Please refer to the attachment for the debugging screenshot

!testAllDatanodesReconfig.png!
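
A minimal sketch of the fix (illustrative, not the actual patch; it assumes 
outs holds the captured output lines):
{code:java}
import static org.junit.Assert.assertEquals;

import java.util.List;

class ReconfigAssertSketch {
  // Before, the assertion read outs.get(2) and matched the wrong line;
  // the SUCCESS message is at index 1 of the captured output.
  static void checkSuccessLine(List<String> outs) {
    assertEquals("SUCCESS: Changed property dfs.datanode.peer.stats.enabled",
        outs.get(1));
  }
}
{code}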

  was:
During the JUnit test run for HDFS-16619, the following error message appeared:

expected:<[SUCCESS: Changed property dfs.datanode.peer.stats.enabled]> but 
was:<[ From: "false"]>

After debugging, it was found that the assertion reads the wrong element of 
the captured output, outs.get(x).

Please refer to the attachment for the debugging screenshot

!testAllDatanodesReconfig.png!


> Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR
> 
>
> Key: HDFS-16624
> URL: https://issues.apache.org/jira/browse/HDFS-16624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
> Attachments: testAllDatanodesReconfig.png
>
>
> During the JUnit test run for HDFS-16619, the following error message appeared:
> expected:<[SUCCESS: Changed property dfs.datanode.peer.stats.enabled]> but 
> was:<[ From: "false"]>
> After debugging, it was found that the assertion at line 1208 reads the wrong 
> element of the captured output, outs.get(2); the index should be 1.
> Please refer to the attachment for the debugging screenshot
> !testAllDatanodesReconfig.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-16563) Namenode WebUI prints sensitve information on Token Expiry

2022-06-06 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun resolved HDFS-16563.
--
Resolution: Resolved

> Namenode WebUI prints sensitve information on Token Expiry
> --
>
> Key: HDFS-16563
> URL: https://issues.apache.org/jira/browse/HDFS-16563
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namanode, security, webhdfs
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-04-27-23-01-16-033.png, 
> image-2022-04-27-23-28-40-568.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Login to Namenode WebUI.
> Wait for token to expire. (Or modify the Token refresh time 
> dfs.namenode.delegation.token.renew/update-interval to lower value)
> Refresh the WebUI after the Token expiry.
> Full token information gets printed in WebUI.
>  
> !image-2022-04-27-23-01-16-033.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16563) Namenode WebUI prints sensitive information on Token Expiry

2022-06-07 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17551345#comment-17551345
 ] 

fanshilun commented on HDFS-16563:
--

[~ste...@apache.org] Sorry, I saw this PR was merged in the git log and thought 
closing the issue had been forgotten, so I closed it.

> Namenode WebUI prints sensitive information on Token Expiry
> ---
>
> Key: HDFS-16563
> URL: https://issues.apache.org/jira/browse/HDFS-16563
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namanode, security, webhdfs
>Affects Versions: 3.3.3
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
> Attachments: image-2022-04-27-23-01-16-033.png, 
> image-2022-04-27-23-28-40-568.png
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Login to Namenode WebUI.
> Wait for token to expire. (Or modify the Token refresh time 
> dfs.namenode.delegation.token.renew/update-interval to lower value)
> Refresh the WebUI after the Token expiry.
> Full token information gets printed in WebUI.
>  
> !image-2022-04-27-23-01-16-033.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-16619) impove HttpHeaders.Values And HttpHeaders.Names With recommended Class

2022-06-07 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16619 started by fanshilun.

> impove HttpHeaders.Values And HttpHeaders.Names With recommended Class
> --
>
> Key: HDFS-16619
> URL: https://issues.apache.org/jira/browse/HDFS-16619
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> HttpHeaders.Values ​​and HttpHeaders.Names are deprecated, use 
> HttpHeaderValues ​​and HttpHeaderNames instead.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-16590) Fix Junit Test Deprecated assertThat

2022-06-07 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16590 started by fanshilun.

> Fix Junit Test Deprecated assertThat
> 
>
> Key: HDFS-16590
> URL: https://issues.apache.org/jira/browse/HDFS-16590
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
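
For context, the deprecation in question: JUnit 4.13 deprecated 
org.junit.Assert.assertThat in favor of org.hamcrest.MatcherAssert.assertThat. 
A minimal illustration of the migration (not the actual patch):

{code:java}
// before (deprecated): import static org.junit.Assert.assertThat;
import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;

class AssertThatSketch {
  static void check(int actual) {
    // same matcher API, non-deprecated entry point
    assertThat(actual, is(42));
  }
}
{code}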




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-16605) Improve Code With Lambda in hadoop-hdfs-rbf module

2022-06-07 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16605 started by fanshilun.

> Improve Code With Lambda in hadoop-hdfs-rbf module
> --
>
> Key: HDFS-16605
> URL: https://issues.apache.org/jira/browse/HDFS-16605
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
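
An illustration of this kind of cleanup (a generic sketch, not taken from the 
actual PR):

{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

class LambdaCleanupSketch {
  static Future<Long> submit(ExecutorService executor) {
    // before: anonymous inner class
    Future<Long> old = executor.submit(new Callable<Long>() {
      @Override
      public Long call() {
        return System.currentTimeMillis();
      }
    });
    // after: the equivalent lambda
    return executor.submit(() -> System.currentTimeMillis());
  }
}
{code}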




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-16624) Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR

2022-06-07 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16624 started by fanshilun.

> Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR
> 
>
> Key: HDFS-16624
> URL: https://issues.apache.org/jira/browse/HDFS-16624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Attachments: testAllDatanodesReconfig.png
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> During the JUnit test run for HDFS-16619, the following error message appeared:
> expected:<[SUCCESS: Changed property dfs.datanode.peer.stats.enabled]> but 
> was:<[ From: "false"]>
> After debugging, it was found that the assertion at line 1208 reads the wrong 
> element of the captured output, outs.get(2); the index should be 1.
> Please refer to the attachment for the debugging screenshot
> !testAllDatanodesReconfig.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16624) Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR

2022-06-07 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17551360#comment-17551360
 ] 

fanshilun commented on HDFS-16624:
--

Thanks for the suggestion; I have linked the JIRA.

> Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR
> 
>
> Key: HDFS-16624
> URL: https://issues.apache.org/jira/browse/HDFS-16624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Attachments: testAllDatanodesReconfig.png
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> During the JUnit test run for HDFS-16619, the following error message appeared:
> expected:<[SUCCESS: Changed property dfs.datanode.peer.stats.enabled]> but 
> was:<[ From: "false"]>
> After debugging, it was found that the assertion at line 1208 reads the wrong 
> element of the captured output, outs.get(2); the index should be 1.
> Please refer to the attachment for the debugging screenshot
> !testAllDatanodesReconfig.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16627) improve BPServiceActor#register Log Add NN Info

2022-06-08 Thread fanshilun (Jira)
fanshilun created HDFS-16627:


 Summary: improve BPServiceActor#register Log Add NN Info
 Key: HDFS-16627
 URL: https://issues.apache.org/jira/browse/HDFS-16627
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Affects Versions: 3.4.0
Reporter: fanshilun
Assignee: fanshilun
 Fix For: 3.4.0


While reading the logs, I noticed that the NameNode address should be included 
to make the log messages more complete.

The log is as follows:
{code:java}
2022-06-06 06:15:32,715 [BP-1990954485-172.17.0.2-1654496132136 heartbeating to 
localhost/127.0.0.1:42811] INFO  datanode.DataNode 
(BPServiceActor.java:register(819)) - Block pool 
BP-1990954485-172.17.0.2-1654496132136 (Datanode Uuid 
7d4b5459-6f2b-4203-bf6f-d31bfb9b6c3f) service to localhost/127.0.0.1:42811 
beginning handshake with NN.

2022-06-06 06:15:32,717 [BP-1990954485-172.17.0.2-1654496132136 heartbeating to 
localhost/127.0.0.1:42811] INFO  datanode.DataNode 
(BPServiceActor.java:register(847)) - Block pool 
BP-1990954485-172.17.0.2-1654496132136 (Datanode Uuid 
7d4b5459-6f2b-4203-bf6f-d31bfb9b6c3f) service to localhost/127.0.0.1:42811 
successfully registered with NN. {code}
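
A minimal sketch of the intended change (hypothetical; it assumes 
BPServiceActor keeps the NameNode address in a field or parameter such as 
nnAddr):
{code:java}
import java.net.InetSocketAddress;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class RegisterLogSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(RegisterLogSketch.class);

  // before: LOG.info(this + " beginning handshake with NN.");
  void logHandshake(InetSocketAddress nnAddr) {
    // after: append the NameNode address so the peer NN is explicit
    LOG.info("{} beginning handshake with NN: {}.", this, nnAddr);
  }
}
{code}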



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16627) improve BPServiceActor#register Log Add NN Addr

2022-06-08 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16627:
-
Summary: improve BPServiceActor#register Log Add NN Addr  (was: improve 
BPServiceActor#register Log Add NN Info)

> improve BPServiceActor#register Log Add NN Addr
> ---
>
> Key: HDFS-16627
> URL: https://issues.apache.org/jira/browse/HDFS-16627
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
> Fix For: 3.4.0
>
>
> While reading the logs, I noticed that the NameNode address should be 
> included to make the log messages more complete.
> The log is as follows:
> {code:java}
> 2022-06-06 06:15:32,715 [BP-1990954485-172.17.0.2-1654496132136 heartbeating 
> to localhost/127.0.0.1:42811] INFO  datanode.DataNode 
> (BPServiceActor.java:register(819)) - Block pool 
> BP-1990954485-172.17.0.2-1654496132136 (Datanode Uuid 
> 7d4b5459-6f2b-4203-bf6f-d31bfb9b6c3f) service to localhost/127.0.0.1:42811 
> beginning handshake with NN.
> 2022-06-06 06:15:32,717 [BP-1990954485-172.17.0.2-1654496132136 heartbeating 
> to localhost/127.0.0.1:42811] INFO  datanode.DataNode 
> (BPServiceActor.java:register(847)) - Block pool 
> BP-1990954485-172.17.0.2-1654496132136 (Datanode Uuid 
> 7d4b5459-6f2b-4203-bf6f-d31bfb9b6c3f) service to localhost/127.0.0.1:42811 
> successfully registered with NN. {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-16627) improve BPServiceActor#register Log Add NN Addr

2022-06-08 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16627 started by fanshilun.

> improve BPServiceActor#register Log Add NN Addr
> ---
>
> Key: HDFS-16627
> URL: https://issues.apache.org/jira/browse/HDFS-16627
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> While reading the logs, I noticed that the NameNode address should be 
> included to make the log messages more complete.
> The log is as follows:
> {code:java}
> 2022-06-06 06:15:32,715 [BP-1990954485-172.17.0.2-1654496132136 heartbeating 
> to localhost/127.0.0.1:42811] INFO  datanode.DataNode 
> (BPServiceActor.java:register(819)) - Block pool 
> BP-1990954485-172.17.0.2-1654496132136 (Datanode Uuid 
> 7d4b5459-6f2b-4203-bf6f-d31bfb9b6c3f) service to localhost/127.0.0.1:42811 
> beginning handshake with NN.
> 2022-06-06 06:15:32,717 [BP-1990954485-172.17.0.2-1654496132136 heartbeating 
> to localhost/127.0.0.1:42811] INFO  datanode.DataNode 
> (BPServiceActor.java:register(847)) - Block pool 
> BP-1990954485-172.17.0.2-1654496132136 (Datanode Uuid 
> 7d4b5459-6f2b-4203-bf6f-d31bfb9b6c3f) service to localhost/127.0.0.1:42811 
> successfully registered with NN. {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15506) [JDK 11] Fix javadoc errors in hadoop-hdfs module

2022-06-09 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17552478#comment-17552478
 ] 

fanshilun commented on HDFS-15506:
--

Hi [~aajisaka], I found some new javadoc compilation failures on JDK 11; I will 
fix them.
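
For reference, failures of this kind are usually fixed by making the HTML in 
the javadoc valid under the stricter JDK 11 doclint; an illustrative sketch 
based on the errors quoted below (not the actual patch):
{code:java}
class JavadocFixSketch {
  // before (malformed HTML, "<=" breaks the doclint):
  //   /** a NameNode per second. Values <= 0 disable throttling. */
  /** a NameNode per second. Values {@literal <=} 0 disable throttling. */
  int throttleLimit;

  // Other quoted errors follow the same pattern:
  //   "self-closing element not allowed": replace "<br/>" with "<br>"
  //   "exception not thrown": drop the stale "@throws FileNotFoundException" tag
}
{code}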

> [JDK 11] Fix javadoc errors in hadoop-hdfs module
> -
>
> Key: HDFS-15506
> URL: https://issues.apache.org/jira/browse/HDFS-15506
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Major
>  Labels: newbie
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15506.001.patch, HDFS-15506.002.patch
>
>
> {noformat}
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminDefaultMonitor.java:43:
>  error: self-closing element not allowed
> [ERROR]  * 
> [ERROR]^
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java:682:
>  error: malformed HTML
> [ERROR]* a NameNode per second. Values <= 0 disable throttling. This 
> affects
> [ERROR]^
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java:1780:
>  error: exception not thrown: java.io.FileNotFoundException
> [ERROR]* @throws FileNotFoundException
> [ERROR]  ^
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java:176:
>  error: @param name not found
> [ERROR]* @param mtime The snapshot creation time set by Time.now().
> [ERROR] ^
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java:2187:
>  error: exception not thrown: java.lang.Exception
> [ERROR]* @exception Exception if the filesystem does not exist.
> [ERROR] ^
> {noformat}
> Full error log: 
> https://gist.github.com/aajisaka/a0c16f0408a623e798dd7df29fbddf82
> How to reproduce the failure:
> * Remove {{true}} from pom.xml
> * Run {{mvn process-sources javadoc:javadoc-no-fork}}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16629) [JDK 11] Fix javadoc warnings in hadoop-hdfs module

2022-06-09 Thread fanshilun (Jira)
fanshilun created HDFS-16629:


 Summary: [JDK 11] Fix javadoc  warnings in hadoop-hdfs module
 Key: HDFS-16629
 URL: https://issues.apache.org/jira/browse/HDFS-16629
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Affects Versions: 3.4.0, 3.3.4
Reporter: fanshilun
Assignee: fanshilun
 Fix For: 3.4.0, 3.3.4


During compilation of the most recently committed code, a javadoc warning 
appeared, and I will fix it.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-16629) [JDK 11] Fix javadoc warnings in hadoop-hdfs module

2022-06-09 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16629 started by fanshilun.

> [JDK 11] Fix javadoc  warnings in hadoop-hdfs module
> 
>
> Key: HDFS-16629
> URL: https://issues.apache.org/jira/browse/HDFS-16629
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0, 3.3.4
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
> Fix For: 3.4.0, 3.3.4
>
>
> During compilation of the most recently committed code, a javadoc warning 
> appeared, and I will fix it.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16629) [JDK 11] Fix javadoc warnings in hadoop-hdfs module

2022-06-09 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16629:
-
Description: 
During compilation of the most recently committed code, a javadoc warning 
appeared, and I will fix it.
{code:java}
1 error
100 warnings
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  37.132 s
[INFO] Finished at: 2022-06-09T17:07:12Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc-no-fork 
(default-cli) on project hadoop-hdfs: An error has occurred in Javadoc report 
generation: 
[ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version 
as HTML 4.01 by using the -html4 option.
[ERROR] The default is currently HTML5 and the support for HTML 4.01 will be 
removed
[ERROR] in a future release. To suppress this warning, please ensure that any 
HTML constructs {code}

  was:During compilation of the most recently committed code, a javadoc warning 
appeared and I will fix it.


> [JDK 11] Fix javadoc  warnings in hadoop-hdfs module
> 
>
> Key: HDFS-16629
> URL: https://issues.apache.org/jira/browse/HDFS-16629
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0, 3.3.4
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
> Fix For: 3.4.0, 3.3.4
>
>
> During compilation of the most recently committed code, a javadoc warning 
> appeared, and I will fix it.
> {code:java}
> 1 error
> 100 warnings
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD FAILURE
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time:  37.132 s
> [INFO] Finished at: 2022-06-09T17:07:12Z
> [INFO] ------------------------------------------------------------------------
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc-no-fork 
> (default-cli) on project hadoop-hdfs: An error has occurred in Javadoc report 
> generation: 
> [ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version 
> as HTML 4.01 by using the -html4 option.
> [ERROR] The default is currently HTML5 and the support for HTML 4.01 will be 
> removed
> [ERROR] in a future release. To suppress this warning, please ensure that any 
> HTML constructs {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16563) Namenode WebUI prints sensitive information on Token Expiry

2022-06-10 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17553009#comment-17553009
 ] 

fanshilun commented on HDFS-16563:
--

causes[ MAPREDUCE-7387|https://issues.apache.org/jira/browse/MAPREDUCE-7387] 
too.

> Namenode WebUI prints sensitive information on Token Expiry
> ---
>
> Key: HDFS-16563
> URL: https://issues.apache.org/jira/browse/HDFS-16563
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namanode, security, webhdfs
>Affects Versions: 3.3.3
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
> Attachments: image-2022-04-27-23-01-16-033.png, 
> image-2022-04-27-23-28-40-568.png
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Login to Namenode WebUI.
> Wait for token to expire. (Or modify the Token refresh time 
> dfs.namenode.delegation.token.renew/update-interval to lower value)
> Refresh the WebUI after the Token expiry.
> Full token information gets printed in WebUI.
>  
> !image-2022-04-27-23-01-16-033.png!
> causes YARN-11172; all branches with this patch need that fix in too



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-16563) Namenode WebUI prints sensitive information on Token Expiry

2022-06-10 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17553009#comment-17553009
 ] 

fanshilun edited comment on HDFS-16563 at 6/11/22 12:37 AM:


causes MAPREDUCE-7387 too.


was (Author: slfan1989):
causes[ MAPREDUCE-7387|https://issues.apache.org/jira/browse/MAPREDUCE-7387] 
too.

> Namenode WebUI prints sensitive information on Token Expiry
> ---
>
> Key: HDFS-16563
> URL: https://issues.apache.org/jira/browse/HDFS-16563
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namanode, security, webhdfs
>Affects Versions: 3.3.3
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
> Attachments: image-2022-04-27-23-01-16-033.png, 
> image-2022-04-27-23-28-40-568.png
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Login to Namenode WebUI.
> Wait for token to expire. (Or modify the Token refresh time 
> dfs.namenode.delegation.token.renew/update-interval to lower value)
> Refresh the WebUI after the Token expiry.
> Full token information gets printed in WebUI.
>  
> !image-2022-04-27-23-01-16-033.png!
> causes YARN-11172; all branches with this patch need that fix in too



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16619) Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Input.

2022-06-10 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16619:
-
Summary: Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Input.  
(was: improve HttpHeaders.Values And HttpHeaders.Names With recommended Class)

> Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Input.
> --
>
> Key: HDFS-16619
> URL: https://issues.apache.org/jira/browse/HDFS-16619
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> HttpHeaders.Values and HttpHeaders.Names are deprecated, use 
> HttpHeaderValues and HttpHeaderNames instead.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16619) Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Import.

2022-06-11 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16619:
-
Summary: Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Import.  
(was: Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Input.)

> Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Import.
> ---
>
> Key: HDFS-16619
> URL: https://issues.apache.org/jira/browse/HDFS-16619
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> HttpHeaders.Values and HttpHeaders.Names are deprecated, use 
> HttpHeaderValues and HttpHeaderNames instead.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16627) Improve BPServiceActor#register log to add NameNode address

2022-06-11 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17553101#comment-17553101
 ] 

fanshilun commented on HDFS-16627:
--

[~hexiaoqiao] Thank you very much for your help reviewing the code and for your 
explanation!

> Improve BPServiceActor#register log to add NameNode address
> ---
>
> Key: HDFS-16627
> URL: https://issues.apache.org/jira/browse/HDFS-16627
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> While reading the logs, I noticed that the NameNode address should be 
> included to make the log messages more complete.
> The log is as follows:
> {code:java}
> 2022-06-06 06:15:32,715 [BP-1990954485-172.17.0.2-1654496132136 heartbeating 
> to localhost/127.0.0.1:42811] INFO  datanode.DataNode 
> (BPServiceActor.java:register(819)) - Block pool 
> BP-1990954485-172.17.0.2-1654496132136 (Datanode Uuid 
> 7d4b5459-6f2b-4203-bf6f-d31bfb9b6c3f) service to localhost/127.0.0.1:42811 
> beginning handshake with NN.
> 2022-06-06 06:15:32,717 [BP-1990954485-172.17.0.2-1654496132136 heartbeating 
> to localhost/127.0.0.1:42811] INFO  datanode.DataNode 
> (BPServiceActor.java:register(847)) - Block pool 
> BP-1990954485-172.17.0.2-1654496132136 (Datanode Uuid 
> 7d4b5459-6f2b-4203-bf6f-d31bfb9b6c3f) service to localhost/127.0.0.1:42811 
> successfully registered with NN. {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-16631) Enable dfs.datanode.lockmanager.trace In Test

2022-06-14 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16631 started by fanshilun.

> Enable dfs.datanode.lockmanager.trace In Test
> -
>
> Key: HDFS-16631
> URL: https://issues.apache.org/jira/browse/HDFS-16631
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16631) Enable dfs.datanode.lockmanager.trace In Test

2022-06-14 Thread fanshilun (Jira)
fanshilun created HDFS-16631:


 Summary: Enable dfs.datanode.lockmanager.trace In Test
 Key: HDFS-16631
 URL: https://issues.apache.org/jira/browse/HDFS-16631
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: fanshilun
Assignee: fanshilun






--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16631) Enable dfs.datanode.lockmanager.trace In Test

2022-06-14 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16631:
-
Description: 
In HDFS-16600 (Fix deadlock on DataNode side) we discussed a deadlock issue; 
that was a very meaningful discussion. While reading the log, I found the 
following:
{code:java}
2022-05-27 07:39:47,890 [Listener at localhost/36941] WARN 
datanode.DataSetLockManager (DataSetLockManager.java:lockLeakCheck(261)) -
 not open lock leak check func.{code}
Looking at the code, I found the following parameter:
{code:java}
<property>
    <name>dfs.datanode.lockmanager.trace</name>
    <value>false</value>
    <description>
      If this is true, after shut down datanode lock Manager will print all leak
      thread that not release by lock Manager. Only used for test or trace dead lock
      problem. In produce default set false, because it's have little performance loss.
    </description>
</property> {code}
I think this parameter should be enabled in the test environment, so that if a 
DN deadlock occurs, the cause can be quickly located.
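
A minimal sketch of enabling this in a test, assuming the configuration key 
quoted above:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

class LockTraceTestSketch {
  static Configuration withLockTrace() {
    Configuration conf = new HdfsConfiguration();
    // enable lock-leak tracing so a DN deadlock can be located quickly;
    // off by default because of the small performance cost noted above
    conf.setBoolean("dfs.datanode.lockmanager.trace", true);
    return conf;
  }
}
{code}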

 

> Enable dfs.datanode.lockmanager.trace In Test
> -
>
> Key: HDFS-16631
> URL: https://issues.apache.org/jira/browse/HDFS-16631
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>
> In HDFS-16600 (Fix deadlock on DataNode side) we discussed a deadlock issue; 
> that was a very meaningful discussion. While reading the log, I found the 
> following:
> {code:java}
> 2022-05-27 07:39:47,890 [Listener at localhost/36941] WARN 
> datanode.DataSetLockManager (DataSetLockManager.java:lockLeakCheck(261)) -
>  not open lock leak check func.{code}
> Looking at the code, I found the following parameter:
> {code:java}
> <property>
>     <name>dfs.datanode.lockmanager.trace</name>
>     <value>false</value>
>     <description>
>       If this is true, after shut down datanode lock Manager will print all leak
>       thread that not release by lock Manager. Only used for test or trace dead lock
>       problem. In produce default set false, because it's have little performance loss.
>     </description>
> </property> {code}
> I think this parameter should be enabled in the test environment, so that if 
> a DN deadlock occurs, the cause can be quickly located.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16619) Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Import.

2022-06-14 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16619:
-
Attachment: Fix HttpHeaders.Values And HttpHeaders.Names Deprecated.png

> Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Import.
> ---
>
> Key: HDFS-16619
> URL: https://issues.apache.org/jira/browse/HDFS-16619
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Fix HttpHeaders.Values And HttpHeaders.Names 
> Deprecated.png
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> HttpHeaders.Values ​​and HttpHeaders.Names are deprecated, use 
> HttpHeaderValues ​​and HttpHeaderNames instead.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16619) Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Import.

2022-06-14 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16619:
-
Description: 
HttpHeaders.Values and HttpHeaders.Names are deprecated, use HttpHeaderValues 
and HttpHeaderNames instead.

HttpHeaders.Names
{code:java}
/** @deprecated */
@Deprecated
public static final class Names {
  public static final String ACCEPT = "Accept";
  public static final String ACCEPT_CHARSET = "Accept-Charset";
  public static final String ACCEPT_ENCODING = "Accept-Encoding";
  public static final String ACCEPT_LANGUAGE = "Accept-Language";
  public static final String ACCEPT_RANGES = "Accept-Ranges";
  public static final String ACCEPT_PATCH = "Accept-Patch";
  public static final String ACCESS_CONTROL_ALLOW_CREDENTIALS = 
"Access-Control-Allow-Credentials";
  public static final String ACCESS_CONTROL_ALLOW_HEADERS = 
"Access-Control-Allow-Headers"; {code}

  was:HttpHeaders.Values and HttpHeaders.Names are deprecated, use 
HttpHeaderValues and HttpHeaderNames instead.


> Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Import.
> ---
>
> Key: HDFS-16619
> URL: https://issues.apache.org/jira/browse/HDFS-16619
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Fix HttpHeaders.Values And HttpHeaders.Names 
> Deprecated.png
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> HttpHeaders.Values and HttpHeaders.Names are deprecated, use 
> HttpHeaderValues and HttpHeaderNames instead.
> HttpHeaders.Names
> {code:java}
> /** @deprecated */
> @Deprecated
> public static final class Names {
>   public static final String ACCEPT = "Accept";
>   public static final String ACCEPT_CHARSET = "Accept-Charset";
>   public static final String ACCEPT_ENCODING = "Accept-Encoding";
>   public static final String ACCEPT_LANGUAGE = "Accept-Language";
>   public static final String ACCEPT_RANGES = "Accept-Ranges";
>   public static final String ACCEPT_PATCH = "Accept-Patch";
>   public static final String ACCESS_CONTROL_ALLOW_CREDENTIALS = 
> "Access-Control-Allow-Credentials";
>   public static final String ACCESS_CONTROL_ALLOW_HEADERS = 
> "Access-Control-Allow-Headers"; {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16619) Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Import.

2022-06-14 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16619:
-
Description: 
HttpHeaders.Values and HttpHeaders.Names are deprecated, use HttpHeaderValues 
and HttpHeaderNames instead.

HttpHeaders.Names
{code:java}
/** @deprecated */
@Deprecated
public static final class Names {
  public static final String ACCEPT = "Accept";
  public static final String ACCEPT_CHARSET = "Accept-Charset";
  public static final String ACCEPT_ENCODING = "Accept-Encoding";
  public static final String ACCEPT_LANGUAGE = "Accept-Language";
  public static final String ACCEPT_RANGES = "Accept-Ranges";
  public static final String ACCEPT_PATCH = "Accept-Patch";
  public static final String ACCESS_CONTROL_ALLOW_CREDENTIALS = 
"Access-Control-Allow-Credentials";
  public static final String ACCESS_CONTROL_ALLOW_HEADERS = 
"Access-Control-Allow-Headers"; {code}
HttpHeaders.Values
{code:java}
/** @deprecated */
@Deprecated
public static final class Values {
  public static final String APPLICATION_JSON = "application/json";
  public static final String APPLICATION_X_WWW_FORM_URLENCODED = 
"application/x-www-form-urlencoded";
  public static final String BASE64 = "base64";
  public static final String BINARY = "binary";
  public static final String BOUNDARY = "boundary";
  public static final String BYTES = "bytes";
  public static final String CHARSET = "charset";
  public static final String CHUNKED = "chunked";
  public static final String CLOSE = "close"; {code}

  was:
HttpHeaders.Values and HttpHeaders.Names are deprecated, use HttpHeaderValues 
and HttpHeaderNames instead.

HttpHeaders.Names
{code:java}
/** @deprecated */
@Deprecated
public static final class Names {
  public static final String ACCEPT = "Accept";
  public static final String ACCEPT_CHARSET = "Accept-Charset";
  public static final String ACCEPT_ENCODING = "Accept-Encoding";
  public static final String ACCEPT_LANGUAGE = "Accept-Language";
  public static final String ACCEPT_RANGES = "Accept-Ranges";
  public static final String ACCEPT_PATCH = "Accept-Patch";
  public static final String ACCESS_CONTROL_ALLOW_CREDENTIALS = 
"Access-Control-Allow-Credentials";
  public static final String ACCESS_CONTROL_ALLOW_HEADERS = 
"Access-Control-Allow-Headers"; {code}


> Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Import.
> ---
>
> Key: HDFS-16619
> URL: https://issues.apache.org/jira/browse/HDFS-16619
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Fix HttpHeaders.Values And HttpHeaders.Names 
> Deprecated.png
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> HttpHeaders.Values and HttpHeaders.Names are deprecated, use 
> HttpHeaderValues and HttpHeaderNames instead.
> HttpHeaders.Names
> {code:java}
> /** @deprecated */
> @Deprecated
> public static final class Names {
>   public static final String ACCEPT = "Accept";
>   public static final String ACCEPT_CHARSET = "Accept-Charset";
>   public static final String ACCEPT_ENCODING = "Accept-Encoding";
>   public static final String ACCEPT_LANGUAGE = "Accept-Language";
>   public static final String ACCEPT_RANGES = "Accept-Ranges";
>   public static final String ACCEPT_PATCH = "Accept-Patch";
>   public static final String ACCESS_CONTROL_ALLOW_CREDENTIALS = 
> "Access-Control-Allow-Credentials";
>   public static final String ACCESS_CONTROL_ALLOW_HEADERS = 
> "Access-Control-Allow-Headers"; {code}
> HttpHeaders.Values
> {code:java}
> /** @deprecated */
> @Deprecated
> public static final class Values {
>   public static final String APPLICATION_JSON = "application/json";
>   public static final String APPLICATION_X_WWW_FORM_URLENCODED = 
> "application/x-www-form-urlencoded";
>   public static final String BASE64 = "base64";
>   public static final String BINARY = "binary";
>   public static final String BOUNDARY = "boundary";
>   public static final String BYTES = "bytes";
>   public static final String CHARSET = "charset";
>   public static final String CHUNKED = "chunked";
>   public static final String CLOSE = "close"; {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16619) Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Import.

2022-06-14 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16619:
-
Description: 
HttpHeaders.Values and HttpHeaders.Names are deprecated, use HttpHeaderValues 
and HttpHeaderNames instead.

HttpHeaders.Names

Deprecated. 
Use HttpHeaderNames instead. Standard HTTP header names.
{code:java}
/** @deprecated */
@Deprecated
public static final class Names {
  public static final String ACCEPT = "Accept";
  public static final String ACCEPT_CHARSET = "Accept-Charset";
  public static final String ACCEPT_ENCODING = "Accept-Encoding";
  public static final String ACCEPT_LANGUAGE = "Accept-Language";
  public static final String ACCEPT_RANGES = "Accept-Ranges";
  public static final String ACCEPT_PATCH = "Accept-Patch";
  public static final String ACCESS_CONTROL_ALLOW_CREDENTIALS = 
"Access-Control-Allow-Credentials";
  public static final String ACCESS_CONTROL_ALLOW_HEADERS = 
"Access-Control-Allow-Headers"; {code}
HttpHeaders.Values
Deprecated. 
Use HttpHeaderValues instead. Standard HTTP header values.
{code:java}
/** @deprecated */
@Deprecated
public static final class Values {
  public static final String APPLICATION_JSON = "application/json";
  public static final String APPLICATION_X_WWW_FORM_URLENCODED = 
"application/x-www-form-urlencoded";
  public static final String BASE64 = "base64";
  public static final String BINARY = "binary";
  public static final String BOUNDARY = "boundary";
  public static final String BYTES = "bytes";
  public static final String CHARSET = "charset";
  public static final String CHUNKED = "chunked";
  public static final String CLOSE = "close"; {code}

  was:
HttpHeaders.Values and HttpHeaders.Names are deprecated, use HttpHeaderValues 
and HttpHeaderNames instead.

HttpHeaders.Names
{code:java}
/** @deprecated */
@Deprecated
public static final class Names {
  public static final String ACCEPT = "Accept";
  public static final String ACCEPT_CHARSET = "Accept-Charset";
  public static final String ACCEPT_ENCODING = "Accept-Encoding";
  public static final String ACCEPT_LANGUAGE = "Accept-Language";
  public static final String ACCEPT_RANGES = "Accept-Ranges";
  public static final String ACCEPT_PATCH = "Accept-Patch";
  public static final String ACCESS_CONTROL_ALLOW_CREDENTIALS = 
"Access-Control-Allow-Credentials";
  public static final String ACCESS_CONTROL_ALLOW_HEADERS = 
"Access-Control-Allow-Headers"; {code}
HttpHeaders.Values
{code:java}
/** @deprecated */
@Deprecated
public static final class Values {
  public static final String APPLICATION_JSON = "application/json";
  public static final String APPLICATION_X_WWW_FORM_URLENCODED = 
"application/x-www-form-urlencoded";
  public static final String BASE64 = "base64";
  public static final String BINARY = "binary";
  public static final String BOUNDARY = "boundary";
  public static final String BYTES = "bytes";
  public static final String CHARSET = "charset";
  public static final String CHUNKED = "chunked";
  public static final String CLOSE = "close"; {code}


> Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Import.
> ---
>
> Key: HDFS-16619
> URL: https://issues.apache.org/jira/browse/HDFS-16619
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Fix HttpHeaders.Values And HttpHeaders.Names 
> Deprecated.png
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> HttpHeaders.Values and HttpHeaders.Names are deprecated, use 
> HttpHeaderValues and HttpHeaderNames instead.
> HttpHeaders.Names
> Deprecated. 
> Use HttpHeaderNames instead. Standard HTTP header names.
> {code:java}
> /** @deprecated */
> @Deprecated
> public static final class Names {
>   public static final String ACCEPT = "Accept";
>   public static final String ACCEPT_CHARSET = "Accept-Charset";
>   public static final String ACCEPT_ENCODING = "Accept-Encoding";
>   public static final String ACCEPT_LANGUAGE = "Accept-Language";
>   public static final String ACCEPT_RANGES = "Accept-Ranges";
>   public static final String ACCEPT_PATCH = "Accept-Patch";
>   public static final String ACCESS_CONTROL_ALLOW_CREDENTIALS = 
> "Access-Control-Allow-Credentials";
>   public static final String ACCESS_CONTROL_ALLOW_HEADERS = 
> "Access-Control-Allow-Headers"; {code}
> HttpHeaders.Values
> Deprecated. 
> Use HttpHeaderValues instead. Standard HTTP header values.
> {code:java}
> /** @deprecated */
> @Deprecated
> public static final class Values {
>   public static final String APPLICATION_JSON = "application/json";
>   public static final String APPLICATION_X_WWW_FORM_URLENCODED = 
>

[jira] [Resolved] (HDFS-16629) [JDK 11] Fix javadoc warnings in hadoop-hdfs module

2022-06-17 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun resolved HDFS-16629.
--
Resolution: Fixed

> [JDK 11] Fix javadoc  warnings in hadoop-hdfs module
> 
>
> Key: HDFS-16629
> URL: https://issues.apache.org/jira/browse/HDFS-16629
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0, 3.3.4
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> During compilation of the most recently committed code, a javadoc warning 
> appeared, and I will fix it.
> {code:java}
> 1 error
> 100 warnings
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD FAILURE
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time:  37.132 s
> [INFO] Finished at: 2022-06-09T17:07:12Z
> [INFO] ------------------------------------------------------------------------
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc-no-fork 
> (default-cli) on project hadoop-hdfs: An error has occurred in Javadoc report 
> generation: 
> [ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version 
> as HTML 4.01 by using the -html4 option.
> [ERROR] The default is currently HTML5 and the support for HTML 4.01 will be 
> removed
> [ERROR] in a future release. To suppress this warning, please ensure that any 
> HTML constructs {code}
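A hedged before/after illustration of the kind of cleanup this implies; the 
class and method are hypothetical, not taken from the patch. The JDK 11 HTML5 
doclet warns on HTML 4 constructs such as <tt> in doc comments, which are 
replaced with {@code}:
{code:java}
public final class JavadocHtml5Sketch {
  private final long blockSize = 128L << 20; // illustrative 128 MiB default

  // Before (HTML 4, warned about by the HTML5 doclet):
  //   /** Returns the configured <tt>dfs.blocksize</tt> in bytes. */
  /** Returns the configured {@code dfs.blocksize} in bytes. */
  public long getBlockSize() {
    return blockSize;
  }
}
{code}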



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16631) Enable dfs.datanode.lockmanager.trace In Test

2022-06-17 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16631:
-
Description: 
In HDFS-16600 (Fix deadlock on DataNode side) we discussed a deadlock issue; 
it was a very meaningful discussion. While reading the log I found the 
following:
{code:java}
2022-05-27 07:39:47,890 [Listener at localhost/36941] WARN 
datanode.DataSetLockManager (DataSetLockManager.java:lockLeakCheck(261)) -
 not open lock leak check func.{code}
Looking at the code, I found the following parameter:
{code:java}
<property>
  <name>dfs.datanode.lockmanager.trace</name>
  <value>false</value>
  <description>
    If this is true, after shut down datanode lock Manager will print all leak
    thread that not release by lock Manager. Only used for test or trace dead 
    lock problem. In produce default set false, because it's have little 
    performance loss.
  </description>
</property>{code}
I think this parameter should be enabled in the test environment, so that if 
a DN deadlock occurs, the cause can be located quickly.

Based on review suggestions, the following changes are made (a minimal sketch 
follows this list):

1. Add an operation name to the read/write lock methods of DataSetLockManager 
to clearly indicate where each lock is taken, which also makes the methods 
convenient to share.
2. Add finer-grained lock metrics, including the number of lock acquisitions, 
lock hold times, and warnings for locks held too long.
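A minimal sketch of the two changes above; all names are hypothetical and 
this is not the DataSetLockManager API. Each write-lock acquisition is tagged 
with an operation name, counted, and timed, with a warning when the hold time 
exceeds a threshold:
{code:java}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class NamedLockSketch {
  private static final long WARN_HOLD_MS = 5000; // assumed warning threshold
  private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock(true);
  private final AtomicLong writeAcquireCount = new AtomicLong();

  /** Runs {@code action} under the write lock, tagged with an operation name. */
  public void withWriteLock(String opName, Runnable action) {
    rw.writeLock().lock();
    long start = System.nanoTime();
    try {
      writeAcquireCount.incrementAndGet(); // metric: number of acquisitions
      action.run();
    } finally {
      rw.writeLock().unlock();
      long heldMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
      if (heldMs > WARN_HOLD_MS) { // metric: early warning for long holds
        System.err.printf("write lock for op %s held %d ms%n", opName, heldMs);
      }
    }
  }
}
{code}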

 

  was:
In Jira HDFS-16600. Fix deadlock on DataNode side. We discussed the issue of 
deadlock, this is a very meaningful discussion, I was reading the log and found 
the following:
{code:java}
2022-05-27 07:39:47,890 [Listener at localhost/36941] WARN 
datanode.DataSetLockManager (DataSetLockManager.java:lockLeakCheck(261)) -
 not open lock leak check func.{code}
Looking at the code, I found that there is such a parameter:
{code:java}
<property>
  <name>dfs.datanode.lockmanager.trace</name>
  <value>false</value>
  <description>
    If this is true, after shut down datanode lock Manager will print all leak
    thread that not release by lock Manager. Only used for test or trace dead 
    lock problem. In produce default set false, because it's have little 
    performance loss.
  </description>
</property>{code}
I think this parameter should be added in the test environment, so that if 
there is a DN deadlock, the cause can be quickly located.

 


> Enable dfs.datanode.lockmanager.trace In Test
> -
>
> Key: HDFS-16631
> URL: https://issues.apache.org/jira/browse/HDFS-16631
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2022-06-18-09-49-28-725.png
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> In HDFS-16600 (Fix deadlock on DataNode side) we discussed a deadlock 
> issue; it was a very meaningful discussion. While reading the log I found 
> the following:
> {code:java}
> 2022-05-27 07:39:47,890 [Listener at localhost/36941] WARN 
> datanode.DataSetLockManager (DataSetLockManager.java:lockLeakCheck(261)) -
>  not open lock leak check func.{code}
> Looking at the code, I found the following parameter:
> {code:java}
> <property>
>   <name>dfs.datanode.lockmanager.trace</name>
>   <value>false</value>
>   <description>
>     If this is true, after shut down datanode lock Manager will print all 
>     leak thread that not release by lock Manager. Only used for test or 
>     trace dead lock problem. In produce default set false, because it's 
>     have little performance loss.
>   </description>
> </property>{code}
> I think this parameter should be enabled in the test environment, so that 
> if a DN deadlock occurs, the cause can be located quickly.
> Based on review suggestions, the following changes are made:
> 1. Add an operation name to the read/write lock methods of 
> DataSetLockManager to clearly indicate where each lock is taken, which 
> also makes the methods convenient to share.
> 2. Add finer-grained lock metrics, including the number of lock 
> acquisitions, lock hold times, and warnings for locks held too long.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16631) Enable dfs.datanode.lockmanager.trace In Test

2022-06-17 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16631:
-
Attachment: image-2022-06-18-09-49-28-725.png

> Enable dfs.datanode.lockmanager.trace In Test
> -
>
> Key: HDFS-16631
> URL: https://issues.apache.org/jira/browse/HDFS-16631
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2022-06-18-09-49-28-725.png
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> In HDFS-16600 (Fix deadlock on DataNode side) we discussed a deadlock 
> issue; it was a very meaningful discussion. While reading the log I found 
> the following:
> {code:java}
> 2022-05-27 07:39:47,890 [Listener at localhost/36941] WARN 
> datanode.DataSetLockManager (DataSetLockManager.java:lockLeakCheck(261)) -
>  not open lock leak check func.{code}
> Looking at the code, I found the following parameter:
> {code:java}
> <property>
>   <name>dfs.datanode.lockmanager.trace</name>
>   <value>false</value>
>   <description>
>     If this is true, after shut down datanode lock Manager will print all 
>     leak thread that not release by lock Manager. Only used for test or 
>     trace dead lock problem. In produce default set false, because it's 
>     have little performance loss.
>   </description>
> </property>{code}
> I think this parameter should be enabled in the test environment, so that 
> if a DN deadlock occurs, the cause can be located quickly.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16694) Fix missing package-info in hadoop-hdfs module.

2022-07-27 Thread fanshilun (Jira)
fanshilun created HDFS-16694:


 Summary: Fix missing package-info in hadoop-hdfs module.
 Key: HDFS-16694
 URL: https://issues.apache.org/jira/browse/HDFS-16694
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.4.0, 3.3.4
Reporter: fanshilun
Assignee: fanshilun
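For context, a hedged sketch of the kind of file this issue adds; the package 
name and javadoc text are hypothetical, but the annotation pattern is the one 
Hadoop uses in its package-info.java files:
{code:java}
/**
 * Illustrative package javadoc; a real file describes the actual package.
 */
@InterfaceAudience.Private
@InterfaceStability.Unstable
package org.apache.hadoop.hdfs.example;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
{code}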






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2139) Fast copy for HDFS.

2022-08-24 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17584145#comment-17584145
 ] 

fanshilun commented on HDFS-2139:
-

[~xuzq_zander]

Very happy that this feature can be restarted, but there are the following 
problems:
  1. Is there enough performance test data for HDFS-15294? What is the 
expected performance improvement of HDFS-2139 once it is implemented?
  2. The task planning in the design document does not seem very clear. Can 
you explain in detail the specific changes each task covers?



 

> Fast copy for HDFS.
> ---
>
> Key: HDFS-2139
> URL: https://issues.apache.org/jira/browse/HDFS-2139
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Pritam Damania
>Assignee: ZanderXu
>Priority: Major
> Attachments: HDFS-2139-For-2.7.1.patch, HDFS-2139.patch, 
> HDFS-2139.patch, image-2022-08-11-11-48-17-994.png
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> There is a need to perform fast file copy on HDFS. The fast copy mechanism 
> for a file works as
> follows :
> 1) Query metadata for all blocks of the source file.
> 2) For each block 'b' of the file, find out its datanode locations.
> 3) For each block of the file, add an empty block to the namesystem for
> the destination file.
> 4) For each location of the block, instruct the datanode to make a local
> copy of that block.
> 5) Once each datanode has copied over its respective blocks, they
> report to the namenode about it.
> 6) Wait for all blocks to be copied and exit.
> This would speed up the copying process considerably by removing top of
> the rack data transfers.
> Note : An extra improvement, would be to instruct the datanode to create a
> hardlink of the block file if we are copying a block on the same datanode
> [~xuzq_zander]Provided a design doc 
> https://docs.google.com/document/d/1OHdUpQmKD3TZ3xdmQsXNmlXJetn2QFPinMH31Q4BqkI/edit?usp=sharing
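To make the six steps concrete, a hedged driver-loop sketch; every type and 
method below is a hypothetical stand-in, defined only so the sketch compiles, 
and the real protocol work is what the design doc and subtasks spell out:
{code:java}
import java.util.List;

// Hypothetical stand-ins for the client-side pieces of fast copy.
interface BlockInfo {
  List<String> getDatanodes();                                  // step 2
}

interface FastCopyClient {
  List<BlockInfo> getBlocks(String src) throws Exception;       // step 1
  BlockInfo addEmptyBlock(String dst, BlockInfo src) throws Exception; // step 3
  void requestLocalCopy(String datanode, BlockInfo src, BlockInfo dst)
      throws Exception;                                         // step 4
  void awaitBlockReports(String dst) throws Exception;          // steps 5-6
}

final class FastCopySketch {
  static void fastCopy(FastCopyClient c, String src, String dst)
      throws Exception {
    for (BlockInfo b : c.getBlocks(src)) {        // 1) query source metadata
      BlockInfo target = c.addEmptyBlock(dst, b); // 3) add empty block for dst
      for (String dn : b.getDatanodes()) {        // 2) each replica location
        c.requestLocalCopy(dn, b, target);        // 4) datanode-local copy
      }
    }
    c.awaitBlockReports(dst);                     // 5) wait for reports, 6) exit
  }
}
{code}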



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-2139) Fast copy for HDFS.

2022-08-24 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17584145#comment-17584145
 ] 

fanshilun edited comment on HDFS-2139 at 8/24/22 9:42 AM:
--

[~xuzq_zander]

Very happy that this feature can be restarted, but there are the following 
questions:
  1. Is there enough performance test data for HDFS-15294? What is the 
expected performance improvement of HDFS-2139 once it is implemented?
  2. The task planning in the design document does not seem very clear. Can 
you explain in detail the specific changes each task covers?

 


was (Author: slfan1989):
[~xuzq_zander]

Very happy that this feature can be restarted, but there are the following 
problems:
  1. Is there enough performance test data for HDFS-15294? What is the expected 
performance improvement of HDFS-2139 after implementation? 
  2. It seems that the planning of tasks in the design document is not very 
clear. Can you explain the specific transformation content of each task in 
detail?



 

> Fast copy for HDFS.
> ---
>
> Key: HDFS-2139
> URL: https://issues.apache.org/jira/browse/HDFS-2139
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Pritam Damania
>Assignee: ZanderXu
>Priority: Major
> Attachments: HDFS-2139-For-2.7.1.patch, HDFS-2139.patch, 
> HDFS-2139.patch, image-2022-08-11-11-48-17-994.png
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> There is a need to perform fast file copy on HDFS. The fast copy mechanism 
> for a file works as
> follows :
> 1) Query metadata for all blocks of the source file.
> 2) For each block 'b' of the file, find out its datanode locations.
> 3) For each block of the file, add an empty block to the namesystem for
> the destination file.
> 4) For each location of the block, instruct the datanode to make a local
> copy of that block.
> 5) Once each datanode has copied over its respective blocks, they
> report to the namenode about it.
> 6) Wait for all blocks to be copied and exit.
> This would speed up the copying process considerably by removing top of
> the rack data transfers.
> Note : An extra improvement, would be to instruct the datanode to create a
> hardlink of the block file if we are copying a block on the same datanode
> [~xuzq_zander]Provided a design doc 
> https://docs.google.com/document/d/1OHdUpQmKD3TZ3xdmQsXNmlXJetn2QFPinMH31Q4BqkI/edit?usp=sharing



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-2139) Fast copy for HDFS.

2022-08-24 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17584145#comment-17584145
 ] 

fanshilun edited comment on HDFS-2139 at 8/24/22 9:44 AM:
--

[~xuzq_zander]

Very happy that this feature can be restarted, but there are the following 
questions:
  1. Is there enough performance test data for HDFS-15294? What is the 
expected performance improvement of HDFS-2139 once it is implemented?
  2. The task planning in the design document does not seem very clear. Can 
you explain in detail the specific changes each task covers?

 

Task 1: Add a new method LocalBlockCopyViaHardLink to the Datanode.

This doesn't seem to be described in the design document.
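Just to make the question concrete, a hedged sketch of what a hardlink-based 
local block copy could look like; the method name and paths are hypothetical:
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

final class HardLinkCopySketch {
  /**
   * Links dstBlock to srcBlock's data instead of rewriting it. Both paths
   * must be on the same volume, which is the same-datanode case above.
   */
  static void localBlockCopyViaHardLink(Path srcBlock, Path dstBlock)
      throws IOException {
    Files.createDirectories(dstBlock.getParent());
    Files.createLink(dstBlock, srcBlock); // new link first, existing file second
  }
}
{code}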


was (Author: slfan1989):
[~xuzq_zander]

Very happy that this feature can be restarted, but there are the following 
question:
  1. Is there enough performance test data for HDFS-15294? What is the expected 
performance improvement of HDFS-2139 after implementation? 
  2. It seems that the planning of tasks in the design document is not very 
clear. Can you explain the specific transformation content of each task in 
detail?

 

> Fast copy for HDFS.
> ---
>
> Key: HDFS-2139
> URL: https://issues.apache.org/jira/browse/HDFS-2139
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Pritam Damania
>Assignee: ZanderXu
>Priority: Major
> Attachments: HDFS-2139-For-2.7.1.patch, HDFS-2139.patch, 
> HDFS-2139.patch, image-2022-08-11-11-48-17-994.png
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> There is a need to perform fast file copy on HDFS. The fast copy mechanism 
> for a file works as
> follows :
> 1) Query metadata for all blocks of the source file.
> 2) For each block 'b' of the file, find out its datanode locations.
> 3) For each block of the file, add an empty block to the namesystem for
> the destination file.
> 4) For each location of the block, instruct the datanode to make a local
> copy of that block.
> 5) Once each datanode has copied over its respective blocks, they
> report to the namenode about it.
> 6) Wait for all blocks to be copied and exit.
> This would speed up the copying process considerably by removing top of
> the rack data transfers.
> Note : An extra improvement, would be to instruct the datanode to create a
> hardlink of the block file if we are copying a block on the same datanode
> [~xuzq_zander]Provided a design doc 
> https://docs.google.com/document/d/1OHdUpQmKD3TZ3xdmQsXNmlXJetn2QFPinMH31Q4BqkI/edit?usp=sharing



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2139) Fast copy for HDFS.

2022-08-24 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17584147#comment-17584147
 ] 

fanshilun commented on HDFS-2139:
-

[~ferhui] Personally, since this jira has helped a lot of people, I think we 
should keep its original Assignee. Should we create subtasks and assign those 
instead?

> Fast copy for HDFS.
> ---
>
> Key: HDFS-2139
> URL: https://issues.apache.org/jira/browse/HDFS-2139
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Pritam Damania
>Assignee: ZanderXu
>Priority: Major
> Attachments: HDFS-2139-For-2.7.1.patch, HDFS-2139.patch, 
> HDFS-2139.patch, image-2022-08-11-11-48-17-994.png
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> There is a need to perform fast file copy on HDFS. The fast copy mechanism 
> for a file works as
> follows :
> 1) Query metadata for all blocks of the source file.
> 2) For each block 'b' of the file, find out its datanode locations.
> 3) For each block of the file, add an empty block to the namesystem for
> the destination file.
> 4) For each location of the block, instruct the datanode to make a local
> copy of that block.
> 5) Once each datanode has copied over its respective blocks, they
> report to the namenode about it.
> 6) Wait for all blocks to be copied and exit.
> This would speed up the copying process considerably by removing top of
> the rack data transfers.
> Note : An extra improvement, would be to instruct the datanode to create a
> hardlink of the block file if we are copying a block on the same datanode
> [~xuzq_zander]Provided a design doc 
> https://docs.google.com/document/d/1OHdUpQmKD3TZ3xdmQsXNmlXJetn2QFPinMH31Q4BqkI/edit?usp=sharing



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-2139) Fast copy for HDFS.

2022-08-24 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17584147#comment-17584147
 ] 

fanshilun edited comment on HDFS-2139 at 8/24/22 9:46 AM:
--

[~ferhui] [~xuzq_zander] Personally, since this jira has helped a lot of 
people, I think we should keep its original Assignee. Should we create 
subtasks and assign those instead?


was (Author: slfan1989):
[~ferhui] Personally, this jira has helped a lot of people, I think we should 
keep the original Assignee of this jira, should we create subtasks and assign 
them?

> Fast copy for HDFS.
> ---
>
> Key: HDFS-2139
> URL: https://issues.apache.org/jira/browse/HDFS-2139
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Pritam Damania
>Assignee: ZanderXu
>Priority: Major
> Attachments: HDFS-2139-For-2.7.1.patch, HDFS-2139.patch, 
> HDFS-2139.patch, image-2022-08-11-11-48-17-994.png
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> There is a need to perform fast file copy on HDFS. The fast copy mechanism 
> for a file works as
> follows :
> 1) Query metadata for all blocks of the source file.
> 2) For each block 'b' of the file, find out its datanode locations.
> 3) For each block of the file, add an empty block to the namesystem for
> the destination file.
> 4) For each location of the block, instruct the datanode to make a local
> copy of that block.
> 5) Once each datanode has copied over its respective blocks, they
> report to the namenode about it.
> 6) Wait for all blocks to be copied and exit.
> This would speed up the copying process considerably by removing top of
> the rack data transfers.
> Note : An extra improvement, would be to instruct the datanode to create a
> hardlink of the block file if we are copying a block on the same datanode
> [~xuzq_zander]Provided a design doc 
> https://docs.google.com/document/d/1OHdUpQmKD3TZ3xdmQsXNmlXJetn2QFPinMH31Q4BqkI/edit?usp=sharing



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-2139) Fast copy for HDFS.

2022-08-24 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17584147#comment-17584147
 ] 

fanshilun edited comment on HDFS-2139 at 8/25/22 2:12 AM:
--

[~ferhui] [~xuzq_zander] Personally, since this jira has helped a lot of 
people, I think we should keep its original Assignee. Should we create 
subtasks and assign those instead?

Just a personal opinion; we should stay focused on the fastcopy feature 
itself. I hope this feature can help more partners who use hdfs. Thanks again 
[~ferhui] [~xuzq_zander] for his contribution to this feature.

 


was (Author: slfan1989):
[~ferhui] [~xuzq_zander] Personally, this jira has helped a lot of people, I 
think we should keep the original Assignee of this jira, should we create 
subtasks and assign them?

> Fast copy for HDFS.
> ---
>
> Key: HDFS-2139
> URL: https://issues.apache.org/jira/browse/HDFS-2139
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Pritam Damania
>Assignee: ZanderXu
>Priority: Major
> Attachments: HDFS-2139-For-2.7.1.patch, HDFS-2139.patch, 
> HDFS-2139.patch, image-2022-08-11-11-48-17-994.png
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> There is a need to perform fast file copy on HDFS. The fast copy mechanism 
> for a file works as
> follows :
> 1) Query metadata for all blocks of the source file.
> 2) For each block 'b' of the file, find out its datanode locations.
> 3) For each block of the file, add an empty block to the namesystem for
> the destination file.
> 4) For each location of the block, instruct the datanode to make a local
> copy of that block.
> 5) Once each datanode has copied over its respective blocks, they
> report to the namenode about it.
> 6) Wait for all blocks to be copied and exit.
> This would speed up the copying process considerably by removing top of
> the rack data transfers.
> Note : An extra improvement, would be to instruct the datanode to create a
> hardlink of the block file if we are copying a block on the same datanode
> [~xuzq_zander]Provided a design doc 
> https://docs.google.com/document/d/1OHdUpQmKD3TZ3xdmQsXNmlXJetn2QFPinMH31Q4BqkI/edit?usp=sharing



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-2139) Fast copy for HDFS.

2022-08-24 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17584147#comment-17584147
 ] 

fanshilun edited comment on HDFS-2139 at 8/25/22 2:13 AM:
--

[~ferhui] [~xuzq_zander] Personally, since this jira has helped a lot of 
people, I think we should keep its original Assignee. Should we create 
subtasks and assign those instead?

The above is just a personal opinion, not the key point.

We should stay focused on the fastcopy feature itself. I hope this feature 
can help more partners who use hdfs. Thanks again [~ferhui] [~xuzq_zander] 
for his contribution to this feature.

 


was (Author: slfan1989):
[~ferhui] [~xuzq_zander] Personally, this jira has helped a lot of people, I 
think we should keep the original Assignee of this jira, should we create 
subtasks and assign them?

Just a personal opinion, We still focus on the fastcopy feature itself, I hope 
this feature can help more partners who use hdfs, thanks again [~ferhui] 
[~xuzq_zander]  for his contribution to this feature.

 

> Fast copy for HDFS.
> ---
>
> Key: HDFS-2139
> URL: https://issues.apache.org/jira/browse/HDFS-2139
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Pritam Damania
>Assignee: ZanderXu
>Priority: Major
> Attachments: HDFS-2139-For-2.7.1.patch, HDFS-2139.patch, 
> HDFS-2139.patch, image-2022-08-11-11-48-17-994.png
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> There is a need to perform fast file copy on HDFS. The fast copy mechanism 
> for a file works as
> follows :
> 1) Query metadata for all blocks of the source file.
> 2) For each block 'b' of the file, find out its datanode locations.
> 3) For each block of the file, add an empty block to the namesystem for
> the destination file.
> 4) For each location of the block, instruct the datanode to make a local
> copy of that block.
> 5) Once each datanode has copied over its respective blocks, they
> report to the namenode about it.
> 6) Wait for all blocks to be copied and exit.
> This would speed up the copying process considerably by removing top of
> the rack data transfers.
> Note : An extra improvement, would be to instruct the datanode to create a
> hardlink of the block file if we are copying a block on the same datanode
> [~xuzq_zander]Provided a design doc 
> https://docs.google.com/document/d/1OHdUpQmKD3TZ3xdmQsXNmlXJetn2QFPinMH31Q4BqkI/edit?usp=sharing



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-2139) Fast copy for HDFS.

2022-08-24 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17584147#comment-17584147
 ] 

fanshilun edited comment on HDFS-2139 at 8/25/22 2:16 AM:
--

[~ferhui] [~xuzq_zander] Personally, since this jira has helped a lot of 
people, I think we should keep its original Assignee. Should we create 
subtasks and assign those instead?

The above is just a personal opinion, not the key point.

We should stay focused on the fastcopy feature itself. I hope this feature 
can help more partners who use hdfs. Thanks again [~ferhui] [~xuzq_zander] 
for your contribution to this feature.

 


was (Author: slfan1989):
[~ferhui] [~xuzq_zander] Personally, this jira has helped a lot of people, I 
think we should keep the original Assignee of this jira, should we create 
subtasks and assign them?

The above is just a personal opinion, not the key point.

We still focus on the fastcopy feature itself, I hope this feature can help 
more partners who use hdfs, thanks again [~ferhui] [~xuzq_zander]  for his 
contribution to this feature.

 

> Fast copy for HDFS.
> ---
>
> Key: HDFS-2139
> URL: https://issues.apache.org/jira/browse/HDFS-2139
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Pritam Damania
>Assignee: ZanderXu
>Priority: Major
> Attachments: HDFS-2139-For-2.7.1.patch, HDFS-2139.patch, 
> HDFS-2139.patch, image-2022-08-11-11-48-17-994.png
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> There is a need to perform fast file copy on HDFS. The fast copy mechanism 
> for a file works as
> follows :
> 1) Query metadata for all blocks of the source file.
> 2) For each block 'b' of the file, find out its datanode locations.
> 3) For each block of the file, add an empty block to the namesystem for
> the destination file.
> 4) For each location of the block, instruct the datanode to make a local
> copy of that block.
> 5) Once each datanode has copied over its respective blocks, they
> report to the namenode about it.
> 6) Wait for all blocks to be copied and exit.
> This would speed up the copying process considerably by removing top of
> the rack data transfers.
> Note : An extra improvement, would be to instruct the datanode to create a
> hardlink of the block file if we are copying a block on the same datanode
> [~xuzq_zander]Provided a design doc 
> https://docs.google.com/document/d/1OHdUpQmKD3TZ3xdmQsXNmlXJetn2QFPinMH31Q4BqkI/edit?usp=sharing



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-2139) Fast copy for HDFS.

2022-08-25 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17585081#comment-17585081
 ] 

fanshilun edited comment on HDFS-2139 at 8/26/22 1:09 AM:
--

[~ferhui] Thank you very much for your detailed explanation. Very much 
looking forward to your completing this feature.

Thanks again for your contribution!

[~ferhui] [~xuzq_zander] 


was (Author: slfan1989):
[~ferhui] Thank you very much for your detailed explanation!

Very much looking forward to your completion of this feature!

Thanks again for your contribution!!!

[~ferhui] [~xuzq_zander] 

> Fast copy for HDFS.
> ---
>
> Key: HDFS-2139
> URL: https://issues.apache.org/jira/browse/HDFS-2139
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Pritam Damania
>Assignee: Rituraj
>Priority: Major
> Attachments: HDFS-2139-For-2.7.1.patch, HDFS-2139.patch, 
> HDFS-2139.patch, image-2022-08-11-11-48-17-994.png
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> There is a need to perform fast file copy on HDFS. The fast copy mechanism 
> for a file works as
> follows :
> 1) Query metadata for all blocks of the source file.
> 2) For each block 'b' of the file, find out its datanode locations.
> 3) For each block of the file, add an empty block to the namesystem for
> the destination file.
> 4) For each location of the block, instruct the datanode to make a local
> copy of that block.
> 5) Once each datanode has copied over its respective blocks, they
> report to the namenode about it.
> 6) Wait for all blocks to be copied and exit.
> This would speed up the copying process considerably by removing top of
> the rack data transfers.
> Note : An extra improvement, would be to instruct the datanode to create a
> hardlink of the block file if we are copying a block on the same datanode
> [~xuzq_zander]Provided a design doc 
> https://docs.google.com/document/d/1OHdUpQmKD3TZ3xdmQsXNmlXJetn2QFPinMH31Q4BqkI/edit?usp=sharing



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2139) Fast copy for HDFS.

2022-08-25 Thread fanshilun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17585081#comment-17585081
 ] 

fanshilun commented on HDFS-2139:
-

[~ferhui] Thank you very much for your detailed explanation!

Very much looking forward to your completing this feature!

Thanks again for your contribution!

[~ferhui] [~xuzq_zander] 

> Fast copy for HDFS.
> ---
>
> Key: HDFS-2139
> URL: https://issues.apache.org/jira/browse/HDFS-2139
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Pritam Damania
>Assignee: Rituraj
>Priority: Major
> Attachments: HDFS-2139-For-2.7.1.patch, HDFS-2139.patch, 
> HDFS-2139.patch, image-2022-08-11-11-48-17-994.png
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> There is a need to perform fast file copy on HDFS. The fast copy mechanism 
> for a file works as
> follows :
> 1) Query metadata for all blocks of the source file.
> 2) For each block 'b' of the file, find out its datanode locations.
> 3) For each block of the file, add an empty block to the namesystem for
> the destination file.
> 4) For each location of the block, instruct the datanode to make a local
> copy of that block.
> 5) Once each datanode has copied over its respective blocks, they
> report to the namenode about it.
> 6) Wait for all blocks to be copied and exit.
> This would speed up the copying process considerably by removing top of
> the rack data transfers.
> Note : An extra improvement, would be to instruct the datanode to create a
> hardlink of the block file if we are copying a block on the same datanode
> [~xuzq_zander]Provided a design doc 
> https://docs.google.com/document/d/1OHdUpQmKD3TZ3xdmQsXNmlXJetn2QFPinMH31Q4BqkI/edit?usp=sharing



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16797) Remove WhiteBox in hdfs module.

2022-10-06 Thread fanshilun (Jira)
fanshilun created HDFS-16797:


 Summary: Remove WhiteBox in hdfs module.
 Key: HDFS-16797
 URL: https://issues.apache.org/jira/browse/HDFS-16797
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: rbf
Affects Versions: 3.4.0
Reporter: fanshilun
Assignee: fanshilun
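For context, a hedged sketch of the usual removal pattern; all names below 
are hypothetical. Tests that peek at private state through the Whitebox 
reflection helper move to an explicit, test-visible accessor:
{code:java}
// Before, a test might read a private field reflectively, e.g.:
//   long cap = (long) Whitebox.getInternalState(volume, "capacity");
// After removal, the class under test exposes the state explicitly:
class VolumeSketch {
  private long capacity = 1L << 30; // illustrative value

  /** Visible for testing; replaces reflective Whitebox access. */
  long getCapacityForTesting() {
    return capacity;
  }
}
{code}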






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org