[jira] [Commented] (HDFS-16854) TestDFSIO to support non-default file system

2022-12-02 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17642740#comment-17642740
 ] 

Mingliang Liu commented on HDFS-16854:
--

Yeah, it's not straightforward to use -fs along with the data dir in this case.

> TestDFSIO to support non-default file system
> 
>
> Key: HDFS-16854
> URL: https://issues.apache.org/jira/browse/HDFS-16854
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> TestDFSIO expects a parameter {{-Dtest.build.data=}} which specifies where the 
> data is located. Only paths on the default file system are supported. Running it 
> against other file systems, such as Ozone, throws an exception.
> It can be worked around by specifying {{-Dfs.defaultFS=}} but it would be 
> even nicer to support non-default file systems out of the box, because no one 
> would know this trick without looking at the code.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16854) TestDFSIO to support non-default file system

2022-11-24 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17638435#comment-17638435
 ] 

Mingliang Liu commented on HDFS-16854:
--

Since it implements {{Tool}}, is it assumed to use {{ToolRunner}} to parse generic 
options, including {{-fs}}? That option overrides 
the {{fs.defaultFS}} property and is more "standard" with respect to the conventions of 
Hadoop tools.
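
For illustration, here is a minimal, hypothetical sketch of that convention (the 
class name is made up and is not part of TestDFSIO): a {{Tool}} launched via 
{{ToolRunner}} has generic options such as {{-fs}} and {{-D}} applied to its 
{{Configuration}} before {{run()}} is invoked.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Hypothetical example tool, only to illustrate the ToolRunner convention.
public class ExampleBenchTool extends Configured implements Tool {

  @Override
  public int run(String[] args) throws Exception {
    // By the time run() is called, ToolRunner has already parsed the generic
    // options (-fs, -D, -conf, ...) and applied them to getConf(), so paths
    // resolve against the file system given on the command line.
    System.out.println("fs.defaultFS = " + getConf().get("fs.defaultFS"));
    return 0;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new Configuration(), new ExampleBenchTool(), args));
  }
}
{code}

With such a tool, something like {{-fs ofs://myservice/}} on the command line would 
take effect without any tool-specific flag; whether TestDFSIO can adopt this 
cleanly alongside {{test.build.data}} is exactly the open question above.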

> TestDFSIO to support non-default file system
> 
>
> Key: HDFS-16854
> URL: https://issues.apache.org/jira/browse/HDFS-16854
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> TestDFSIO expects a parameter {{-Dtest.build.data=}} which specifies where the 
> data is located. Only paths on the default file system are supported. Running it 
> against other file systems, such as Ozone, throws an exception.
> It can be worked around by specifying {{-Dfs.defaultFS=}} but it would be 
> even nicer to support non-default file systems out of the box, because no one 
> would know this trick without looking at the code.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16545) Provide option to balance rack level in Balancer

2022-04-18 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17523572#comment-17523572
 ] 

Mingliang Liu commented on HDFS-16545:
--

A rack-wide balancer could cause some useless data movement if the next balancer run 
is cluster-wide. How do we address that problem? Also, do we plan to allow multiple 
rack-wide balancers (on different racks)?

> Provide option to balance rack level in Balancer
> 
>
> Key: HDFS-16545
> URL: https://issues.apache.org/jira/browse/HDFS-16545
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Minor
>
> Currently the Balancer tool runs on the entire cluster and balances across racks. 
> If we need to balance within a rack, we need to provide an option to 
> support rack-level balancing.
> [~surendralilhore] [~hemanthboyina] 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14786) A new block placement policy tolerating availability zone failure

2022-04-13 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17521952#comment-17521952
 ] 

Mingliang Liu commented on HDFS-14786:
--

[~panlijie] I did not have a chance to work on this, so I did not assign it to myself. 
Feel free to pick it up if you are interested. It became more interesting when 
EKS added support for placement groups. My previous projects later explored S3 as a 
long-term storage tier for that highly critical use case (HBase/Phoenix).

> A new block placement policy tolerating availability zone failure
> -
>
> Key: HDFS-14786
> URL: https://issues.apache.org/jira/browse/HDFS-14786
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement
>Reporter: Mingliang Liu
>Priority: Major
>
> {{NetworkTopology}} assumes a "/datacenter/rack/host" 3-layer topology. Default 
> block placement policies are rack-aware for better fault tolerance. Newer 
> block placement policies like {{BlockPlacementPolicyRackFaultTolerant}} try 
> their best to place replicas across as many racks as possible, which tolerates 
> more racks failing. HADOOP-8470 brought {{NetworkTopologyWithNodeGroup}} to add 
> another layer under rack, i.e. a "/datacenter/rack/host/nodegroup" 4-layer 
> topology. With that, replicas within a rack can be placed in different node 
> groups for better isolation.
> Existing block placement policies tolerate one rack failure, since at least 
> two racks are chosen in those cases. Still, all replicas could be placed 
> in the same datacenter, even though there are multiple data centers in the same 
> cluster topology. In other words, failures of layers above the rack are not 
> well tolerated.
> Meanwhile, more deployments in the public cloud are leveraging multiple availability 
> zones (AZs) for high availability, since the inter-AZ latency seems affordable 
> in many cases. Within a single AZ, some cloud providers like AWS support 
> [partitioned placement 
> groups|https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html#placement-groups-partition]
>  which are basically different racks. A simple network topology mapped to 
> HDFS is the 3-layer "/availabilityzone/rack/host".
> To achieve high availability tolerating zone failure, this JIRA proposes a 
> new data placement policy which tries its best to place replicas across as many 
> AZs and racks as possible, distributed as evenly as possible.
> For example, with 3 replicas, we choose racks as follows:
>  - 1 AZ: fall back to {{BlockPlacementPolicyRackFaultTolerant}} to place across 
> as many racks as possible
>  - 2 AZs: randomly choose one rack in one AZ and randomly choose two racks in 
> the other AZ
>  - 3 AZs: randomly choose one rack in every AZ
>  - 4 AZs: randomly choose three AZs and randomly choose one rack in each of them
> After racks are picked, hosts are chosen randomly within racks, honoring local 
> storage, favorite nodes, excluded nodes, storage types, etc. Data may become 
> imbalanced if the topology is very uneven across AZs. This seems not to be a 
> problem, as infrastructure provisioning in the public cloud is more flexible than 1P.
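
To make the quoted selection rules concrete, here is a rough, hypothetical sketch 
of just the rack-choosing step for 3 replicas (class and method names are made up; 
a real implementation would live inside a {{BlockPlacementPolicy}} and reuse the 
existing topology classes):

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Hypothetical illustration of the per-AZ rack selection rules for 3 replicas.
public class AzAwareRackChooser {
  private final Random random = new Random();

  /** topology maps availability zone -> racks in that zone. */
  public List<String> chooseRacksForThreeReplicas(Map<String, List<String>> topology) {
    List<String> zones = new ArrayList<>(topology.keySet());
    Collections.shuffle(zones, random);
    List<String> chosen = new ArrayList<>();
    if (zones.size() == 1) {
      // 1 AZ: spread across as many racks as possible within that AZ,
      // mimicking BlockPlacementPolicyRackFaultTolerant.
      chosen.addAll(pickRandomRacks(topology.get(zones.get(0)), 3));
    } else if (zones.size() == 2) {
      // 2 AZs: one rack in one AZ, two racks in the other AZ.
      chosen.addAll(pickRandomRacks(topology.get(zones.get(0)), 1));
      chosen.addAll(pickRandomRacks(topology.get(zones.get(1)), 2));
    } else {
      // 3 or more AZs: pick three AZs at random, one rack in each.
      for (String zone : zones.subList(0, 3)) {
        chosen.addAll(pickRandomRacks(topology.get(zone), 1));
      }
    }
    return chosen;
  }

  private List<String> pickRandomRacks(List<String> racks, int count) {
    List<String> copy = new ArrayList<>(racks);
    Collections.shuffle(copy, random);
    return copy.subList(0, Math.min(count, copy.size()));
  }
}
{code}

Host selection within the chosen racks would then follow the usual constraints 
(favored nodes, excluded nodes, storage types, and so on), as the description notes.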



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16143) TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits is flaky

2021-08-26 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-16143:
-
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits is flaky
> -
>
> Key: HDFS-16143
> URL: https://issues.apache.org/jira/browse/HDFS-16143
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
>
>  Time Spent: 10h 40m
>  Remaining Estimate: 0h
>
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3229/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
> {quote}
> [ERROR] 
> testStandbyTriggersLogRollsWhenTailInProgressEdits[0](org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer)
>   Time elapsed: 6.862 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:87)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at org.junit.Assert.assertTrue(Assert.java:53)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testStandbyTriggersLogRollsWhenTailInProgressEdits(TestEditLogTailer.java:444)
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-23 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17330913#comment-17330913
 ] 

Mingliang Liu commented on HDFS-15982:
--

With the support of an optional skipTrash, I think this makes more sense. As there 
are very related reviews on [HDFS-14320], I will defer to [~kpalanisamy], 
[~weichiu], and [~daryn] for review/comments on how we can move forward.

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, then it should first be moved to the 
> configured/default Trash directory and, after the trash interval, it 
> should be removed. Currently, data is removed from the system directly. [This 
> behavior should be the same as the CLI command.]
> This can be helpful when the user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, which 
> should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15624) Fix the SetQuotaByStorageTypeOp problem after updating hadoop

2021-04-22 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17329967#comment-17329967
 ] 

Mingliang Liu commented on HDFS-15624:
--

Thanks [~weichiu]. I'm fine with your proposal. I hope the 3.3.1 release can be 
unblocked soon.

>  Fix the SetQuotaByStorageTypeOp problem after updating hadoop 
> ---
>
> Key: HDFS-15624
> URL: https://issues.apache.org/jira/browse/HDFS-15624
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: YaYun Wang
>Assignee: huangtianhua
>Priority: Major
>  Labels: pull-request-available, release-blocker
> Fix For: 3.4.0
>
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> HDFS-15025 adds a new storage type NVDIMM, which changes the ordinal() of the 
> StorageType enum. Setting the quota by storage type depends on the 
> ordinal(), so it may cause the quota setting to become invalid after an 
> upgrade.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15982) Deleted data on the Web UI must be saved to the trash

2021-04-22 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17329927#comment-17329927
 ] 

Mingliang Liu commented on HDFS-15982:
--

Let's rename the JIRA subject and also the PR by replacing "Web UI" with "HTTP 
API". The HDFS "Web UI" usually refers to the web portal that one can browse for 
informational purposes. This JIRA is about changing the "RESTful HTTP API", not 
the Web UI.

My only concern is that the "Trash" concept is not part of the 
FileSystem DELETE API. Changing this behavior may break existing applications 
that assume storage will be released. It seems counter-intuitive that one can 
skipTrash from the command line but cannot when using WebHDFS. Since keeping data in 
Trash for a while is usually a good idea, I think I'm fine with this feature 
proposal. Ideally we can expose a -skipTrash parameter so users can choose. 
Meanwhile, the default value should be true for all existing released branches 
(<=3.3) to keep it backward-compatible. We can change the default value from 3.4, 
though, to make it enabled by default.
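
As a rough sketch of what the server-side handling could look like (a hypothetical 
helper, not the actual WebHDFS code path or parameter name), the delete could be 
routed through the existing {{Trash}} utility unless the proposed skipTrash flag 
is set:

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

// Hypothetical helper illustrating the proposal: honor trash unless skipTrash is set.
public final class TrashAwareDelete {

  public static boolean delete(FileSystem fs, Path path, boolean recursive,
      boolean skipTrash, Configuration conf) throws IOException {
    if (!skipTrash) {
      // Moves the path into the appropriate trash directory when trash is
      // enabled (fs.trash.interval > 0); returns false when trash is disabled.
      if (Trash.moveToAppropriateTrash(fs, path, conf)) {
        return true;
      }
    }
    // Fall back to a direct delete, which is the current HTTP API behavior.
    return fs.delete(path, recursive);
  }
}
{code}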

While exploring, I found [HDFS-14320] is about the same idea with a similar 
implementation. Do you guys want to post there and try to collaborate to get 
this in? I did not look into that closely.

CC: [~vjasani] [~bpatel]

> Deleted data on the Web UI must be saved to the trash 
> --
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, then it should first be moved to the 
> configured/default Trash directory and, after the trash interval, it 
> should be removed. Currently, data is removed from the system directly. [This 
> behavior should be the same as the CLI command.]
>  
> This can be helpful when the user accidentally deletes data from the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15938) Fix java doc in FSEditLog

2021-04-01 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu resolved HDFS-15938.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Fix java doc in FSEditLog
> -
>
> Key: HDFS-15938
> URL: https://issues.apache.org/jira/browse/HDFS-15938
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Fix java doc in 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog#logAddCacheDirectiveInfo.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15931) Fix non-static inner classes for better memory management

2021-04-01 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15931:
-
Fix Version/s: 3.2.3
   2.10.2
   3.1.5
   3.4.0
   3.3.1
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to all fixed versions. Thanks!

> Fix non-static inner classes for better memory management
> -
>
> Key: HDFS-15931
> URL: https://issues.apache.org/jira/browse/HDFS-15931
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.1.5, 2.10.2, 3.2.3
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> If an inner class does not need to reference its enclosing instance, it can 
> be static. This prevents a common cause of memory leaks and uses less memory 
> per instance of the enclosing class.
> I came across DataNodeProperties, a non-static inner class defined in 
> MiniDFSCluster that does not need any implicit reference to MiniDFSCluster. 
> Taking this opportunity to find other non-static inner classes that do not 
> need implicit references to their respective enclosing instances.
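
A tiny illustrative example (hypothetical class names) of the difference:

{code:java}
// Illustration only: a non-static inner class carries an implicit reference
// to its enclosing instance, while a static nested class does not.
public class Enclosing {
  private final byte[] big = new byte[1 << 20];

  // Every Inner instance keeps Enclosing.this (and therefore 'big') reachable,
  // even if it never uses the enclosing instance.
  class Inner { }

  // A Nested instance holds no implicit reference to Enclosing.
  static class Nested { }
}
{code}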



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-24 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15911:
-
Component/s: balancer & mover

> Provide blocks moved count in Balancer iteration result
> ---
>
> Key: HDFS-15911
> URL: https://issues.apache.org/jira/browse/HDFS-15911
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Balancer provides a Result for each iteration, and it contains info like exitStatus, 
> bytesLeftToMove, bytesBeingMoved, etc. We should also provide the blocksMoved 
> count from NameNodeConnector and print it with the rest of the details in 
> Result#print().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-24 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15911:
-
Fix Version/s: 3.2.3
   3.1.5
   3.4.0
   3.3.1
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Merged to all Hadoop 3 branches. Thanks!

> Provide blocks moved count in Balancer iteration result
> ---
>
> Key: HDFS-15911
> URL: https://issues.apache.org/jira/browse/HDFS-15911
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Balancer provides a Result for each iteration, and it contains info like exitStatus, 
> bytesLeftToMove, bytesBeingMoved, etc. We should also provide the blocksMoved 
> count from NameNodeConnector and print it with the rest of the details in 
> Result#print().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15904) Flaky test TestBalancer#testBalancerWithSortTopNodes()

2021-03-19 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15904:
-
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thank you for your contribution [~vjasani]

Thank you for your review [~ayushtkn]

> Flaky test TestBalancer#testBalancerWithSortTopNodes()
> --
>
> Key: HDFS-15904
> URL: https://issues.apache.org/jira/browse/HDFS-15904
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> TestBalancer#testBalancerWithSortTopNodes shows some flakiness within about 10 
> runs. It's reproducible locally as well. Basically, balancing either moves 
> 2 blocks of size 100+100 bytes or it moves 3 blocks of size 100+100+50 bytes 
> (the 2nd case causes the flakiness).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15904) Flaky test TestBalancer#testBalancerWithSortTopNodes()

2021-03-19 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15904:
-
Component/s: balancer & mover

> Flaky test TestBalancer#testBalancerWithSortTopNodes()
> --
>
> Key: HDFS-15904
> URL: https://issues.apache.org/jira/browse/HDFS-15904
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: balancer & mover
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> TestBalancer#testBalancerWithSortTopNodes shows some flakiness within about 10 
> runs. It's reproducible locally as well. Basically, balancing either moves 
> 2 blocks of size 100+100 bytes or it moves 3 blocks of size 100+100+50 bytes 
> (the 2nd case causes the flakiness).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15904) Flaky test TestBalancer#testBalancerWithSortTopNodes()

2021-03-19 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17305296#comment-17305296
 ] 

Mingliang Liu edited comment on HDFS-15904 at 3/20/21, 3:43 AM:


Committed to Hadoop 3

[~vjasani] -Could you open a new branch-2.10 PR for this Jira ? Thanks,-

Never mind, just noticed the test is only for 3.4. Resolving.


was (Author: liuml07):
Committed to Hadoop 3.

[~vjasani] Could you open a new branch-2.10 PR for this Jira ? Thanks,

> Flaky test TestBalancer#testBalancerWithSortTopNodes()
> --
>
> Key: HDFS-15904
> URL: https://issues.apache.org/jira/browse/HDFS-15904
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> TestBalancer#testBalancerWithSortTopNodes shows some flakiness within about 10 
> runs. It's reproducible locally as well. Basically, balancing either moves 
> 2 blocks of size 100+100 bytes or it moves 3 blocks of size 100+100+50 bytes 
> (the 2nd case causes the flakiness).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15904) Flaky test TestBalancer#testBalancerWithSortTopNodes()

2021-03-19 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17305296#comment-17305296
 ] 

Mingliang Liu commented on HDFS-15904:
--

Committed to Hadoop 3.

[~vjasani] Could you open a new branch-2.10 PR for this Jira ? Thanks,

> Flaky test TestBalancer#testBalancerWithSortTopNodes()
> --
>
> Key: HDFS-15904
> URL: https://issues.apache.org/jira/browse/HDFS-15904
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> TestBalancer#testBalancerWithSortTopNodes shows some flakiness within about 10 
> runs. It's reproducible locally as well. Basically, balancing either moves 
> 2 blocks of size 100+100 bytes or it moves 3 blocks of size 100+100+50 bytes 
> (the 2nd case causes the flakiness).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15904) Flaky test TestBalancer#testBalancerWithSortTopNodes()

2021-03-18 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17304642#comment-17304642
 ] 

Mingliang Liu commented on HDFS-15904:
--

Approved the PR and left some minor comments.

Not sure about HBase, but in Hadoop, before merging we only need to set target 
versions for a JIRA. When committing, the committer will set the "Fixed 
Versions" to indicate which branch this patch eventually goes into. Thanks,

> Flaky test TestBalancer#testBalancerWithSortTopNodes()
> --
>
> Key: HDFS-15904
> URL: https://issues.apache.org/jira/browse/HDFS-15904
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> TestBalancer#testBalancerWithSortTopNodes shows some flakiness within about 10 
> runs. It's reproducible locally as well. Basically, balancing either moves 
> 2 blocks of size 100+100 bytes or it moves 3 blocks of size 100+100+50 bytes 
> (the 2nd case causes the flakiness).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15904) Flaky test TestBalancer#testBalancerWithSortTopNodes()

2021-03-18 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17304642#comment-17304642
 ] 

Mingliang Liu edited comment on HDFS-15904 at 3/19/21, 5:11 AM:


Approved the PR and left some minor comments.

Not sure about HBase, but in Hadoop, before merging we only need to set target 
versions for a JIRA. When committing, the committer will set the "Fixed 
Versions" to indicate which branch this patch eventually goes into. Thanks,


was (Author: liuml07):
Approved PR and left some minor message.

Not sure about HBase, but in Hadoop, before merging we only need to set target 
versions for a JIRA. When committing, the commuter will set the "Fixed 
Versions" to indicate which branch this patch eventually goes into. Thanks,

> Flaky test TestBalancer#testBalancerWithSortTopNodes()
> --
>
> Key: HDFS-15904
> URL: https://issues.apache.org/jira/browse/HDFS-15904
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> TestBalancer#testBalancerWithSortTopNodes shows some flakiness within about 10 
> runs. It's reproducible locally as well. Basically, balancing either moves 
> 2 blocks of size 100+100 bytes or it moves 3 blocks of size 100+100+50 bytes 
> (the 2nd case causes the flakiness).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15904) Flaky test TestBalancer#testBalancerWithSortTopNodes()

2021-03-18 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15904:
-
Fix Version/s: (was: 3.4.0)
   Status: Patch Available  (was: Open)

> Flaky test TestBalancer#testBalancerWithSortTopNodes()
> --
>
> Key: HDFS-15904
> URL: https://issues.apache.org/jira/browse/HDFS-15904
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> TestBalancer#testBalancerWithSortTopNodes shows some flakiness within about 10 
> runs. It's reproducible locally as well. Basically, balancing either moves 
> 2 blocks of size 100+100 bytes or it moves 3 blocks of size 100+100+50 bytes 
> (the 2nd case causes the flakiness).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15624) Fix the SetQuotaByStorageTypeOp problem after updating hadoop

2021-02-02 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu resolved HDFS-15624.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to the trunk branch. Thank you [~huangtianhua] and su xu for your 
contributions. Thank you [~ayushtkn] and [~vinayakumarb] for your helpful reviews.

>  Fix the SetQuotaByStorageTypeOp problem after updating hadoop 
> ---
>
> Key: HDFS-15624
> URL: https://issues.apache.org/jira/browse/HDFS-15624
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: YaYun Wang
>Assignee: huangtianhua
>Priority: Major
>  Labels: pull-request-available, release-blocker
> Fix For: 3.4.0
>
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>
> HDFS-15025 adds a new storage type NVDIMM, which changes the ordinal() of the 
> StorageType enum. Setting the quota by storage type depends on the 
> ordinal(), so it may cause the quota setting to become invalid after an 
> upgrade.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15624) Fix the SetQuotaByStorageTypeOp problem after updating hadoop

2021-02-02 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HDFS-15624:


Assignee: YaYun Wang

>  Fix the SetQuotaByStorageTypeOp problem after updating hadoop 
> ---
>
> Key: HDFS-15624
> URL: https://issues.apache.org/jira/browse/HDFS-15624
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available, release-blocker
>  Time Spent: 9h 10m
>  Remaining Estimate: 0h
>
> HDFS-15025 adds a new storage type NVDIMM, which changes the ordinal() of the 
> StorageType enum. Setting the quota by storage type depends on the 
> ordinal(), so it may cause the quota setting to become invalid after an 
> upgrade.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15624) Fix the SetQuotaByStorageTypeOp problem after updating hadoop

2021-02-02 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HDFS-15624:


Assignee: huangtianhua  (was: YaYun Wang)

>  Fix the SetQuotaByStorageTypeOp problem after updating hadoop 
> ---
>
> Key: HDFS-15624
> URL: https://issues.apache.org/jira/browse/HDFS-15624
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: YaYun Wang
>Assignee: huangtianhua
>Priority: Major
>  Labels: pull-request-available, release-blocker
>  Time Spent: 9h 20m
>  Remaining Estimate: 0h
>
> HDFS-15025 adds a new storage type NVDIMM, which changes the ordinal() of the 
> StorageType enum. Setting the quota by storage type depends on the 
> ordinal(), so it may cause the quota setting to become invalid after an 
> upgrade.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15660) StorageTypeProto is not compatiable between 3.x and 2.6

2020-12-03 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17243586#comment-17243586
 ] 

Mingliang Liu commented on HDFS-15660:
--

It makes sense to me. I do not have time to validate it this week/month. If 
anyone else can help with a second review, that would be great! CC: 
[~vinayakumarb] [~linyiqun]

> StorageTypeProto is not compatiable between 3.x and 2.6
> ---
>
> Key: HDFS-15660
> URL: https://issues.apache.org/jira/browse/HDFS-15660
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.2.0, 3.1.3
>Reporter: Ryan Wu
>Assignee: Ryan Wu
>Priority: Major
> Attachments: HDFS-15660.002.patch, HDFS-15660.003.patch
>
>
> In our case, when the NameNode had been upgraded to 3.1.3 and the DataNodes were 
> still on 2.6, we found that when Hive called the getContentSummary method, the client 
> and server were not compatible because Hadoop 3 added the new PROVIDED storage type.
> {code:java}
> // code placeholder
> 20/04/15 14:28:35 INFO retry.RetryInvocationHandler---main: Exception while 
> invoking getContentSummary of class ClientNamenodeProtocolTranslatorPB over 
> x/x:8020. Trying to fail over immediately.
> java.io.IOException: com.google.protobuf.ServiceException: 
> com.google.protobuf.UninitializedMessageException: Message missing required 
> fields: summary.typeQuotaInfos.typeQuotaInfo[3].type
>         at 
> org.apache.hadoop.ipc.ProtobufHelper.getRemoteException(ProtobufHelper.java:47)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getContentSummary(ClientNamenodeProtocolTranslatorPB.java:819)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
>         at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>         at com.sun.proxy.$Proxy11.getContentSummary(Unknown Source)
>         at 
> org.apache.hadoop.hdfs.DFSClient.getContentSummary(DFSClient.java:3144)
>         at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:706)
>         at 
> org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
>         at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getContentSummary(DistributedFileSystem.java:713)
>         at org.apache.hadoop.fs.shell.Count.processPath(Count.java:109)
>         at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
>         at 
> org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
>         at 
> org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
>         at 
> org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
>         at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118)
>         at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
>         at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>         at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
> Caused by: com.google.protobuf.ServiceException: 
> com.google.protobuf.UninitializedMessageException: Message missing required 
> fields: summary.typeQuotaInfos.typeQuotaInfo[3].type
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:272)
>         at com.sun.proxy.$Proxy10.getContentSummary(Unknown Source)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getContentSummary(ClientNamenodeProtocolTranslatorPB.java:816)
>         ... 23 more
> Caused by: com.google.protobuf.UninitializedMessageException: Message missing 
> required fields: summary.typeQuotaInfos.typeQuotaInfo[3].type
>         at 
> com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetContentSummaryResponseProto$Builder.build(ClientNamenodeProtocolProtos.java:65392)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetContentSummaryResponseProto$Builder.build(ClientNamenodeProtocolProtos.java:65331)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invo

[jira] [Commented] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-27 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202757#comment-17202757
 ] 

Mingliang Liu commented on HDFS-15025:
--

[~wangyayun] Thanks for taking a look!

The topic is not only about the test failure, which can be fixed either way. 
The more important question is about compatibility, since the FSImage may depend on 
the ordinal, and hence existing data cannot work with this patch. It seems even 
more concerning if it fails silently by parsing existing storage types wrongly. 
To fix it, we first need to keep the existing ordinals by moving NVDIMM to the 
end and updating the comment. Second, we need to make sure adding isRAM does 
not break compatibility either. I assume it does not, but it would be great if 
we could confirm. That's why I suggested we need to do more (manual) testing of 
this for compatibility.
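
As a toy example of why the ordinal shift matters (these are not the real 
{{StorageType}} values), anything persisted as an ordinal by the old version 
decodes as the wrong constant once a new value is inserted before existing ones:

{code:java}
// Illustration only; not the real Hadoop StorageType enum.
public class OrdinalCompatDemo {
  enum OldType { RAM_DISK, SSD, DISK, ARCHIVE }          // SSD.ordinal() == 1
  enum NewType { RAM_DISK, NVDIMM, SSD, DISK, ARCHIVE }  // SSD.ordinal() == 2

  public static void main(String[] args) {
    int persisted = OldType.SSD.ordinal();               // written by the old version
    // Decoding the persisted ordinal with the new enum silently yields NVDIMM.
    System.out.println(NewType.values()[persisted]);
  }
}
{code}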

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> Non-volatile memory (NVDIMM) is faster than SSD, and it can be used 
> simultaneously with RAM, DISK, and SSD. Storing HDFS data directly on 
> NVDIMM not only improves the response rate of HDFS, but also ensures the 
> reliability of the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-26 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15025:
-
Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue}  0m  
1s{color} | {color:blue} prototool was not available. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
1s{color} | {color:blue} markdownlint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
23m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
47s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 19m 58s{color} | 
{color:red} root generated 19 new + 143 unchanged - 19 fixed = 162 total (was 
162) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 23s{color} | {color:orange} root: The patch generated 3 new + 726 unchanged 
- 4 fixed = 729 total (was 730) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
20s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
25s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}133m 27s{color} 
| {color:red} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | 

[jira] [Issue Comment Deleted] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-26 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15025:
-
Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 21m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue}  0m  
0s{color} | {color:blue} prototool was not available. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
0s{color} | {color:blue} markdownlint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m  
8s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 17m 32s{color} | 
{color:red} root generated 25 new + 137 unchanged - 25 fixed = 162 total (was 
162) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 59s{color} | {color:orange} root: The patch generated 3 new + 725 unchanged 
- 4 fixed = 728 total (was 729) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
36s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
14s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 49s{color} 
| {color:red} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | 

[jira] [Issue Comment Deleted] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-26 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15025:
-
Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m  
6s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue}  0m  
0s{color} | {color:blue} prototool was not available. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
0s{color} | {color:blue} markdownlint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
26m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
38s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 21m 25s{color} | 
{color:red} root generated 21 new + 141 unchanged - 21 fixed = 162 total (was 
162) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  3s{color} | {color:orange} root: The patch generated 4 new + 725 unchanged 
- 4 fixed = 729 total (was 729) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
25s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
11s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 46s{color} 
| {color:red} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | 

[jira] [Issue Comment Deleted] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-26 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15025:
-
Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue}  0m  
0s{color} | {color:blue} prototool was not available. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
0s{color} | {color:blue} markdownlint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
25m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
49s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
12s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 20m 30s{color} | 
{color:red} root generated 28 new + 134 unchanged - 28 fixed = 162 total (was 
162) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 29s{color} | {color:orange} root: The patch generated 3 new + 725 unchanged 
- 4 fixed = 728 total (was 729) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
25s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
20s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 21s{color} 
| {color:red} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | 

[jira] [Issue Comment Deleted] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-26 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15025:
-
Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue}  0m  
1s{color} | {color:blue} prototool was not available. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
1s{color} | {color:blue} markdownlint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
31s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
10s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 16m 41s{color} | 
{color:red} root generated 40 new + 122 unchanged - 40 fixed = 162 total (was 
162) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 52s{color} | {color:orange} root: The patch generated 3 new + 725 unchanged 
- 4 fixed = 728 total (was 729) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
58s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
21s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}135m 58s{color} 
| {color:red} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | 

[jira] [Issue Comment Deleted] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-26 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15025:
-
Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue}  0m  
0s{color} | {color:blue} prototool was not available. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
1s{color} | {color:blue} markdownlint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
23m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
26s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 18m  8s{color} | 
{color:red} root generated 24 new + 138 unchanged - 24 fixed = 162 total (was 
162) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 59s{color} | {color:orange} root: The patch generated 3 new + 725 unchanged 
- 4 fixed = 728 total (was 729) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 49s{color} 
| {color:red} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
20s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}131m 49s{color} 
| {color:red} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:g

[jira] [Issue Comment Deleted] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-26 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15025:
-
Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HDFS-15025 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-15025 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12989371/NVDIMM_patch%28WIP%29.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28834/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK, and SSD. Storing HDFS data directly on 
> NVDIMM not only improves the response rate of HDFS but also ensures the 
> reliability of the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-26 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15025:
-
Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
52s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
59s{color} | {color:green} trunk passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 59s{color} | {color:orange} The patch fails to run checkstyle in root 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
43s{color} | {color:red} hadoop-common in trunk failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-hdfs-client in trunk failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 23m 
20s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 23m 20s{color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 23m 20s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} The patch fails to run checkstyle in root 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
46s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
0m 42s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 38s{color} 
| {color:red} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 35s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue}  0m 
40s{color} | {color:blue} ASF License check generated no output? {color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m  5s{color} | 
{color:black} 

[jira] [Issue Comment Deleted] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-26 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15025:
-
Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 26m  
7s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue}  0m  
0s{color} | {color:blue} prototool was not available. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
0s{color} | {color:blue} markdownlint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
22m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
11s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 18m 46s{color} | 
{color:red} root generated 26 new + 136 unchanged - 26 fixed = 162 total (was 
162) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  5s{color} | {color:orange} root: The patch generated 23 new + 726 unchanged 
- 3 fixed = 749 total (was 729) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 28s{color} 
| {color:red} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
9s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 39s{color} 
| {color:red} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:

[jira] [Issue Comment Deleted] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-26 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15025:
-
Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue}  0m  
0s{color} | {color:blue} prototool was not available. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
0s{color} | {color:blue} markdownlint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
26m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
1s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
52s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 23m 22s{color} | 
{color:red} root generated 26 new + 136 unchanged - 26 fixed = 162 total (was 
162) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 23m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 48s{color} | {color:orange} root: The patch generated 13 new + 725 unchanged 
- 4 fixed = 738 total (was 729) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 9 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 25s{color} 
| {color:red} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
8s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 49s{color} 
| {color:red} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflic

[jira] [Updated] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-26 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15025:
-
Release Note: Add a new storage type NVDIMM and a new storage policy 
ALL_NVDIMM for HDFS. The NVDIMM storage type is for non-volatile random-access 
memory storage media whose data survives DataNode restarts.  (was: Adding 
the new storage media NVDIMM and ALL_NVDIMM storage policy  on HDFS,including 
the test code for them)
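
As a usage sketch (the directory path is hypothetical; the call is the standard 
{{FileSystem#setStoragePolicy}} API), the new policy is applied to a directory 
like any other storage policy:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SetAllNvdimmPolicy {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();           // picks up core-site.xml / hdfs-site.xml
    try (FileSystem fs = FileSystem.get(conf)) {
      Path dir = new Path("/pmem-data");                // hypothetical directory
      fs.mkdirs(dir);
      fs.setStoragePolicy(dir, "ALL_NVDIMM");           // the policy added by this JIRA
      System.out.println(fs.getStoragePolicy(dir));     // verify the assignment
    }
  }
}
{code}
The equivalent shell command is {{hdfs storagepolicies -setStoragePolicy -path 
/pmem-data -policy ALL_NVDIMM}}; the DataNode data directories also have to be 
tagged with the NVDIMM storage type for the policy to take effect.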

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK, and SSD. Storing HDFS data directly on 
> NVDIMM not only improves the response rate of HDFS but also ensures the 
> reliability of the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-26 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202543#comment-17202543
 ] 

Mingliang Liu edited comment on HDFS-15025 at 9/26/20, 8:03 AM:


[~ayushsaxena] Yes, if FSImage depends on the ordinal, we should keep the 
ordinals of all existing constants and update the code comment, since the order 
is no longer strictly from fast to slow. I don't intend to break any FSImage 
compatibility in Hadoop 3. This is a really good point and we should fix it. Not 
sure whether revert-and-recommit is better, but a follow-up change is also fine 
with me considering this has never been released. Thanks!

Given this was omitted originally, [~wangyayun] how about some manual testing with 
an old FSImage and the new code?

-I just noticed this JIRA's release notes do not include how to use this 
feature.- 

CC [~brahma]


was (Author: liuml07):
[~ayushsaxena] Yes if FSImage depends on the ordinal, we should keep all 
existing ones’ ordinal and update the comment in code since the order is not 
from fast to slow any more. I don’t intend to support for breaking any FSImage 
compatibility in Hadoop 3. This is a really good point. We should fix this. Not 
sure if revert and recommit is better, but a follow up change is also fine to 
me considering this is never released. Thanks!

Given this was omitted originally, [~wangyayun]  how about  some manual test 
with old FSImage and new code?

I just noticed this JIRA has no release notes. Please add one [~wangyayun] CC 
[~brahma] 

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK, and SSD. Storing HDFS data directly on 
> NVDIMM not only improves the response rate of HDFS but also ensures the 
> reliability of the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-26 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202543#comment-17202543
 ] 

Mingliang Liu commented on HDFS-15025:
--

[~ayushsaxena] Yes, if FSImage depends on the ordinal, we should keep the 
ordinals of all existing constants and update the code comment, since the order 
is no longer strictly from fast to slow. I don't intend to break any FSImage 
compatibility in Hadoop 3. This is a really good point and we should fix it. Not 
sure whether revert-and-recommit is better, but a follow-up change is also fine 
with me considering this has never been released. Thanks!
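
A minimal sketch of the ordinal-preserving layout (hypothetical; the constant 
list mirrors the discussion, not the committed code): keep every existing 
constant in place and append the new one, so ordinals persisted by older 
FSImages keep decoding to the same types.
{code}
// Hypothetical sketch only; not the actual StorageType source.
public enum StorageTypeSketch {
  // Existing constants keep their original ordinals 0..4.
  RAM_DISK, SSD, DISK, ARCHIVE, PROVIDED,
  // New constant appended at the end, so no persisted ordinal shifts.
  NVDIMM
}
{code}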

Given this was omitted originally, [~wangyayun] how about some manual testing with 
an old FSImage and the new code?

I just noticed this JIRA has no release notes. Please add one [~wangyayun] CC 
[~brahma] 

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK, and SSD. Storing HDFS data directly on 
> NVDIMM not only improves the response rate of HDFS but also ensures the 
> reliability of the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15600) TestRouterQuota fails in trunk

2020-09-25 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202477#comment-17202477
 ] 

Mingliang Liu commented on HDFS-15600:
--

As discussed in [this 
comment|https://issues.apache.org/jira/browse/HDFS-15025?focusedCommentId=17202476&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17202476],
 I'm +1 on the idea to fix. Thanks.

> TestRouterQuota fails in trunk
> --
>
> Key: HDFS-15600
> URL: https://issues.apache.org/jira/browse/HDFS-15600
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Ayush Saxena
>Priority: Major
>
> The test is failing due to the addition of a new storage type {{NVDIMM}} in the 
> middle.
> Ref :
> https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/204/testReport/org.apache.hadoop.hdfs.server.federation.router/TestRouterQuota/testStorageTypeQuota/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-25 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202476#comment-17202476
 ] 

Mingliang Liu edited comment on HDFS-15025 at 9/26/20, 1:07 AM:


[~ayushtkn] This is a good question.

First, I did not see code that depends on the ordinal of the enums, given that 
users configure disks by storage type name and set the storage policy on 
directories. The existing disk type names and storage policies kept their names 
and ordinals. So I was not thinking this was an "Incompatible" change - even 
though it adds new fields ({{isRam}}) to this class. Meanwhile, the code comment 
says the types are sorted by speed, not by a fixed ordinal.
{code}
@InterfaceStability.Unstable
public enum StorageType {
  // sorted by the speed of the storage types, from fast to slow
{code}
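
For illustration, a minimal sketch with a hypothetical pair of enums (not the 
actual {{StorageType}} definition) shows why anything that does persist the 
ordinal breaks once a constant is inserted in the middle rather than appended at 
the end:
{code}
// Hypothetical enums for illustration only; not the real HDFS StorageType.
public class OrdinalShiftDemo {
  enum Before { RAM_DISK, SSD, DISK, ARCHIVE }          // DISK.ordinal() == 2
  enum After  { RAM_DISK, SSD, NVDIMM, DISK, ARCHIVE }  // DISK.ordinal() == 3

  public static void main(String[] args) {
    int persisted = Before.DISK.ordinal();              // 2, as written under the old layout
    After decoded = After.values()[persisted];          // decodes to NVDIMM, not DISK
    System.out.println(persisted + " -> " + decoded);   // prints "2 -> NVDIMM"
  }
}
{code}
Appending the new constant after the existing ones keeps every previously 
persisted ordinal stable.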

Second, I was not aware of the test failure in multiple previous QA runs of the 
patch (in the pull request). I did not check the last QA run, but would be glad 
if we could find out why this was not reported in PreCommit runs. [~wangyayun] 
Would you have a look at the test failure and/or at why it was not reported 
previously? I took a quick look and now think the test fails just because it 
makes assumptions about the ordinal - which makes sense, as the {{quote}} array 
previously covered the complete set of types. I think your proposal works great 
and fixes the test, so +1 on the idea to fix.

CC: [~brahmareddy]


was (Author: liuml07):
[~ayushtkn] This is good question.

First, I did not see code that depends on the ordinal of the enums, given users 
configure disks with storage name and set the storage policy for directories. 
The existing disk type names and storage polices kept their name and ordinal. 
So I was not thinking this was a "Incompatible" change - even it adds new 
fields ({{isRam}}) to this class. Meanwhile, in the code comment, it says the 
type is sorted by speed, not fixed ordinal.
{code}
@InterfaceStability.Unstable
public enum StorageType {
  // sorted by the speed of the storage types, from fast to slow
{code}

Second, I was not aware of the test failure in multiple previous QA runs of the 
patch (in the pull request). I did not check the last QA run, but would be glad 
if we can find out why this was not reported in PreCommit runs. [~wangyayun] 
Would have a look? I glimpsed and think the test just because it makes 
assumptions about the ordinal. I think your proposal works just fine and fixes 
the test, so +1 on the idea.

CC: [~brahmareddy]

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK, and SSD. Storing HDFS data directly on 
> NVDIMM not only improves the response rate of HDFS but also ensures the 
> reliability of the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-25 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17202476#comment-17202476
 ] 

Mingliang Liu commented on HDFS-15025:
--

[~ayushtkn] This is a good question.

First, I did not see code that depends on the ordinal of the enums, given that 
users configure disks by storage type name and set the storage policy on 
directories. The existing disk type names and storage policies kept their names 
and ordinals. So I was not thinking this was an "Incompatible" change - even 
though it adds new fields ({{isRam}}) to this class. Meanwhile, the code comment 
says the types are sorted by speed, not by a fixed ordinal.
{code}
@InterfaceStability.Unstable
public enum StorageType {
  // sorted by the speed of the storage types, from fast to slow
{code}

Second, I was not aware of the test failure in multiple previous QA runs of the 
patch (in the pull request). I did not check the last QA run, but would be glad 
if we could find out why this was not reported in PreCommit runs. [~wangyayun] 
Would you have a look? I took a quick look and think the test fails just because 
it makes assumptions about the ordinal. I think your proposal works just fine 
and fixes the test, so +1 on the idea.

CC: [~brahmareddy]

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD and can be used 
> alongside RAM, DISK, and SSD. Storing HDFS data directly on 
> NVDIMM not only improves the response rate of HDFS but also ensures the 
> reliability of the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Closed] (HDFS-15595) TestSnapshotCommands.testMaxSnapshotLimit fails in trunk

2020-09-24 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu closed HDFS-15595.


> TestSnapshotCommands.testMaxSnapshotLimit fails in trunk
> 
>
> Key: HDFS-15595
> URL: https://issues.apache.org/jira/browse/HDFS-15595
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, snapshots, test
>Reporter: Mingliang Liu
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.4.0
>
>
> See 
> [this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/]
>  for a sample error.
> Sample error stack:
> {quote}
> Error Message
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
> Stacktrace
> java.lang.AssertionError: 
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934)
>   at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942)
>   at 
> org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {quote}
> I can also reproduce this locally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15595) TestSnapshotCommands.testMaxSnapshotLimit fails in trunk

2020-09-24 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201799#comment-17201799
 ] 

Mingliang Liu commented on HDFS-15595:
--

Because there is no code change for this JIRA, I am closing it with an empty 
"Fix Version/s" so the release manager does not need to look at this one. Thanks 
[~shashikant] for taking care of it.

> TestSnapshotCommands.testMaxSnapshotLimit fails in trunk
> 
>
> Key: HDFS-15595
> URL: https://issues.apache.org/jira/browse/HDFS-15595
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, snapshots, test
>Reporter: Mingliang Liu
>Assignee: Shashikant Banerjee
>Priority: Major
>
> See 
> [this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/]
>  for a sample error.
> Sample error stack:
> {quote}
> Error Message
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
> Stacktrace
> java.lang.AssertionError: 
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934)
>   at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942)
>   at 
> org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {quote}
> I can also reproduce this locally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15595) TestSnapshotCommands.testMaxSnapshotLimit fails in trunk

2020-09-24 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15595:
-
Fix Version/s: (was: 3.4.0)

> TestSnapshotCommands.testMaxSnapshotLimit fails in trunk
> 
>
> Key: HDFS-15595
> URL: https://issues.apache.org/jira/browse/HDFS-15595
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, snapshots, test
>Reporter: Mingliang Liu
>Assignee: Shashikant Banerjee
>Priority: Major
>
> See 
> [this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/]
>  for a sample error.
> Sample error stack:
> {quote}
> Error Message
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
> Stacktrace
> java.lang.AssertionError: 
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934)
>   at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942)
>   at 
> org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {quote}
> I can also reproduce this locally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-09-24 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15025:
-
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> The non-volatile memory NVDIMM is faster than SSD, and it can be used 
> simultaneously with RAM, DISK, and SSD. Storing HDFS data directly on 
> NVDIMM not only improves the response rate of HDFS but also ensures the 
> reliability of the data.
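
For readers who have not used heterogeneous storage before, a minimal sketch of how a client could opt into the new media once it is available is below. The "ALL_NVDIMM" policy name and the target path are assumptions for illustration; check the ArchivalStorage documentation of the release that ships this feature for the exact names.

{code:java}
// Sketch only: request an NVDIMM-backed storage policy for a directory and
// read the policy back. Policy name and path are assumed, not confirmed.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NvdimmPolicySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(conf)) {
      Path dir = new Path("/hot-data");          // hypothetical directory
      fs.mkdirs(dir);
      fs.setStoragePolicy(dir, "ALL_NVDIMM");    // assumed policy name
      System.out.println("Policy now: " + fs.getStoragePolicy(dir));
    }
  }
}
{code}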



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15595) TestSnapshotCommands.testMaxSnapshotLimit fails in trunk

2020-09-23 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15595:
-
Summary: TestSnapshotCommands.testMaxSnapshotLimit fails in trunk  (was: 
stSnapshotCommands.testMaxSnapshotLimit fails in trunk)

> TestSnapshotCommands.testMaxSnapshotLimit fails in trunk
> 
>
> Key: HDFS-15595
> URL: https://issues.apache.org/jira/browse/HDFS-15595
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, snapshots, test
>Reporter: Mingliang Liu
>Priority: Major
>
> See 
> [this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/]
>  for a sample error.
> Sample error stack:
> {quote}
> Error Message
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
> Stacktrace
> java.lang.AssertionError: 
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934)
>   at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942)
>   at 
> org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {quote}
> I can also reproduce this locally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15595) stSnapshotCommands.testMaxSnapshotLimit fails in trunk

2020-09-23 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201093#comment-17201093
 ] 

Mingliang Liu commented on HDFS-15595:
--

CC: [~shashikant] [~szetszwo] and [~msingh]


> stSnapshotCommands.testMaxSnapshotLimit fails in trunk
> --
>
> Key: HDFS-15595
> URL: https://issues.apache.org/jira/browse/HDFS-15595
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, snapshots, test
>Reporter: Mingliang Liu
>Priority: Major
>
> See 
> [this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/]
>  for a sample error.
> Sample error stack:
> {quote}
> Error Message
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
> Stacktrace
> java.lang.AssertionError: 
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934)
>   at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942)
>   at 
> org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {quote}
> I can also reproduce this locally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15595) stSnapshotCommands.testMaxSnapshotLimit fails in trunk

2020-09-23 Thread Mingliang Liu (Jira)
Mingliang Liu created HDFS-15595:


 Summary: stSnapshotCommands.testMaxSnapshotLimit fails in trunk
 Key: HDFS-15595
 URL: https://issues.apache.org/jira/browse/HDFS-15595
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, snapshots, test
Reporter: Mingliang Liu


See 
[this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/]
 for a sample error.

Sample error stack:
{quote}
Error Message
The real output is: createSnapshot: Failed to create snapshot: there are 
already 4 snapshot(s) and the per directory snapshot limit is 3
.
 It should contain: Failed to add snapshot: there are already 3 snapshot(s) and 
the max snapshot limit is 3
Stacktrace
java.lang.AssertionError: 
The real output is: createSnapshot: Failed to create snapshot: there are 
already 4 snapshot(s) and the per directory snapshot limit is 3
.
 It should contain: Failed to add snapshot: there are already 3 snapshot(s) and 
the max snapshot limit is 3
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934)
at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942)
at 
org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{quote}

I can also reproduce this locally.
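
The failure boils down to a substring check on the shell output: the test runs the createSnapshot command and asserts that the captured output contains an expected message. A generic, self-contained equivalent of that check is sketched below; it mirrors the assertion pattern in the stack trace (toolRun followed by assertTrue) but is not copied from DFSTestUtil.

{code:java}
import static org.junit.Assert.assertTrue;

final class OutputContainsCheck {
  // Same shape as the failing assertion: report both the real output and the
  // expected substring so a mismatch like the one above is easy to read.
  static void assertOutputContains(String actualOutput, String expected) {
    assertTrue("The real output is: " + actualOutput
            + ".\n It should contain: " + expected,
        actualOutput.contains(expected));
  }
}
{code}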



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15595) stSnapshotCommands.testMaxSnapshotLimit fails in trunk

2020-09-23 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15595:
-
Target Version/s: 3.4.0

> stSnapshotCommands.testMaxSnapshotLimit fails in trunk
> --
>
> Key: HDFS-15595
> URL: https://issues.apache.org/jira/browse/HDFS-15595
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, snapshots, test
>Reporter: Mingliang Liu
>Priority: Major
>
> See 
> [this|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2326/1/testReport/org.apache.hadoop.hdfs/TestSnapshotCommands/testMaxSnapshotLimit/]
>  for a sample error.
> Sample error stack:
> {quote}
> Error Message
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
> Stacktrace
> java.lang.AssertionError: 
> The real output is: createSnapshot: Failed to create snapshot: there are 
> already 4 snapshot(s) and the per directory snapshot limit is 3
> .
>  It should contain: Failed to add snapshot: there are already 3 snapshot(s) 
> and the max snapshot limit is 3
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.apache.hadoop.hdfs.DFSTestUtil.toolRun(DFSTestUtil.java:1934)
>   at org.apache.hadoop.hdfs.DFSTestUtil.FsShellRun(DFSTestUtil.java:1942)
>   at 
> org.apache.hadoop.hdfs.TestSnapshotCommands.testMaxSnapshotLimit(TestSnapshotCommands.java:130)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {quote}
> I can also reproduce this locally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15574) Remove unnecessary sort of block list in DirectoryScanner

2020-09-14 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17195661#comment-17195661
 ] 

Mingliang Liu commented on HDFS-15574:
--

V3 patch looks good to me. Thanks,

> Remove unnecessary sort of block list in DirectoryScanner
> -
>
> Key: HDFS-15574
> URL: https://issues.apache.org/jira/browse/HDFS-15574
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-15574.001.patch, HDFS-15574.002.patch, 
> HDFS-15574.003.patch
>
>
> These lines of code in DirectoryScanner#scan() obtain a snapshot of the 
> finalized blocks from memory and then sort them under the DN lock. However, 
> the blocks are stored in a sorted structure (FoldedTreeSet), and hence the 
> sort should be unnecessary.
> {code}
>   final List bl = dataset.getFinalizedBlocks(bpid);
>   Collections.sort(bl); // Sort based on blockId
> {code}
> This Jira removes the sort, and renames the getFinalizedBlocks to 
> getSortedFinalizedBlocks to make the intent of the method more clear.
> Also added a test, just in case the underlying block structure is ever 
> changed to something unsorted.
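
To make the proposed guard test concrete, here is a small, self-contained sketch of the kind of check it could perform: verify that the returned list is ordered by block id so the caller can safely skip its own sort. It is written against the public Block type only and is an illustration, not the committed test.

{code:java}
import java.util.List;
import org.apache.hadoop.hdfs.protocol.Block;

final class SortedBlockListCheck {
  // Fails fast if any adjacent pair is out of order by block id.
  static void assertSortedByBlockId(List<? extends Block> blocks) {
    for (int i = 1; i < blocks.size(); i++) {
      if (blocks.get(i - 1).getBlockId() > blocks.get(i).getBlockId()) {
        throw new AssertionError(
            "Finalized block list is not sorted by blockId at index " + i);
      }
    }
  }
}
{code}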



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15573) Only log warning if considerLoad and considerStorageType are both true

2020-09-12 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17194670#comment-17194670
 ] 

Mingliang Liu commented on HDFS-15573:
--

Failing tests are not related.

> Only log warning if considerLoad and considerStorageType are both true
> --
>
> Key: HDFS-15573
> URL: https://issues.apache.org/jira/browse/HDFS-15573
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15573.001.patch
>
>
> When we implemented HDFS-15255, we added a log message to warn if both 
> dfs.namenode.read.considerLoad and dfs.namenode.read.considerStorageType were 
> set to true, as they cannot be used together.
> Somehow, we failed to wrap the log message in an IF statement, so it is 
> printed unconditionally, which is incorrect.
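
The fix called for here is simply a conditional around the warning. A self-contained sketch of that guard is below; the class name, default values, and message text are illustrative assumptions rather than the committed patch.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class ReadConsiderFlagsCheck {
  private static final Logger LOG =
      LoggerFactory.getLogger(ReadConsiderFlagsCheck.class);

  // Emit the warning only when both read-time settings are actually enabled.
  static void warnIfBothEnabled(Configuration conf) {
    boolean considerLoad =
        conf.getBoolean("dfs.namenode.read.considerLoad", false);
    boolean considerStorageType =
        conf.getBoolean("dfs.namenode.read.considerStorageType", false);
    if (considerLoad && considerStorageType) {
      LOG.warn("dfs.namenode.read.considerLoad and "
          + "dfs.namenode.read.considerStorageType cannot be used together.");
    }
  }
}
{code}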



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15573) Only log warning if considerLoad and considerStorageType are both true

2020-09-12 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15573:
-
Fix Version/s: 3.4.0
   3.3.1
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to {{trunk}} and {{branch-3.3}} branches. Thanks for your 
contribution, [~sodonnell]. Thanks for the review, [~leosun08].

> Only log warning if considerLoad and considerStorageType are both true
> --
>
> Key: HDFS-15573
> URL: https://issues.apache.org/jira/browse/HDFS-15573
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15573.001.patch
>
>
> When we implemented HDFS-15255, we added a log message to warn if both 
> dfs.namenode.read.considerLoad and dfs.namenode.read.considerStorageType were 
> set to true, as they cannot be used together.
> Somehow, we failed to wrap the log message in an IF statement, so it is 
> printed unconditionally, which is incorrect.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15574) Remove unnecessary sort of block list in DirectoryScanner

2020-09-11 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17194618#comment-17194618
 ] 

Mingliang Liu commented on HDFS-15574:
--

+1

Could you also update the javadoc of the API along with the API rename? Test 
failures seem unrelated. Could you confirm, e.g. by running it locally with the 
patch?

> Remove unnecessary sort of block list in DirectoryScanner
> -
>
> Key: HDFS-15574
> URL: https://issues.apache.org/jira/browse/HDFS-15574
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-15574.001.patch, HDFS-15574.002.patch
>
>
> These lines of code in DirectoryScanner#scan() obtain a snapshot of the 
> finalized blocks from memory and then sort them under the DN lock. However, 
> the blocks are stored in a sorted structure (FoldedTreeSet), and hence the 
> sort should be unnecessary.
> {code}
>   final List bl = dataset.getFinalizedBlocks(bpid);
>   Collections.sort(bl); // Sort based on blockId
> {code}
> This Jira removes the sort, and renames the getFinalizedBlocks to 
> getSortedFinalizedBlocks to make the intent of the method more clear.
> Also added a test, just in case the underlying block structure is ever 
> changed to something unsorted.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15573) Only log warning if considerLoad and considerStorageType are both true

2020-09-11 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17194614#comment-17194614
 ] 

Mingliang Liu commented on HDFS-15573:
--

+1

Not sure why the QA did not get triggered. I manually created 
[one|https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/149/] and hopefully 
it will post results in a couple of hours. Thanks,

> Only log warning if considerLoad and considerStorageType are both true
> --
>
> Key: HDFS-15573
> URL: https://issues.apache.org/jira/browse/HDFS-15573
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-15573.001.patch
>
>
> When we implemented HDFS-15255, we added a log message to warn if both 
> dfs.namenode.read.considerLoad and dfs.namenode.read.considerStorageType were 
> set to true, as they cannot be used together.
> Somehow, we failed to wrap the log message in an IF statement, so it is 
> printed unconditionally, which is incorrect.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15565) Remove the invalid code in the Balancer#doBalance() method.

2020-09-10 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17193774#comment-17193774
 ] 

Mingliang Liu commented on HDFS-15565:
--

Closing this issue and removing the fix versions.

> Remove the invalid code in the Balancer#doBalance() method.
> ---
>
> Key: HDFS-15565
> URL: https://issues.apache.org/jira/browse/HDFS-15565
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15565.001.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In the Balancer#doBalance() method, an invalid line of code is added, as 
> follows:
>  static private int doBalance(Collection namenodes,
>  Collection nsIds, final BalancerParameters p, Configuration conf)
>  throws IOException, InterruptedException
> { ... System.out.println("Time Stamp Iteration# Bytes Already Moved Bytes 
> Left To Move Bytes Being Moved"); ... }
>  
> I think it was originally used for testing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Closed] (HDFS-15565) Remove the invalid code in the Balancer#doBalance() method.

2020-09-10 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu closed HDFS-15565.


> Remove the invalid code in the Balancer#doBalance() method.
> ---
>
> Key: HDFS-15565
> URL: https://issues.apache.org/jira/browse/HDFS-15565
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15565.001.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In the Balancer#doBalance() method, an invalid line of code is added, as 
> follows:
>  static private int doBalance(Collection namenodes,
>  Collection nsIds, final BalancerParameters p, Configuration conf)
>  throws IOException, InterruptedException
> { ... System.out.println("Time Stamp Iteration# Bytes Already Moved Bytes 
> Left To Move Bytes Being Moved"); ... }
>  
> I think it was originally used for testing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15565) Remove the invalid code in the Balancer#doBalance() method.

2020-09-10 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15565:
-
Fix Version/s: (was: 3.3.0)

> Remove the invalid code in the Balancer#doBalance() method.
> ---
>
> Key: HDFS-15565
> URL: https://issues.apache.org/jira/browse/HDFS-15565
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15565.001.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In the Balancer#doBalance() method, an invalid line of code is added, as 
> follows:
>  static private int doBalance(Collection namenodes,
>  Collection nsIds, final BalancerParameters p, Configuration conf)
>  throws IOException, InterruptedException
> { ... System.out.println("Time Stamp Iteration# Bytes Already Moved Bytes 
> Left To Move Bytes Being Moved"); ... }
>  
> I think it was originally used for testing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17186782#comment-17186782
 ] 

Mingliang Liu edited comment on HDFS-15546 at 8/28/20, 8:04 PM:


Thanks for good discussion [~jianghuazhu] and [~chaosun]

Since this is not committed to any branch, I have cleared the "Fix Version/s" 
and marked this Jira as "Not a Problem". "Resolved" with fix versions only 
applies to JIRAs that have code changes.


was (Author: liuml07):
Thanks for good discussion [~jianghuazhu] and [~chaosun]

Since this is not committed to any branch, I have cleared the "Fix Version/s" 
and marked this Jira as "Not a Problem". A Jira which is "Resolved" with fix 
versions only apply to code change.

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When the DFSZKFailoverController#create() method is called, 
> NameNode#initializeGenericKeys() ends up being called twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> The second call:
> NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> In fact, the parameters passed in the two calls are the same. Here you can 
> remove the first call without affecting the program itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Closed] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu closed HDFS-15546.


> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When the DFSZKFailoverController#create() method is called, 
> NameNode#initializeGenericKeys() ends up being called twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> The second call:
> NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> In fact, the parameters passed in the two calls are the same. Here you can 
> remove the first call without affecting the program itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu resolved HDFS-15546.
--
Resolution: Not A Problem

Thanks for good discussion [~jianghuazhu] and [~chaosun]

Since this is not committed to any branch, I have cleared the "Fix Version/s" 
and marked this Jira as "Not a Problem". A Jira which is "Resolved" with fix 
versions only apply to code change.

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When the DFSZKFailoverController#create() method is called, 
> NameNode#initializeGenericKeys() ends up being called twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> The second call:
> NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> In fact, the parameters passed in the two calls are the same. Here you can 
> remove the first call without affecting the program itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reopened HDFS-15546:
--

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When the DFSZKFailoverController#create() method is called, 
> NameNode#initializeGenericKeys() ends up being called twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> The second call:
> NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> In fact, the parameters passed in the two calls are the same. Here you can 
> remove the first call without affecting the program itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15546) Remove duplicate initializeGenericKeys method when creating DFSZKFailoverController

2020-08-28 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15546:
-
Fix Version/s: (was: 3.3.0)

> Remove duplicate initializeGenericKeys method when creating 
> DFSZKFailoverController
> ---
>
> Key: HDFS-15546
> URL: https://issues.apache.org/jira/browse/HDFS-15546
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: jianghua zhu
>Assignee: jianghua zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15546.001.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When the DFSZKFailoverController#create() method is called, 
> NameNode#initializeGenericKeys() ends up being called twice.
> First call:
> DFSZKFailoverController#create() {
> ...
> NameNode.initializeGenericKeys(localNNConf, nsId, nnId);
> ...
> }
> The second call:
> NNHAServiceTarget constructor:
> NNHAServiceTarget() {
> ...
> NameNode.initializeGenericKeys(targetConf, nsId, nnId);
> ...
> }
> In fact, the parameters passed in the two calls are the same. Here you can 
> remove the first call without affecting the program itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15527) Error On adding new Namespace

2020-08-12 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17176544#comment-17176544
 ] 

Mingliang Liu commented on HDFS-15527:
--

As your errors show, did you format the NN first? Also, did you follow the full 
documentation, such as 
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/Federation.html?
 Overall this does not look like a bug unless you provide more context. For 
general usage questions, please send email to u...@hadoop.apache.org

> Error On adding new Namespace
> -
>
> Key: HDFS-15527
> URL: https://issues.apache.org/jira/browse/HDFS-15527
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, ha, nn
>Affects Versions: 3.0.0
>Reporter: Thangamani Murugasamy
>Priority: Blocker
>
> We have one namespace and are trying to add another one, but we always get the 
> error message below. 
>  
> The new name nodes never become part of the existing namespace; we also don't 
> see any "nn" directories before adding the namespace.
>  
> 2020-08-12 04:59:53,947 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: NameNode is not formatted.
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:237)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1084)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:709)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:665)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:727)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:950)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:929)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
> 2020-08-12 04:59:53,955 DEBUG 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Closing log when already 
> closed
> ==
>  
>  
> 2020-08-12 04:59:53,976 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> java.io.IOException: NameNode is not formatted.
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:237)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1084)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:709)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:665)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:727)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:950)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:929)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
> 2020-08-12 04:59:53,978 DEBUG org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: java.io.IOException: NameNode is not formatted.
> 1: java.io.IOException: NameNode is not formatted.
>  at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:265)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1726)
> Caused by: java.io.IOException: NameNode is not formatted.
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:237)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1084)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:709)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:665)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:727)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:950)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:929)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
> 2020-08-12 04:59:53,979 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: java.io.IOException: NameNode is not formatted.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15520) Use visitor pattern to visit namespace tree

2020-08-11 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17175685#comment-17175685
 ] 

Mingliang Liu commented on HDFS-15520:
--

Let's link PRs to JIRAs manually if they are not showing automatically?

> Use visitor pattern to visit namespace tree
> ---
>
> Key: HDFS-15520
> URL: https://issues.apache.org/jira/browse/HDFS-15520
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
> Fix For: 3.4.0
>
>
> In order to allow the FsImageValidation tool to verify the namespace 
> structure, we use a visitor pattern so that the tool can visit all the INodes 
> and all the snapshots in the namespace tree.
> The existing INode.dumpTreeRecursively() can also be implemented by a visitor.
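
For readers unfamiliar with the pattern, a generic illustration is sketched below. The interfaces and node classes are invented for the example and are not the actual types added by this JIRA; they only show how a visitor can walk a tree the same way dumpTreeRecursively() does.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Invented types for illustration only; not the Hadoop INode hierarchy.
interface NamespaceVisitor {
  void visitFile(String path);
  void visitDirectory(String path);
}

abstract class Node {
  final String path;
  Node(String path) { this.path = path; }
  abstract void accept(NamespaceVisitor visitor);
}

class FileNode extends Node {
  FileNode(String path) { super(path); }
  @Override
  void accept(NamespaceVisitor visitor) { visitor.visitFile(path); }
}

class DirNode extends Node {
  final List<Node> children = new ArrayList<>();
  DirNode(String path) { super(path); }
  @Override
  void accept(NamespaceVisitor visitor) {
    visitor.visitDirectory(path);
    for (Node child : children) {
      child.accept(visitor);   // recurse over the subtree, like dumpTreeRecursively()
    }
  }
}
{code}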



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15514) Remove useless dfs.webhdfs.enabled

2020-08-06 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17172591#comment-17172591
 ] 

Mingliang Liu commented on HDFS-15514:
--

+1

The target version should be all 3.1+ versions I guess?

> Remove useless dfs.webhdfs.enabled
> --
>
> Key: HDFS-15514
> URL: https://issues.apache.org/jira/browse/HDFS-15514
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-15514.001.patch
>
>
> After HDFS-7985 & HDFS-8349, "dfs.webhdfs.enabled" is useless. We should 
> remove it from the code base.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15288) Add Available Space Rack Fault Tolerant BPP

2020-08-05 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171585#comment-17171585
 ] 

Mingliang Liu commented on HDFS-15288:
--

Looks great. Thanks [~ayushtkn]

> Add Available Space Rack Fault Tolerant BPP
> ---
>
> Key: HDFS-15288
> URL: https://issues.apache.org/jira/browse/HDFS-15288
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15288-01.patch, HDFS-15288-02.patch, 
> HDFS-15288-03.patch, HDFS-15288-Addendum-01.patch
>
>
> The present {{AvailableSpaceBlockPlacementPolicy}} extends the default block 
> placement policy, which makes it apt for replicated files, but it is not very 
> efficient for EC files, which by default use 
> {{BlockPlacementPolicyRackFaultTolerant}}. So this proposes adding a new BPP with 
> a similar optimization to ASBPP while keeping the spread of blocks across the 
> maximum number of racks, i.e. as in RackFaultTolerantBPP.
> This could extend {{BlockPlacementPolicyRackFaultTolerant}}, rather than 
> {{BlockPlacementPolicyDefault}} as ASBPP does, and keep the other optimization 
> logic the same as ASBPP.
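
As with the existing policies, the new BPP would be selected through configuration. The snippet below is only a sketch of how that typically looks; the configuration key for the EC placement policy and the fully qualified class name are assumptions that should be checked against hdfs-default.xml of the release that ships this feature.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class EcPlacementPolicyConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Assumed key and class name; verify against your Hadoop release.
    conf.set("dfs.block.placement.ec.classname",
        "org.apache.hadoop.hdfs.server.blockmanagement."
            + "AvailableSpaceRackFaultTolerantBlockPlacementPolicy");
    System.out.println(conf.get("dfs.block.placement.ec.classname"));
  }
}
{code}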



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15499) Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion

2020-08-04 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15499:
-
Fix Version/s: 3.4.0
   3.3.1
   2.10.1
   3.2.2
   3.1.4
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to all fixed versions. Thanks [~aajisaka] for review.

> Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion
> ---
>
> Key: HDFS-15499
> URL: https://issues.apache.org/jira/browse/HDFS-15499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
> Fix For: 3.1.4, 3.2.2, 2.10.1, 3.3.1, 3.4.0
>
>
> In [HADOOP-14040] we use the shaded aws-sdk uber-JAR instead of the s3 jar in 
> hadoop-project/pom.xml. After that, we should also update the httpfs `pom.xml` 
> file to exclude the correct jar dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15288) Add Available Space Rack Fault Tolerant BPP

2020-08-04 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171182#comment-17171182
 ] 

Mingliang Liu commented on HDFS-15288:
--

Useful improvement! Do you mind adding a release note to this JIRA since it 
brings a new BPP as well as config changes? Thanks,

> Add Available Space Rack Fault Tolerant BPP
> ---
>
> Key: HDFS-15288
> URL: https://issues.apache.org/jira/browse/HDFS-15288
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15288-01.patch, HDFS-15288-02.patch, 
> HDFS-15288-03.patch, HDFS-15288-Addendum-01.patch
>
>
> The present {{AvailableSpaceBlockPlacementPolicy}} extends the default block 
> placement policy, which makes it apt for replicated files, but it is not very 
> efficient for EC files, which by default use 
> {{BlockPlacementPolicyRackFaultTolerant}}. So this proposes adding a new BPP with 
> a similar optimization to ASBPP while keeping the spread of blocks across the 
> maximum number of racks, i.e. as in RackFaultTolerantBPP.
> This could extend {{BlockPlacementPolicyRackFaultTolerant}}, rather than 
> {{BlockPlacementPolicyDefault}} as ASBPP does, and keep the other optimization 
> logic the same as ASBPP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15508) [JDK 11] Fix javadoc errors in hadoop-hdfs-rbf module

2020-08-04 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171173#comment-17171173
 ] 

Mingliang Liu commented on HDFS-15508:
--

+1

The checkstyle warning is related, but I think we need that line to be longer than 80 chars.

HADOOP-17179 is resolved. Will this get a clean javadoc report?

> [JDK 11] Fix javadoc errors in hadoop-hdfs-rbf module
> -
>
> Key: HDFS-15508
> URL: https://issues.apache.org/jira/browse/HDFS-15508
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-15508.01.patch
>
>
> {noformat}
> [ERROR] 
> /Users/aajisaka/git/hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/security/token/package-info.java:21:
>  error: reference not found
> [ERROR]  * Implementations should extend {@link 
> AbstractDelegationTokenSecretManager}.
> [ERROR] ^
> {noformat}
> Full error log: 
> https://gist.github.com/aajisaka/a7dde76a4ba2942f60bf6230ec9ed6e1
> How to reproduce the failure:
> * Remove {{true}} from pom.xml
> * Run {{mvn process-sources javadoc:javadoc-no-fork}}
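
A common way to resolve a javadoc "reference not found" error on a {@link} inside package-info.java is to fully qualify the target so javadoc can resolve it without an import. The sketch below shows that shape; it is illustrative and not necessarily identical to the attached patch.

{code:java}
/**
 * Implementations should extend
 * {@link org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager}.
 */
package org.apache.hadoop.hdfs.server.federation.router.security.token;
{code}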



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-08-04 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HDFS-15025:


Assignee: YaYun Wang

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: YaYun Wang
>Assignee: YaYun Wang
>Priority: Major
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>
> The non-volatile memory NVDIMM is faster than SSD, and it can be used 
> simultaneously with RAM, DISK, and SSD. Storing HDFS data directly on 
> NVDIMM not only improves the response rate of HDFS but also ensures the 
> reliability of the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15499) Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion

2020-08-03 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15499:
-
Status: Patch Available  (was: Open)

> Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion
> ---
>
> Key: HDFS-15499
> URL: https://issues.apache.org/jira/browse/HDFS-15499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
>
> In [HADOOP-14040] we use the shaded aws-sdk uber-JAR instead of the s3 jar in 
> hadoop-project/pom.xml. After that, we should also update the httpfs `pom.xml` 
> file to exclude the correct jar dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15499) Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion

2020-08-03 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170483#comment-17170483
 ] 

Mingliang Liu commented on HDFS-15499:
--

I did not find {{aws-java-sdk-s3}} included in the httpfs module
{code:bash}
$ mvn clean dependency:tree -Dverbose | grep aws
$ echo $?
1
$
{code}

The reason is that the hadoop-common and hadoop-hdfs modules no longer include 
aws-java-sdk-s3. So this JIRA is to clean that up to avoid confusion.

> Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion
> ---
>
> Key: HDFS-15499
> URL: https://issues.apache.org/jira/browse/HDFS-15499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Reporter: Mingliang Liu
>Priority: Major
>
> In [HADOOP-14040] we use the shaded aws-sdk uber-JAR instead of the s3 jar in 
> hadoop-project/pom.xml. After that, we should also update the httpfs `pom.xml` 
> file to exclude the correct jar dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15499) Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion

2020-08-03 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HDFS-15499:


Assignee: Mingliang Liu

> Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion
> ---
>
> Key: HDFS-15499
> URL: https://issues.apache.org/jira/browse/HDFS-15499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Major
>
> In [HADOOP-14040] we use the shaded aws-sdk uber-JAR instead of the s3 jar in 
> hadoop-project/pom.xml. After that, we should also update the httpfs `pom.xml` 
> file to exclude the correct jar dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15499) Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion

2020-08-03 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15499:
-
Summary: Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion  (was: 
Exclude aws-java-sdk-bundle from httpfs pom.xml)

> Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion
> ---
>
> Key: HDFS-15499
> URL: https://issues.apache.org/jira/browse/HDFS-15499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Reporter: Mingliang Liu
>Priority: Major
>
> In [HADOOP-14040] we use the shaded aws-sdk uber-JAR instead of the s3 jar in 
> hadoop-project/pom.xml. After that, we should also update the httpfs `pom.xml` 
> file to exclude the correct jar dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-08-03 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170454#comment-17170454
 ] 

Mingliang Liu commented on HDFS-15025:
--

Overall the patch looks good. I do not have any major comments after a quick 
look.

1) Could you create a pull request (PR) to make the review process easier? I'll 
check the PR again later this week.
2) Do you mind changing your "Full name" in JIRA to your real name? I do not 
believe this is required, but most folks use real names. I can add you to the 
contributor list and assign this JIRA to you.
3) Could you update the ArchivalStorage.md file with a detailed introduction?

Thanks!
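
For item 3, a minimal, illustrative sketch of the existing client-side storage 
policy API that ArchivalStorage.md documents (assuming the standard 
FileSystem#setStoragePolicy/getStoragePolicy methods). The path and the ALL_SSD 
policy name are placeholders to show the API shape only; they are not the new 
NVDIMM policy introduced by this patch.

{code:java}
// Illustrative sketch only: shows the existing storage-policy client API that
// ArchivalStorage.md covers. The path and ALL_SSD are placeholders; they are
// not the NVDIMM policy added by this patch.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StoragePolicyExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path dir = new Path("/data/hot");       // hypothetical directory
    fs.setStoragePolicy(dir, "ALL_SSD");    // existing policy, used only as an example
    System.out.println("Current policy: " + fs.getStoragePolicy(dir));
  }
}
{code}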

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: hadoop_hdfs_hw
>Priority: Major
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>
> The non-volatile memory NVDIMM is faster than SSD, and it can be used 
> alongside RAM, DISK, and SSD. Storing HDFS data directly on 
> NVDIMM not only improves the response rate of HDFS, but also ensures the 
> reliability of the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-08-03 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170454#comment-17170454
 ] 

Mingliang Liu edited comment on HDFS-15025 at 8/3/20, 10:24 PM:


Overall the patch looks good. I do not have any major comments after a quick 
look.

1) Could you create a pull request (PR) to make the review process easier? I'll 
check the PR again later this week.
2) [~wangyayun] Do you mind changing your "Full name" in JIRA to your real 
name? I do not believe this is required, but most folks use real names. I can 
add you to the contributor list and assign this JIRA to you.
3) Could you update the ArchivalStorage.md file with a detailed introduction?

Thanks!


was (Author: liuml07):
Overall the patch looks good. I do not have any major comments after a quick 
look.

1) Could you create a pull request (PR) to make the review process easier? I'll 
check the PR again later this week.
2) Do you mind changing your "Full name" in JIRA to your real name? I do not 
believe this is required, but most folks use real names. I can add you to the 
contributor list and assign this JIRA to you.
3) Could you update the ArchivalStorage.md file with a detailed introduction?

Thanks!

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: hadoop_hdfs_hw
>Priority: Major
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>
> The non-volatile memory NVDIMM is faster than SSD, and it can be used 
> alongside RAM, DISK, and SSD. Storing HDFS data directly on 
> NVDIMM not only improves the response rate of HDFS, but also ensures the 
> reliability of the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15499) Exclude aws-java-sdk-bundle from httpfs pom.xml

2020-07-29 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15499:
-
Summary: Exclude aws-java-sdk-bundle from httpfs pom.xml  (was: Exclude 
aws-java-sdk-bundle from httpfs)

> Exclude aws-java-sdk-bundle from httpfs pom.xml
> ---
>
> Key: HDFS-15499
> URL: https://issues.apache.org/jira/browse/HDFS-15499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Reporter: Mingliang Liu
>Priority: Major
>
> In [HADOOP-14040] we use the shaded aws-sdk uber-JAR instead of the s3 jar in 
> hadoop-project/pom.xml. After that, we should also update the httpfs `pom.xml` 
> file to exclude the correct jar dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15499) Exclude aws-java-sdk-bundle from httpfs

2020-07-29 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15499:
-
Description: In [HADOOP-14040] we use the shaded aws-sdk uber-JAR instead 
of the s3 jar in hadoop-project/pom.xml. After that, we should also update the httpfs 
`pom.xml` file to exclude the correct jar dependency.  (was: In 
[[HADOOP-14040]] we use the shaded aws-sdk uber-JAR instead of the s3 jar in 
hadoop-project/pom.xml. After that, we should update httpfs `pom.xml` to 
exclude the correct jar dependency.)

> Exclude aws-java-sdk-bundle from httpfs
> ---
>
> Key: HDFS-15499
> URL: https://issues.apache.org/jira/browse/HDFS-15499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs
>Reporter: Mingliang Liu
>Priority: Major
>
> In [HADOOP-14040] we use the shaded aws-sdk uber-JAR instead of the s3 jar in 
> hadoop-project/pom.xml. After that, we should also update the httpfs `pom.xml` 
> file to exclude the correct jar dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15499) Exclude aws-java-sdk-bundle from httpfs

2020-07-29 Thread Mingliang Liu (Jira)
Mingliang Liu created HDFS-15499:


 Summary: Exclude aws-java-sdk-bundle from httpfs
 Key: HDFS-15499
 URL: https://issues.apache.org/jira/browse/HDFS-15499
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: httpfs
Reporter: Mingliang Liu


In [[HADOOP-14040]] we use the shaded aws-sdk uber-JAR instead of the s3 jar in 
hadoop-project/pom.xml. After that, we should update httpfs `pom.xml` to 
exclude the correct jar dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15025) Applying NVDIMM storage media to HDFS

2020-07-21 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162285#comment-17162285
 ] 

Mingliang Liu commented on HDFS-15025:
--

Good feature! Are the failing tests related? I plan to review later this month 
if it is not already merged. Thanks,

> Applying NVDIMM storage media to HDFS
> -
>
> Key: HDFS-15025
> URL: https://issues.apache.org/jira/browse/HDFS-15025
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, hdfs
>Reporter: hadoop_hdfs_hw
>Priority: Major
> Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, 
> HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, 
> HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch
>
>
> The non-volatile memory NVDIMM is faster than SSD, and it can be used 
> alongside RAM, DISK, and SSD. Storing HDFS data directly on 
> NVDIMM not only improves the response rate of HDFS, but also ensures the 
> reliability of the data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15391) Standby NameNode due loads the corruption edit log, the service exits and cannot be restarted

2020-06-13 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17134657#comment-17134657
 ] 

Mingliang Liu commented on HDFS-15391:
--

If I'm reading it correctly, I think this is the same as HDFS-15175 and we can 
close this one as "Duplicate" and move all discussions there. So full context 
will be tracked together.

I agree with [~hexiaoqiao] that this very likely has something to do with async 
edit logging. [~daryn] may shed some insights here? A deep copy would take a 
"snapshot" of the current Block instances to avoid out-of-sync issues, so the 
patch (posted as a comment in HDFS-15175) is promising. I'm just not sure 
whether doing a deep copy every time is too expensive.

Is it possible to create a unit test where the problem can be reproduced, given 
the theory above? A hacky one (not necessarily committed into the code base) 
would be very helpful.

Thanks,
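
To make the deep-copy idea concrete, here is a minimal sketch. This is not the 
HDFS-15175 patch or the real FSEditLogAsync code; the class and field names are 
illustrative. The point is that the async logger consumes an immutable snapshot 
taken at enqueue time, so later mutations of the live block cannot leak into 
the serialized edit op.

{code:java}
// Illustrative sketch only, not the HDFS-15175 patch: the async edit-log thread
// works on an immutable snapshot, so later changes to the live block (length or
// generation stamp updates) cannot appear in the serialized edit op.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class LiveBlock {                 // stands in for the mutable Block object
  final long blockId;
  volatile long numBytes;
  volatile long genStamp;
  LiveBlock(long id) { this.blockId = id; }
}

final class BlockSnapshot {       // deep copy: plain immutable values
  final long blockId, numBytes, genStamp;
  BlockSnapshot(LiveBlock b) {
    this.blockId = b.blockId;
    this.numBytes = b.numBytes;
    this.genStamp = b.genStamp;
  }
}

class AsyncEditQueue {
  private final BlockingQueue<BlockSnapshot> queue = new LinkedBlockingQueue<>();

  // Called on the handler thread: snapshot first, then enqueue.
  void logOp(LiveBlock b) {
    queue.add(new BlockSnapshot(b));
  }

  // Called on the async edit-log thread: sees only the frozen copy.
  BlockSnapshot take() throws InterruptedException {
    return queue.take();
  }
}
{code}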

> Standby NameNode due loads the corruption edit log, the service exits and 
> cannot be restarted
> -
>
> Key: HDFS-15391
> URL: https://issues.apache.org/jira/browse/HDFS-15391
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: huhaiyang
>Priority: Critical
>
> In the version 3.2.0 production cluster environment,
>  we found that due to edit log corruption, the Standby NameNode could not 
> properly load the edit log, resulting in abnormal exit of the service and 
> failure to restart.
> {noformat}
> The specific scenario is that Flink writes to HDFS (a replicated file), and when 
> an exception occurs while writing the file, the following operations are 
> performed:
> 1.close file
> 2.open file
> 3.truncate file
> 4.append file
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15379) DataStreamer should reset thread interrupted state in createBlockOutputStream

2020-06-07 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17127534#comment-17127534
 ] 

Mingliang Liu commented on HDFS-15379:
--

Thanks [~pilchard] for filing this. I agree with [~brahmareddy]. Interruption 
is used as cancellation by an external thread to close the streamer thread. If we 
swallow the interrupt status here, the cancellation policy becomes invalid. I'm 
wondering: do you imply the interrupt is caused by a timeout in 
{{SocketIOWithTimeout$SelectorPool::select}}? I'm not sure that is true.
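
For reference, a generic sketch of the usual way to handle an 
InterruptedIOException without breaking cancellation: re-assert the interrupt 
flag instead of swallowing it. This is not the DataStreamer change under 
discussion; the class and method names are illustrative.

{code:java}
// Generic sketch, not the DataStreamer patch: if an InterruptedIOException is
// caught, restore the interrupt flag so that an external cancel (interrupt) of
// this thread remains visible to callers further up the stack.
import java.io.IOException;
import java.io.InterruptedIOException;

class InterruptAwareWorker {
  boolean tryOnce() throws IOException {
    try {
      doBlockingIo();                 // may throw InterruptedIOException
      return true;
    } catch (InterruptedIOException e) {
      // Do NOT swallow the interrupt: re-assert it so the cancellation policy
      // (interrupt == "please stop") keeps working for the caller.
      Thread.currentThread().interrupt();
      return false;
    }
  }

  private void doBlockingIo() throws IOException {
    // placeholder for a socket connect/read with timeout
  }
}
{code}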

> DataStreamer should reset thread interrupted state in createBlockOutputStream
> -
>
> Key: HDFS-15379
> URL: https://issues.apache.org/jira/browse/HDFS-15379
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfsclient
>Affects Versions: 2.7.7, 3.1.3
>Reporter: ludun
>Assignee: ludun
>Priority: Major
> Attachments: HDFS-15379.001.patch, HDFS-15379.002.patch, 
> HDFS-15379.003.patch, HDFS-15379.004.patch
>
>
> In createBlockOutputStream, the thread was interrupted because of a timeout 
> while connecting to the DataNode.
> {code}2020-05-27 18:32:53,310 | DEBUG | Connecting to datanode 
> xx.xx.xx.xx:25009 | DataStreamer.java:251
> 2020-05-27 18:33:50,457 | INFO | Exception in createBlockOutputStream 
> blk_1115121199_41386360 | DataStreamer.java:1854
>  java.io.InterruptedIOException: Interrupted while waiting for IO on channel 
> java.nio.channels.SocketChannel[connected local=/xx.xx.xx.xx:40370 
> remote=/xx.xx.xx.xx:25009]. 615000 millis timeout left.
>  at 
> org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
>  at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>  at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
>  at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
>  at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
>  at java.io.FilterInputStream.read(FilterInputStream.java:83)
>  at java.io.FilterInputStream.read(FilterInputStream.java:83)
>  at 
> org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:551)
>  at 
> org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1826)
>  at 
> org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1743)
>  at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:718)
> {code}
> Then the abandonBlock RPC to the NameNode also failed immediately due to the 
> interrupted exception.
> {code}2020-05-27 18:33:50,461 | DEBUG | Connecting to xx/xx.xx.xx.xx:25000 | 
> Client.java:814
> 2020-05-27 18:33:50,462 | DEBUG | Failed to connect to server: 
> xx/xx.xx.xx.xx:25000: try once and fail. | Client.java:956
>  java.nio.channels.ClosedByInterruptException
>  at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>  at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:659)
>  at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>  at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
>  at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:720)
>  at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:823)
>  at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:436)
>  at org.apache.hadoop.ipc.Client.getConnection(Client.java:1613)
>  at org.apache.hadoop.ipc.Client.call(Client.java:1444)
>  at org.apache.hadoop.ipc.Client.call(Client.java:1397)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:234)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>  at com.sun.proxy.$Proxy10.abandonBlock(Unknown Source)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.abandonBlock(ClientNamenodeProtocolTranslatorPB.java:509)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> 

[jira] [Commented] (HDFS-15377) BlockScanner scans one part per round, expect full scans after several rounds

2020-06-03 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17124705#comment-17124705
 ] 

Mingliang Liu commented on HDFS-15377:
--

I did not check the patch yet. Just curious: does this work when 
{{dfs.bytes-per-checksum}} is configured?
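
To make the question concrete, the partial-scan size presumably has to stay 
aligned with the checksum chunk size. A tiny, hypothetical helper (not from the 
patch; the method name is made up) showing one way the two values could be 
reconciled:

{code:java}
// Hypothetical helper, not from the HDFS-15377 patch: it only illustrates how a
// configured dfs.block.scanner.part.size could be aligned to the checksum chunk
// size (dfs.bytes-per-checksum), which is what the question above is about.
final class PartSizeAlignment {
  static long alignPartSize(long requestedPartSize, int bytesPerChecksum) {
    if (requestedPartSize <= 0) {
      return -1;  // per the description quoted below, -1 disables partial scan
    }
    // Round down to a whole number of checksum chunks, but never below one chunk.
    long chunks = Math.max(1, requestedPartSize / bytesPerChecksum);
    return chunks * bytesPerChecksum;
  }
}
{code}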

> BlockScanner scans one part per round, expect full scans after several rounds
> -
>
> Key: HDFS-15377
> URL: https://issues.apache.org/jira/browse/HDFS-15377
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15377.002.patch, HDFS-15377.003.patch, 
> HDFS-15377.004.patch, HDFS-15377.004.patch
>
>
> To reduce disk IO, one block is split into multiple parts and the BlockScanner 
> scans only one part per round. The expectation is that after several rounds the 
> full block has been scanned.
> Add a new option "dfs.block.scanner.part.size", the maximum data size per 
> scan by the block scanner. This value should be a multiple of the chunk size, 
> for example 512, 1024, 4096 ...
>  The default value is -1, which disables partial scan.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15379) DataStreamer should reset thread interrupted state in createBlockOutputStream

2020-06-03 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17124697#comment-17124697
 ] 

Mingliang Liu commented on HDFS-15379:
--

Added you to the contributor list and assigned this JIRA to you, [~pilchard]. I may 
take a look later this week, but feel free to ping other reviewers.

 

> DataStreamer should reset thread interrupted state in createBlockOutputStream
> -
>
> Key: HDFS-15379
> URL: https://issues.apache.org/jira/browse/HDFS-15379
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfsclient
>Affects Versions: 2.7.7, 3.1.3
>Reporter: ludun
>Assignee: ludun
>Priority: Major
> Attachments: HDFS-15379.001.patch, HDFS-15379.002.patch, 
> HDFS-15379.003.patch, HDFS-15379.004.patch
>
>
> In createBlockOutputStream, the thread was interrupted because of a timeout 
> while connecting to the DataNode.
> {code}2020-05-27 18:32:53,310 | DEBUG | Connecting to datanode 
> xx.xx.xx.xx:25009 | DataStreamer.java:251
> 2020-05-27 18:33:50,457 | INFO | Exception in createBlockOutputStream 
> blk_1115121199_41386360 | DataStreamer.java:1854
>  java.io.InterruptedIOException: Interrupted while waiting for IO on channel 
> java.nio.channels.SocketChannel[connected local=/xx.xx.xx.xx:40370 
> remote=/xx.xx.xx.xx:25009]. 615000 millis timeout left.
>  at 
> org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
>  at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>  at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
>  at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
>  at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
>  at java.io.FilterInputStream.read(FilterInputStream.java:83)
>  at java.io.FilterInputStream.read(FilterInputStream.java:83)
>  at 
> org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:551)
>  at 
> org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1826)
>  at 
> org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1743)
>  at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:718)
> {code}
> Then the abandonBlock RPC to the NameNode also failed immediately due to the 
> interrupted exception.
> {code}2020-05-27 18:33:50,461 | DEBUG | Connecting to xx/xx.xx.xx.xx:25000 | 
> Client.java:814
> 2020-05-27 18:33:50,462 | DEBUG | Failed to connect to server: 
> xx/xx.xx.xx.xx:25000: try once and fail. | Client.java:956
>  java.nio.channels.ClosedByInterruptException
>  at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>  at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:659)
>  at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>  at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
>  at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:720)
>  at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:823)
>  at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:436)
>  at org.apache.hadoop.ipc.Client.getConnection(Client.java:1613)
>  at org.apache.hadoop.ipc.Client.call(Client.java:1444)
>  at org.apache.hadoop.ipc.Client.call(Client.java:1397)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:234)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>  at com.sun.proxy.$Proxy10.abandonBlock(Unknown Source)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.abandonBlock(ClientNamenodeProtocolTranslatorPB.java:509)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>  at com.sun.proxy.$Proxy11.abandonBlock(Unknown Source)
>  at 
> org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1748)
>  at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:718)
> {code}


[jira] [Assigned] (HDFS-15379) DataStreamer should reset thread interrupted state in createBlockOutputStream

2020-06-03 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HDFS-15379:


Assignee: ludun

> DataStreamer should reset thread interrupted state in createBlockOutputStream
> -
>
> Key: HDFS-15379
> URL: https://issues.apache.org/jira/browse/HDFS-15379
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfsclient
>Affects Versions: 2.7.7, 3.1.3
>Reporter: ludun
>Assignee: ludun
>Priority: Major
> Attachments: HDFS-15379.001.patch, HDFS-15379.002.patch, 
> HDFS-15379.003.patch, HDFS-15379.004.patch
>
>
> In createBlockOutputStream, the thread was interrupted because of a timeout 
> while connecting to the DataNode.
> {code}2020-05-27 18:32:53,310 | DEBUG | Connecting to datanode 
> xx.xx.xx.xx:25009 | DataStreamer.java:251
> 2020-05-27 18:33:50,457 | INFO | Exception in createBlockOutputStream 
> blk_1115121199_41386360 | DataStreamer.java:1854
>  java.io.InterruptedIOException: Interrupted while waiting for IO on channel 
> java.nio.channels.SocketChannel[connected local=/xx.xx.xx.xx:40370 
> remote=/xx.xx.xx.xx:25009]. 615000 millis timeout left.
>  at 
> org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
>  at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>  at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
>  at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
>  at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
>  at java.io.FilterInputStream.read(FilterInputStream.java:83)
>  at java.io.FilterInputStream.read(FilterInputStream.java:83)
>  at 
> org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:551)
>  at 
> org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1826)
>  at 
> org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1743)
>  at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:718)
> {code}
> Then the abandonBlock RPC to the NameNode also failed immediately due to the 
> interrupted exception.
> {code}2020-05-27 18:33:50,461 | DEBUG | Connecting to xx/xx.xx.xx.xx:25000 | 
> Client.java:814
> 2020-05-27 18:33:50,462 | DEBUG | Failed to connect to server: 
> xx/xx.xx.xx.xx:25000: try once and fail. | Client.java:956
>  java.nio.channels.ClosedByInterruptException
>  at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>  at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:659)
>  at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>  at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
>  at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:720)
>  at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:823)
>  at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:436)
>  at org.apache.hadoop.ipc.Client.getConnection(Client.java:1613)
>  at org.apache.hadoop.ipc.Client.call(Client.java:1444)
>  at org.apache.hadoop.ipc.Client.call(Client.java:1397)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:234)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>  at com.sun.proxy.$Proxy10.abandonBlock(Unknown Source)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.abandonBlock(ClientNamenodeProtocolTranslatorPB.java:509)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>  at com.sun.proxy.$Proxy11.abandonBlock(Unknown Source)
>  at 
> org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1748)
>  at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:718)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@

[jira] [Updated] (HDFS-15326) TestDataNodeErasureCodingMetrics::testFullBlock fails intermittently

2020-05-02 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15326:
-
Description: 
Sample failing build: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1992/2/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeErasureCodingMetrics/testFullBlock/
Sample failing stack:
{code}
Error Message

Wrongly computed block reconstruction work

Stacktrace

java.lang.AssertionError: Wrongly computed block reconstruction work
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.doTest(TestDataNodeErasureCodingMetrics.java:205)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.testFullBlock(TestDataNodeErasureCodingMetrics.java:97)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
{code}

  was:
Sample failing build: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1992/2/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeErasureCodingMetrics/testFullBlock/
Sample failing stack:
{code}
Error Message
Wrongly computed block reconstruction work
Stacktrace
java.lang.AssertionError: Wrongly computed block reconstruction work
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.doTest(TestDataNodeErasureCodingMetrics.java:205)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.testFullBlock(TestDataNodeErasureCodingMetrics.java:97)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
{code}


> TestDataNodeErasureCodingMetrics::testFullBlock fails intermittently
> 
>
> Key: HDFS-15326
> URL: https://issues.apache.org/jira/browse/HDFS-15326
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, erasure-coding, test
>Affects Versions: 3.4.0
>Reporter: Mingliang Liu
>Priority: Major
>
> Sample failing build: 
> https://builds.apache.org/job/hadoop-multibranch/job/PR-1992/2/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeErasureCodingMetrics/testFullBlock/
> Sample failing stack:
> {code}
> Error Message
> Wrongly computed block reconstruction work
> Stacktrace
> java.lang.AssertionError: Wrongly computed block reconstruction work
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.doTest(TestDataNodeErasureCodingMetrics.java:205)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.testFullBlock(TestDataNodeErasureCodingMetrics.java:97)
>   at

[jira] [Commented] (HDFS-15327) TestDataNodeErasureCodingMetrics.testReconstructionBytesPartialGroup3 fails intermittently

2020-05-02 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17098214#comment-17098214
 ] 

Mingliang Liu commented on HDFS-15327:
--

Might be related to HDFS-15326. Linking it here.

> TestDataNodeErasureCodingMetrics.testReconstructionBytesPartialGroup3 fails 
> intermittently
> --
>
> Key: HDFS-15327
> URL: https://issues.apache.org/jira/browse/HDFS-15327
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, erasure-coding, test
>Affects Versions: 3.4.0
>Reporter: Mingliang Liu
>Priority: Major
>
> Sample stack trace:
> {code}
> Error Message
> ecReconstructionBytesRead should be  expected:<6501170> but was:<0>
> Stacktrace
> java.lang.AssertionError: ecReconstructionBytesRead should be  
> expected:<6501170> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.testReconstructionBytesPartialGroup3(TestDataNodeErasureCodingMetrics.java:150)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> Sample failing build:
> - https://builds.apache.org/job/PreCommit-HDFS-Build/29226/testReport/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15327) TestDataNodeErasureCodingMetrics.testReconstructionBytesPartialGroup3 fails intermittently

2020-05-02 Thread Mingliang Liu (Jira)
Mingliang Liu created HDFS-15327:


 Summary: 
TestDataNodeErasureCodingMetrics.testReconstructionBytesPartialGroup3 fails 
intermittently
 Key: HDFS-15327
 URL: https://issues.apache.org/jira/browse/HDFS-15327
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, erasure-coding, test
Affects Versions: 3.4.0
Reporter: Mingliang Liu


Sample stack trace:

{code}
Error Message
ecReconstructionBytesRead should be  expected:<6501170> but was:<0>
Stacktrace
java.lang.AssertionError: ecReconstructionBytesRead should be  
expected:<6501170> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.testReconstructionBytesPartialGroup3(TestDataNodeErasureCodingMetrics.java:150)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
{code}

Sample failing build:
- https://builds.apache.org/job/PreCommit-HDFS-Build/29226/testReport/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15326) TestDataNodeErasureCodingMetrics::testFullBlock fails intermittently

2020-05-02 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15326:
-
Issue Type: Bug  (was: Test)

> TestDataNodeErasureCodingMetrics::testFullBlock fails intermittently
> 
>
> Key: HDFS-15326
> URL: https://issues.apache.org/jira/browse/HDFS-15326
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, test
>Affects Versions: 3.4.0
>Reporter: Mingliang Liu
>Priority: Major
>
> Sample failing build: 
> https://builds.apache.org/job/hadoop-multibranch/job/PR-1992/2/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeErasureCodingMetrics/testFullBlock/
> Sample failing stack:
> {code}
> Error Message
> Wrongly computed block reconstruction work
> Stacktrace
> java.lang.AssertionError: Wrongly computed block reconstruction work
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.doTest(TestDataNodeErasureCodingMetrics.java:205)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.testFullBlock(TestDataNodeErasureCodingMetrics.java:97)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15326) TestDataNodeErasureCodingMetrics::testFullBlock fails intermittently

2020-05-02 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15326:
-
Component/s: erasure-coding

> TestDataNodeErasureCodingMetrics::testFullBlock fails intermittently
> 
>
> Key: HDFS-15326
> URL: https://issues.apache.org/jira/browse/HDFS-15326
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, erasure-coding, test
>Affects Versions: 3.4.0
>Reporter: Mingliang Liu
>Priority: Major
>
> Sample failing build: 
> https://builds.apache.org/job/hadoop-multibranch/job/PR-1992/2/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeErasureCodingMetrics/testFullBlock/
> Sample failing stack:
> {code}
> Error Message
> Wrongly computed block reconstruction work
> Stacktrace
> java.lang.AssertionError: Wrongly computed block reconstruction work
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.doTest(TestDataNodeErasureCodingMetrics.java:205)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.testFullBlock(TestDataNodeErasureCodingMetrics.java:97)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15325) TestRefreshCallQueue is failing due to changed CallQueue constructor

2020-05-02 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15325:
-
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to {{trunk}}. Thanks [~shv] for filing this and finding the root cause. 
Thanks [~fengnanli] for providing a patch. Thanks [~ayushtkn] for the discussion.

> TestRefreshCallQueue is failing due to changed CallQueue constructor
> 
>
> Key: HDFS-15325
> URL: https://issues.apache.org/jira/browse/HDFS-15325
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Konstantin Shvachko
>Assignee: Fengnan Li
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15325.001.patch
>
>
> {{TestRefreshCallQueue.MockCallQueue}} cannot be instantiated, as it is 
> missing a parameter in the constructor, which was added by HADOOP-17010.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15325) TestRefreshCallQueue is failing due to changed CallQueue constructor

2020-05-02 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17098149#comment-17098149
 ] 

Mingliang Liu commented on HDFS-15325:
--

+1.

Will commit after a clean QA. 

> TestRefreshCallQueue is failing due to changed CallQueue constructor
> 
>
> Key: HDFS-15325
> URL: https://issues.apache.org/jira/browse/HDFS-15325
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Konstantin Shvachko
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HDFS-15325.001.patch
>
>
> {{TestRefreshCallQueue.MockCallQueue}} cannot be instantiated, as it is 
> missing a parameter in the constructor, which was added by HADOOP-17010.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15320) StringIndexOutOfBoundsException in HostRestrictingAuthorizationFilter

2020-05-02 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15320:
-
Fix Version/s: 3.4.0
   3.3.1
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to the {{branch-3.3}} and {{trunk}} branches. Thanks for filing this and 
providing a patch, [~aajisaka]. Thanks for your review, [~clayb].

> StringIndexOutOfBoundsException in HostRestrictingAuthorizationFilter
> -
>
> Key: HDFS-15320
> URL: https://issues.apache.org/jira/browse/HDFS-15320
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
> Environment: HostRestrictingAuthorizationFilter (HDFS-14234) is 
> enabled
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
>
> When there is a request to "http://:/" without the "webhdfs/v1" 
> suffix, the DN returns a 500 response code and throws 
> StringIndexOutOfBoundsException as follows: 
> {noformat}
> 2020-05-01 16:10:20,220 ERROR 
> org.apache.hadoop.hdfs.server.datanode.web.HostRestrictingAuthorizationFilterHandler:
>  Exception in HostRestrictingAuthorizationFilterHandler
> java.lang.StringIndexOutOfBoundsException: String index out of range: -10
> at java.base/java.lang.String.substring(String.java:1841)
> at 
> org.apache.hadoop.hdfs.server.common.HostRestrictingAuthorizationFilter.handleInteraction(HostRestrictingAuthorizationFilter.java:234)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.HostRestrictingAuthorizationFilterHandler.channelRead0(HostRestrictingAuthorizationFilterHandler.java:155)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.HostRestrictingAuthorizationFilterHandler.channelRead0(HostRestrictingAuthorizationFilterHandler.java:58)
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:328)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:302)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
> at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
> at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931)
> at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:635)
> at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:552)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514)
> at 
> io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044)
> at 
> io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
> at 
> io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
> at java.base/java.lang.Thread.run(Thread.java:834)
> {noformat}
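
The negative-index substring in the stack trace above comes from stripping a 
"webhdfs/v1" prefix that is not actually present in the request path. A minimal 
illustrative guard for that failure mode (not the actual HDFS-15320 fix; the 
class and method names are hypothetical) checks the prefix before calling 
substring:

{code:java}
// Illustrative guard only, not the actual HostRestrictingAuthorizationFilter fix:
// strip the WebHDFS prefix only when it is present, instead of calling substring()
// with a negative index when the request path lacks the prefix.
final class WebHdfsPathGuard {
  private static final String WEBHDFS_PREFIX = "/webhdfs/v1";

  // Returns the HDFS path, or null so the caller can reject the request
  // cleanly instead of failing with a 500.
  static String toHdfsPath(String uriPath) {
    if (uriPath != null && uriPath.startsWith(WEBHDFS_PREFIX)) {
      return uriPath.substring(WEBHDFS_PREFIX.length());
    }
    return null;
  }
}
{code}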



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Closed] (HDFS-15324) TestRefreshCallQueue::testRefresh fails intermittently

2020-05-02 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu closed HDFS-15324.


> TestRefreshCallQueue::testRefresh fails intermittently
> --
>
> Key: HDFS-15324
> URL: https://issues.apache.org/jira/browse/HDFS-15324
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: namenode, test
>Affects Versions: 3.4.0
>Reporter: Mingliang Liu
>Priority: Major
>
> This test seems flaky. It failed previously (see HDFS-10253) and now it is 
> failing intermittently in {{trunk}}. Not sure whether we should use the same 
> Mock fix as was used in HDFS-10253.
> Sample failing builds:
> - 
> https://builds.apache.org/job/hadoop-multibranch/job/PR-1992/2/testReport/org.apache.hadoop/TestRefreshCallQueue/testRefresh/
> - 
> https://builds.apache.org/job/PreCommit-HDFS-Build/29221/testReport/org.apache.hadoop/TestRefreshCallQueue/testRefresh/
> Sample failing stack is:
> {code}
> Error Message
> org.apache.hadoop.TestRefreshCallQueue$MockCallQueue could not be constructed.
> Stacktrace
> java.lang.RuntimeException: 
> org.apache.hadoop.TestRefreshCallQueue$MockCallQueue could not be constructed.
>   at 
> org.apache.hadoop.ipc.CallQueueManager.createCallQueueInstance(CallQueueManager.java:193)
>   at 
> org.apache.hadoop.ipc.CallQueueManager.(CallQueueManager.java:83)
>   at org.apache.hadoop.ipc.Server.(Server.java:3087)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:1039)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:427)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:347)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:848)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:475)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:857)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:763)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:1014)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:987)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1756)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1332)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1101)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:974)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:906)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:534)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:493)
>   at 
> org.apache.hadoop.TestRefreshCallQueue.setUp(TestRefreshCallQueue.java:67)
>   at 
> org.apache.hadoop.TestRefreshCallQueue.testRefresh(TestRefreshCallQueue.java:115)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.jav

[jira] [Resolved] (HDFS-15324) TestRefreshCallQueue::testRefresh fails intermittently

2020-05-02 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu resolved HDFS-15324.
--
Resolution: Duplicate

> TestRefreshCallQueue::testRefresh fails intermittently
> --
>
> Key: HDFS-15324
> URL: https://issues.apache.org/jira/browse/HDFS-15324
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: namenode, test
>Affects Versions: 3.4.0
>Reporter: Mingliang Liu
>Priority: Major
>
> This test seems flaky. It failed previously (see HDFS-10253) and now it is 
> failing intermittently in {{trunk}}. Not sure whether we should use the same 
> Mock fix as was used in HDFS-10253.
> Sample failing builds:
> - 
> https://builds.apache.org/job/hadoop-multibranch/job/PR-1992/2/testReport/org.apache.hadoop/TestRefreshCallQueue/testRefresh/
> - 
> https://builds.apache.org/job/PreCommit-HDFS-Build/29221/testReport/org.apache.hadoop/TestRefreshCallQueue/testRefresh/
> Sample failing stack is:
> {code}
> Error Message
> org.apache.hadoop.TestRefreshCallQueue$MockCallQueue could not be constructed.
> Stacktrace
> java.lang.RuntimeException: 
> org.apache.hadoop.TestRefreshCallQueue$MockCallQueue could not be constructed.
>   at 
> org.apache.hadoop.ipc.CallQueueManager.createCallQueueInstance(CallQueueManager.java:193)
>   at 
> org.apache.hadoop.ipc.CallQueueManager.(CallQueueManager.java:83)
>   at org.apache.hadoop.ipc.Server.(Server.java:3087)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:1039)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:427)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:347)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:848)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:475)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:857)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:763)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:1014)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:987)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1756)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1332)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1101)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:974)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:906)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:534)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:493)
>   at 
> org.apache.hadoop.TestRefreshCallQueue.setUp(TestRefreshCallQueue.java:67)
>   at 
> org.apache.hadoop.TestRefreshCallQueue.testRefresh(TestRefreshCallQueue.java:115)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.exe

[jira] [Commented] (HDFS-15325) TestRefreshCallQueue is failing due to changed CallQueue constructor

2020-05-02 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17098099#comment-17098099
 ] 

Mingliang Liu commented on HDFS-15325:
--

Yeah, thanks [~ayushtkn].

[~shv] and I filed issues for this at the same time. I can close the other one, 
since this one has the "is broken by" link clearly filled in.



> TestRefreshCallQueue is failing due to changed CallQueue constructor
> 
>
> Key: HDFS-15325
> URL: https://issues.apache.org/jira/browse/HDFS-15325
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Konstantin Shvachko
>Priority: Major
>
> {{TestRefreshCallQueue.MockCallQueue}} cannot be instantiated, as it is 
> missing a parameter in the constructor, which was added by HADOOP-17010.
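
For reference, here is a minimal sketch of the kind of constructor the mock now needs. It assumes the parameter added by HADOOP-17010 is the FairCallQueue capacity-weights array; the exact reflective signature should be confirmed against {{CallQueueManager#createCallQueueInstance}} on trunk before applying anything like this.

{code}
// Sketch only, not the committed fix. CallQueueManager instantiates the queue
// class reflectively, so the mock must declare a constructor whose parameter
// list matches exactly what CallQueueManager looks up. The int[] capacityWeights
// parameter is assumed here to be the one introduced by HADOOP-17010.
import java.util.concurrent.LinkedBlockingQueue;

import org.apache.hadoop.conf.Configuration;

public class MockCallQueue<E> extends LinkedBlockingQueue<E> {
  public MockCallQueue(int priorityLevels, int capacity, String namespace,
                       int[] capacityWeights, Configuration conf) {
    super(capacity);
    // the existing nested mock in TestRefreshCallQueue would also do its
    // construction bookkeeping here
  }
}
{code}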



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15326) TestDataNodeErasureCodingMetrics::testFullBlock fails intermittently

2020-05-02 Thread Mingliang Liu (Jira)
Mingliang Liu created HDFS-15326:


 Summary: TestDataNodeErasureCodingMetrics::testFullBlock fails 
intermittently
 Key: HDFS-15326
 URL: https://issues.apache.org/jira/browse/HDFS-15326
 Project: Hadoop HDFS
  Issue Type: Test
  Components: datanode, test
Affects Versions: 3.4.0
Reporter: Mingliang Liu


Sample failing build: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1992/2/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeErasureCodingMetrics/testFullBlock/
Sample failing stack:
{code}
Error Message
Wrongly computed block reconstruction work
Stacktrace
java.lang.AssertionError: Wrongly computed block reconstruction work
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.doTest(TestDataNodeErasureCodingMetrics.java:205)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.testFullBlock(TestDataNodeErasureCodingMetrics.java:97)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13179) TestLazyPersistReplicaRecovery#testDnRestartWithSavedReplicas fails intermittently

2020-05-02 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17098085#comment-17098085
 ] 

Mingliang Liu commented on HDFS-13179:
--

This is still happening in {{trunk}}; see e.g. 
[https://builds.apache.org/job/hadoop-multibranch/job/PR-1992/2/testReport/org.apache.hadoop.hdfs.server.datanode.fsdataset.impl/TestLazyPersistReplicaRecovery/testDnRestartWithSavedReplicas/]

> TestLazyPersistReplicaRecovery#testDnRestartWithSavedReplicas fails 
> intermittently
> --
>
> Key: HDFS-13179
> URL: https://issues.apache.org/jira/browse/HDFS-13179
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Gabor Bota
>Assignee: Ahmed Hussein
>Priority: Critical
> Fix For: 3.0.4, 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HDFS-13179-branch-2.10.003.patch, HDFS-13179.001.patch, 
> HDFS-13179.002.patch, HDFS-13179.003.patch, test runs.zip
>
>
> The error is caused by a TimeoutException: the test waits to ensure that the 
> file is replicated to DISK storage, but the replication cannot finish within 
> the 30s timeout in ensureFileReplicasOnStorageType(). The file is still on 
> RAM_DISK, so there is no data loss.
> Adding the following to TestLazyPersistReplicaRecovery.java:56 essentially 
> fixes the flakiness. 
> {code:java}
> try {
>   ensureFileReplicasOnStorageType(path1, DEFAULT);
> } catch (TimeoutException t) {
>   LOG.warn("We got \"" + t.getMessage()
>       + "\" so trying to find data on RAM_DISK");
>   ensureFileReplicasOnStorageType(path1, RAM_DISK);
> }
> {code}
> Some thoughts:
> * Successful and failed tests run similarly up to the point where the 
> datanode restarts. The restart line in the log is: LazyPersistTestCase - 
> Restarting the DataNode
> * There is a line which occurs only in the failed test: *addStoredBlock: 
> Redundant addStoredBlock request received for blk_1073741825_1001 on node 
> 127.0.0.1:49455 size 5242880*
> * This redundant request at BlockManager#addStoredBlock could be the main 
> reason for the test failure. Something wrong with the gen stamp? Corrupt 
> replicas? 
> =
> Current fail ratio based on my test of TestLazyPersistReplicaRecovery: 
> 1000 runs, 34 failures (3.4% fail)
> Failure rate analysis:
> TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas: 3.4%
> 33 failures caused by: {noformat}
> java.util.concurrent.TimeoutException: Timed out waiting for condition. 
> Thread diagnostics: Timestamp: 2018-01-05 11:50:34,964 "IPC Server handler 6 
> on 39589" 
> {noformat}
> 1 failure caused by: {noformat}
> java.net.BindException: Problem binding to [localhost:56729] 
> java.net.BindException: Address already in use; For more details see: 
> http://wiki.apache.org/hadoop/BindException at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas(TestLazyPersistReplicaRecovery.java:49)
>  Caused by: java.net.BindException: Address already in use at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas(TestLazyPersistReplicaRecovery.java:49)
> {noformat}
> =
> Example stacktrace:
> {noformat}
> Timed out waiting for condition. Thread diagnostics:
> Timestamp: 2017-11-01 10:36:49,499
> "Thread-1" prio=5 tid=13 runnable
> java.lang.Thread.State: RUNNABLE
> at java.lang.Thread.dumpThreads(Native Method)
> at java.lang.Thread.getAllStackTraces(Thread.java:1610)
> at 
> org.apache.hadoop.test.TimedOutTestsListener.buildThreadDump(TimedOutTestsListener.java:87)
> at 
> org.apache.hadoop.test.TimedOutTestsListener.buildThreadDiagnosticString(TimedOutTestsListener.java:73)
> at org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:369)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.ensureFileReplicasOnStorageType(LazyPersistTestCase.java:140)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas(TestLazyPersistReplicaRecovery.java:54)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> ...
> {noformat}
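
To make the intent of the try/catch fallback quoted above a bit more explicit, here is a hypothetical helper, not the committed patch. It assumes it lives in TestLazyPersistReplicaRecovery, where ensureFileReplicasOnStorageType(), DEFAULT and RAM_DISK are already in scope, and the 90-second deadline is only an illustrative number.

{code}
// Hypothetical sketch only. Each ensureFileReplicasOnStorageType() call already
// waits up to ~30 seconds internally; this loop simply allows a few of those
// attempts before concluding the replica never left RAM_DISK.
private void ensureReplicasEventuallyOnDisk(org.apache.hadoop.fs.Path path)
    throws Exception {
  final long deadlineMs = org.apache.hadoop.util.Time.monotonicNow() + 90_000L;
  while (true) {
    try {
      ensureFileReplicasOnStorageType(path, DEFAULT);   // replica reached DISK
      return;
    } catch (java.util.concurrent.TimeoutException e) {
      if (org.apache.hadoop.util.Time.monotonicNow() >= deadlineMs) {
        // Still not persisted within the larger deadline: at least verify the
        // data is intact on RAM_DISK, as the description above suggests.
        ensureFileReplicasOnStorageType(path, RAM_DISK);
        return;
      }
      // otherwise fall through and retry
    }
  }
}
{code}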



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail:

[jira] [Created] (HDFS-15324) TestRefreshCallQueue::testRefresh fails intermittently

2020-05-02 Thread Mingliang Liu (Jira)
Mingliang Liu created HDFS-15324:


 Summary: TestRefreshCallQueue::testRefresh fails intermittently
 Key: HDFS-15324
 URL: https://issues.apache.org/jira/browse/HDFS-15324
 Project: Hadoop HDFS
  Issue Type: Test
  Components: namenode, test
Affects Versions: 3.4.0
Reporter: Mingliang Liu


This test seems flaky. It failed previously (see HDFS-10253) and is now 
failing intermittently in {{trunk}}. Not sure whether we should use the same 
Mock-based fix that was applied in HDFS-10253.

Sample failing builds:
- 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1992/2/testReport/org.apache.hadoop/TestRefreshCallQueue/testRefresh/
- 
https://builds.apache.org/job/PreCommit-HDFS-Build/29221/testReport/org.apache.hadoop/TestRefreshCallQueue/testRefresh/

Sample failing stack is:
{code}
Error Message
org.apache.hadoop.TestRefreshCallQueue$MockCallQueue could not be constructed.
Stacktrace
java.lang.RuntimeException: 
org.apache.hadoop.TestRefreshCallQueue$MockCallQueue could not be constructed.
at 
org.apache.hadoop.ipc.CallQueueManager.createCallQueueInstance(CallQueueManager.java:193)
at 
org.apache.hadoop.ipc.CallQueueManager.(CallQueueManager.java:83)
at org.apache.hadoop.ipc.Server.(Server.java:3087)
at org.apache.hadoop.ipc.RPC$Server.(RPC.java:1039)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:427)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:347)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:848)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:475)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:857)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:763)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:1014)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:987)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1756)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1332)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1101)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:974)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:906)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:534)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:493)
at 
org.apache.hadoop.TestRefreshCallQueue.setUp(TestRefreshCallQueue.java:67)
at 
org.apache.hadoop.TestRefreshCallQueue.testRefresh(TestRefreshCallQueue.java:115)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProc

[jira] [Updated] (HDFS-15297) TestNNHandlesBlockReportPerStorage::blockReport_02 fails intermittently in trunk

2020-04-25 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-15297:
-
Fix Version/s: 3.4.0
   3.3.1
   2.10.1
   3.2.2
   3.1.4
   2.9.3
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Findbugs warnings are not related and are already tracked by HDFS-15298.

Committed to all fix versions. Thanks for your contribution, [~ayushtkn].

> TestNNHandlesBlockReportPerStorage::blockReport_02 fails intermittently in 
> trunk
> 
>
> Key: HDFS-15297
> URL: https://issues.apache.org/jira/browse/HDFS-15297
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, test
>Affects Versions: 3.4.0
>Reporter: Mingliang Liu
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 2.9.3, 3.1.4, 3.2.2, 2.10.1, 3.3.1, 3.4.0
>
> Attachments: HDFS-15297-01.patch, HDFS-15297-02.patch, 
> HDFS-15297-03.patch
>
>
> It fails intermittently on the {{trunk}} branch. Not sure about other branches. 
> Example builds are:
> - 
> https://builds.apache.org/job/hadoop-multibranch/job/PR-1964/4/testReport/org.apache.hadoop.hdfs.server.datanode/TestNNHandlesBlockReportPerStorage/blockReport_02/
> - 
> Sample exception stack:
> {quote}
> java.lang.AssertionError: Wrong number of MissingBlocks is found expected:<2> 
> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReportTestBase.blockReport_02(BlockReportTestBase.java:336)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15297) TestNNHandlesBlockReportPerStorage::blockReport_02 fails intermittently in trunk

2020-04-25 Thread Mingliang Liu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17092391#comment-17092391
 ] 

Mingliang Liu commented on HDFS-15297:
--

+1 once the checkstyle issue is fixed.

> TestNNHandlesBlockReportPerStorage::blockReport_02 fails intermittently in 
> trunk
> 
>
> Key: HDFS-15297
> URL: https://issues.apache.org/jira/browse/HDFS-15297
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, test
>Affects Versions: 3.4.0
>Reporter: Mingliang Liu
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-15297-01.patch, HDFS-15297-02.patch, 
> HDFS-15297-03.patch
>
>
> It fails intermittently on the {{trunk}} branch. Not sure about other branches. 
> Example builds are:
> - 
> https://builds.apache.org/job/hadoop-multibranch/job/PR-1964/4/testReport/org.apache.hadoop.hdfs.server.datanode/TestNNHandlesBlockReportPerStorage/blockReport_02/
> - 
> Sample exception stack:
> {quote}
> java.lang.AssertionError: Wrong number of MissingBlocks is found expected:<2> 
> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReportTestBase.blockReport_02(BlockReportTestBase.java:336)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


