[jira] [Commented] (HDFS-9530) huge Non-DFS Used in hadoop 2.6.2 & 2.7.1

2016-06-14 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15331251#comment-15331251
 ] 

Brahma Reddy Battula commented on HDFS-9530:


Thanks to all. Uploaded a patch.

The attached test case targets the current problem; the tests available in 
{{TestSpaceReservation}} cover many other cases. 
IMO, any further tests can be added in a follow-up JIRA. Right?

I am thinking this should go into the 2.7.3 release.
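
For context, dfsadmin does not measure Non-DFS Used directly; per datanode it is 
derived, roughly (a simplification of the actual accounting):

{noformat}
Non DFS Used = Configured Capacity - DFS Used - DFS Remaining
{noformat}

While a job is writing, each open replica reserves up to a full block of remaining 
space, which shrinks DFS Remaining and therefore shows up as Non-DFS Used; note the 
Xceivers count jumping from 1 to ~700 in the second report below.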

> huge Non-DFS Used in hadoop 2.6.2 & 2.7.1
> -
>
> Key: HDFS-9530
> URL: https://issues.apache.org/jira/browse/HDFS-9530
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Fei Hui
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9530-01.patch, HDFS-9530-02.patch, 
> HDFS-9530-branch-2.7-001.patch
>
>
> I think there are bugs in HDFS
> ===
> here is the config:
>   <property>
>     <name>dfs.datanode.data.dir</name>
>     <value>file:///mnt/disk4,file:///mnt/disk1,file:///mnt/disk3,file:///mnt/disk2</value>
>   </property>
> here is the dfsadmin report:
> [hadoop@worker-1 ~]$ hadoop dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
> Configured Capacity: 240769253376 (224.23 GB)
> Present Capacity: 238604832768 (222.22 GB)
> DFS Remaining: 215772954624 (200.95 GB)
> DFS Used: 22831878144 (21.26 GB)
> DFS Used%: 9.57%
> Under replicated blocks: 4
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> -
> Live datanodes (3):
> Name: 10.117.60.59:50010 (worker-2)
> Hostname: worker-2
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 7190958080 (6.70 GB)
> Non DFS Used: 721473536 (688.05 MB)
> DFS Remaining: 72343986176 (67.38 GB)
> DFS Used%: 8.96%
> DFS Remaining%: 90.14%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Wed Dec 09 15:55:02 CST 2015
> Name: 10.168.156.0:50010 (worker-3)
> Hostname: worker-3
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 7219073024 (6.72 GB)
> Non DFS Used: 721473536 (688.05 MB)
> DFS Remaining: 72315871232 (67.35 GB)
> DFS Used%: 9.00%
> DFS Remaining%: 90.11%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Wed Dec 09 15:55:03 CST 2015
> Name: 10.117.15.38:50010 (worker-1)
> Hostname: worker-1
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 8421847040 (7.84 GB)
> Non DFS Used: 721473536 (688.05 MB)
> DFS Remaining: 71113097216 (66.23 GB)
> DFS Used%: 10.49%
> DFS Remaining%: 88.61%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Wed Dec 09 15:55:03 CST 2015
> 
> when running a Hive job, dfsadmin reports as follows:
> [hadoop@worker-1 ~]$ hadoop dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
> Configured Capacity: 240769253376 (224.23 GB)
> Present Capacity: 108266011136 (100.83 GB)
> DFS Remaining: 80078416384 (74.58 GB)
> DFS Used: 28187594752 (26.25 GB)
> DFS Used%: 26.04%
> Under replicated blocks: 7
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> -
> Live datanodes (3):
> Name: 10.117.60.59:50010 (worker-2)
> Hostname: worker-2
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 9015627776 (8.40 GB)
> Non DFS Used: 44303742464 (41.26 GB)
> DFS Remaining: 26937047552 (25.09 GB)
> DFS Used%: 11.23%
> DFS Remaining%: 33.56%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 693
> Last contact: Wed Dec 09 15:37:35 CST 2015
> Name: 10.168.156.0:50010 (worker-3)
> Hostname: worker-3
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 9163116544 (8.53 GB)
> Non DFS Used: 47895897600 (44.61 GB)
> DFS Remaining: 23197403648 (21.60 GB)
> DFS Used%: 11.42%
> DFS Remaining%: 28.90%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 750
> Last contact: Wed Dec 09 15:37:36 CST 2015
> Name: 10.117.15.38:50010 (worker-1)
> Hostname: worker-1
> Decommission Status : Normal
> Configured Capacity: 80256417792 (7

[jira] [Commented] (HDFS-9906) Remove spammy log spew when a datanode is restarted

2016-06-14 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15331249#comment-15331249
 ] 

Akira AJISAKA commented on HDFS-9906:
-

Updated fix version. Hi [~xyao], now that branch-2.7.3 is cut, if you want to 
include this in 2.7.3, you need to cherry-pick it to branch-2.7.3 as well.

> Remove spammy log spew when a datanode is restarted
> ---
>
> Key: HDFS-9906
> URL: https://issues.apache.org/jira/browse/HDFS-9906
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.2
>Reporter: Elliott Clark
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 2.7.4
>
> Attachments: HDFS-9906.patch
>
>
> {code}
> WARN BlockStateChange: BLOCK* addStoredBlock: Redundant addStoredBlock 
> request received for blk_1109897077_36157149 on node 192.168.1.1:50010 size 
> 268435456
> {code}
> This happens way too much to add any useful information. We should either 
> move this to a different level or only warn once per machine.
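
A minimal sketch of the "warn once per machine" option; the limiter class, 
interval, and logger wiring below are illustrative only, not the actual patch:

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical per-node suppression: emit the redundant-addStoredBlock
// warning at most once per node per hour, instead of once per request.
class RedundantBlockWarnLimiter {
  private static final Logger LOG =
      LoggerFactory.getLogger("BlockStateChange");
  private static final long INTERVAL_MS = TimeUnit.HOURS.toMillis(1);
  private final ConcurrentHashMap<String, Long> lastWarn =
      new ConcurrentHashMap<>();

  void warnRedundantAddStoredBlock(String blockName, String nodeAddr) {
    long now = System.currentTimeMillis();
    Long prev = lastWarn.get(nodeAddr);
    if (prev == null || now - prev > INTERVAL_MS) {
      lastWarn.put(nodeAddr, now);
      LOG.warn("BLOCK* addStoredBlock: Redundant addStoredBlock request"
          + " received for {} on node {}", blockName, nodeAddr);
    }
  }
}
{code}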






[jira] [Updated] (HDFS-9530) huge Non-DFS Used in hadoop 2.6.2 & 2.7.1

2016-06-14 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9530:
---
Attachment: HDFS-9530-02.patch
HDFS-9530-branch-2.7-001.patch

> huge Non-DFS Used in hadoop 2.6.2 & 2.7.1
> -
>
> Key: HDFS-9530
> URL: https://issues.apache.org/jira/browse/HDFS-9530
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Fei Hui
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9530-01.patch, HDFS-9530-02.patch, 
> HDFS-9530-branch-2.7-001.patch
>
>

[jira] [Updated] (HDFS-9906) Remove spammy log spew when a datanode is restarted

2016-06-14 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-9906:

Fix Version/s: 2.7.4

> Remove spammy log spew when a datanode is restarted
> ---
>
> Key: HDFS-9906
> URL: https://issues.apache.org/jira/browse/HDFS-9906
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.2
>Reporter: Elliott Clark
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 2.7.4
>
> Attachments: HDFS-9906.patch
>
>
> {code}
> WARN BlockStateChange: BLOCK* addStoredBlock: Redundant addStoredBlock 
> request received for blk_1109897077_36157149 on node 192.168.1.1:50010 size 
> 268435456
> {code}
> This happens way too much to add any useful information. We should either 
> move this to a different level or only warn once per machine.






[jira] [Commented] (HDFS-9906) Remove spammy log spew when a datanode is restarted

2016-06-14 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15331228#comment-15331228
 ] 

Xiaoyu Yao commented on HDFS-9906:
--

Cherry-picked to branch-2.7.

> Remove spammy log spew when a datanode is restarted
> ---
>
> Key: HDFS-9906
> URL: https://issues.apache.org/jira/browse/HDFS-9906
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.2
>Reporter: Elliott Clark
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-9906.patch
>
>
> {code}
> WARN BlockStateChange: BLOCK* addStoredBlock: Redundant addStoredBlock 
> request received for blk_1109897077_36157149 on node 192.168.1.1:50010 size 
> 268435456
> {code}
> This happens way too much to add any useful information. We should either 
> move this to a different level or only warn once per machine.






[jira] [Commented] (HDFS-10256) Use GenericTestUtils.getTestDir method in tests for temporary directories

2016-06-14 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15331224#comment-15331224
 ] 

Akira AJISAKA commented on HDFS-10256:
--

Mostly looks good to me. Would you fix the checkstyle warnings? I'm +1 if that 
is addressed.

> Use GenericTestUtils.getTestDir method in tests for temporary directories
> -
>
> Key: HDFS-10256
> URL: https://issues.apache.org/jira/browse/HDFS-10256
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>







[jira] [Created] (HDFS-10531) Add EC policy and storage policy related usage summarization function to dfs du command

2016-06-14 Thread GAO Rui (JIRA)
GAO Rui created HDFS-10531:
--

 Summary: Add EC policy and storage policy related usage 
summarization function to dfs du command
 Key: HDFS-10531
 URL: https://issues.apache.org/jira/browse/HDFS-10531
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: GAO Rui


Currently, the du command outputs:
{code}
[ ~]$ hdfs dfs -du  -h /home/rgao/
0  /home/rgao/.Trash
0  /home/rgao/.staging
100 M  /home/rgao/ds
250 M  /home/rgao/ds-2
200 M  /home/rgao/noECBackup-ds
500 M  /home/rgao/noECBackup-ds-2
{code}

For HDFS users and administrators, EC-policy- and storage-policy-related usage 
summarization would be very helpful when managing cluster storage. The 
intended output of du could look like the following.
{code}
[ ~]$ hdfs dfs -du  -h -t( total, parameter to be added) /home/rgao
 
0  /home/rgao/.Trash
0  /home/rgao/.staging
[Archive] [EC:RS-DEFAULT-6-3-64k] 100 M  /home/rgao/ds
[DISK] [EC:RS-DEFAULT-6-3-64k] 250 M  /home/rgao/ds-2
[DISK] [Replica] 200 M  /home/rgao/noECBackup-ds
[DISK] [Replica] 500 M  /home/rgao/noECBackup-ds-2
 
Total:
 
[Archive][EC:RS-DEFAULT-6-3-64k]  100 M
[Archive][Replica]0 M
[DISK] [EC:RS-DEFAULT-6-3-64k] 250 M
[DISK] [Replica]   700 M  
 
[Archive][ALL] 100M
[DISK][ALL]  950M
[ALL] [EC:RS-DEFAULT-6-3-64k]350M
[ALL] [Replica]  700M
{code} 
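
A rough sketch of the aggregation such a {{-t}} option would need; the class and 
method names here are hypothetical, and a real implementation would read the 
storage policy and EC policy from each INode:

{code}
import java.util.HashMap;
import java.util.Map;

// Hypothetical roll-up for "hdfs dfs -du -t": bytes keyed by
// "storagePolicy/redundancy", plus the marginal [ALL] totals.
class DuPolicyTotals {
  private final Map<String, Long> totals = new HashMap<>();

  void add(String storagePolicy, String redundancy, long bytes) {
    totals.merge(storagePolicy + "/" + redundancy, bytes, Long::sum);
    totals.merge(storagePolicy + "/ALL", bytes, Long::sum);
    totals.merge("ALL/" + redundancy, bytes, Long::sum);
  }

  void print() {
    // e.g. [DISK/Replica] 734003200
    totals.forEach((k, v) -> System.out.printf("[%s] %d%n", k, v));
  }
}
{code}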






[jira] [Updated] (HDFS-10530) BlockManager reconstruction work scheduling should correctly adhere to EC block placement policy

2016-06-14 Thread GAO Rui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GAO Rui updated HDFS-10530:
---
Description: 
This issue was found by [~tfukudom].

Under RS-DEFAULT-6-3-64k EC policy, 
1. Create an EC file, the file was witten to all the 5 racks( 2 dns for each) 
of the cluster.
2. Reconstruction work would be scheduled if the 6th rack is added. 
3. While adding the 7th rack or more racks will not trigger reconstruction 
work. 

Based on default EC block placement policy defined in 
“BlockPlacementPolicyRackFaultTolerant.java”, EC file should be able to be 
scheduled to distribute to 9 racks if possible.

In *BlockManager#isPlacementPolicySatisfied(BlockInfo storedBlock)* , 
*numReplicas* of striped blocks might should be *getRealTotalBlockNum()*, 
instead of *getRealDataBlockNum()*.

  was:
Under RS-DEFAULT-6-3-64k EC policy, 
This issue was found by [~tfukudom].
1. Create an EC file, the file was witten to all the 5 racks( 2 dns for each) 
of the cluster.
2. Reconstruction work would be scheduled if the 6th rack is added. 
3. While adding the 7th rack or more racks will not trigger reconstruction 
work. 

Based on default EC block placement policy defined in 
“BlockPlacementPolicyRackFaultTolerant.java”, EC file should be able to be 
scheduled to distribute to 9 racks if possible.

In *BlockManager#isPlacementPolicySatisfied(BlockInfo storedBlock)* , 
*numReplicas* of striped blocks might should be *getRealTotalBlockNum()*, 
instead of *getRealDataBlockNum()*.


> BlockManager reconstruction work scheduling should correctly adhere to EC 
> block placement policy
> 
>
> Key: HDFS-10530
> URL: https://issues.apache.org/jira/browse/HDFS-10530
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: GAO Rui
>Assignee: GAO Rui
>
> This issue was found by [~tfukudom].
> Under the RS-DEFAULT-6-3-64k EC policy:
> 1. Create an EC file; the file is written to all 5 racks (2 DNs each) of the 
> cluster.
> 2. Reconstruction work is scheduled when a 6th rack is added.
> 3. However, adding a 7th or further racks does not trigger reconstruction 
> work.
> Based on the default EC block placement policy defined in 
> "BlockPlacementPolicyRackFaultTolerant.java", an EC file should be scheduled 
> to distribute across 9 racks if possible.
> In *BlockManager#isPlacementPolicySatisfied(BlockInfo storedBlock)*, 
> *numReplicas* of striped blocks should probably be *getRealTotalBlockNum()* 
> instead of *getRealDataBlockNum()*.
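
The reporter's proposal, sketched as code; the surrounding method context is 
abbreviated and the non-striped branch is schematic, so this is an illustration 
of the suggested change rather than a committed patch:

{code}
// inside BlockManager#isPlacementPolicySatisfied(BlockInfo storedBlock)
final int numReplicas = storedBlock.isStriped()
    // proposed: count data + parity blocks (6 + 3 = 9 for RS-6-3), so
    // placement keeps spreading until 9 racks are used when possible
    ? ((BlockInfoStriped) storedBlock).getRealTotalBlockNum()
    // current behavior for striped blocks effectively stops at the
    // data block count (6), so 6 racks already "satisfies" the policy
    : storedBlock.getReplication();
{code}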






[jira] [Created] (HDFS-10530) BlockManager reconstruction work scheduling should correctly adhere to EC block placement policy

2016-06-14 Thread GAO Rui (JIRA)
GAO Rui created HDFS-10530:
--

 Summary: BlockManager reconstruction work scheduling should 
correctly adhere to EC block placement policy
 Key: HDFS-10530
 URL: https://issues.apache.org/jira/browse/HDFS-10530
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: GAO Rui
Assignee: GAO Rui


Under the RS-DEFAULT-6-3-64k EC policy:
This issue was found by [~tfukudom].
1. Create an EC file; the file is written to all 5 racks (2 DNs each) of the 
cluster.
2. Reconstruction work is scheduled when a 6th rack is added.
3. However, adding a 7th or further racks does not trigger reconstruction 
work.

Based on the default EC block placement policy defined in 
"BlockPlacementPolicyRackFaultTolerant.java", an EC file should be scheduled 
to distribute across 9 racks if possible.

In *BlockManager#isPlacementPolicySatisfied(BlockInfo storedBlock)*, 
*numReplicas* of striped blocks should probably be *getRealTotalBlockNum()* 
instead of *getRealDataBlockNum()*.






[jira] [Commented] (HDFS-9650) Problem is logging of "Redundant addStoredBlock request received"

2016-06-14 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15331182#comment-15331182
 ] 

Xiaoyu Yao commented on HDFS-9650:
--

Thanks [~brahma] for the heads up. Yes, we do need to backport HDFS-9906 to 
branch-2.7. 
In our case, adding a dedicated service RPC port helped avoid the NN failover.
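
For reference, the dedicated service RPC port is enabled in hdfs-site.xml via 
{{dfs.namenode.servicerpc-address}}; the host and port below are example values:

{noformat}
<property>
  <name>dfs.namenode.servicerpc-address</name>
  <value>nn1.example.com:8021</value>
</property>
{noformat}

With this set, datanode and ZKFC traffic uses the service port, so heavy client 
load on the main RPC port is less likely to starve health checks and trigger a 
failover.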

> Problem is logging of "Redundant addStoredBlock request received"
> -
>
> Key: HDFS-9650
> URL: https://issues.apache.org/jira/browse/HDFS-9650
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Frode Halvorsen
>Assignee: Xiaoyu Yao
>
> Description:
> Hadoop 2.7.1. 2 namenodes in HA. 14 datanodes.
> Enough CPU, disk, and RAM.
> Just discovered that some datanodes must have been corrupted somehow.
> When restarting a 'defect' datanode (which works without failure except when 
> restarting), the active namenode suddenly logs a lot of "Redundant 
> addStoredBlock request received" messages,
> and finally the failover controller takes the namenode down and fails over to 
> the other node. This node also starts logging the same, and as soon as the first 
> node is back online, the failover controller again kills the active node and 
> fails over.
> This node was now started after the datanode and doesn't log "Redundant 
> addStoredBlock request received" anymore, and a restart of the second namenode 
> works fine.
> If I restart the datanode again, the process repeats itself.
> The problem is the logging of "Redundant addStoredBlock request received", and 
> why does it happen? 
> The failover controller acts the same way as it did on 2.5/6 when we had a 
> lot of 'block does not belong to any replica' messages: the namenode is too busy 
> to respond to heartbeats and is taken down...
> To resolve this, I have to take down the datanode, delete all its data, 
> and start it up. The cluster will then re-replicate the missing blocks, and the 
> failing datanode works fine again...






[jira] [Updated] (HDFS-10529) Df reports incorrect usage when appending less than block size

2016-06-14 Thread Pranav Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranav Prakash updated HDFS-10529:
--
Attachment: HDFS-10529.000.patch

Attached is a patch that attempts to fix this.

Currently, when any new block is created, an RBW version is made first and then 
it is finalized and passed to addFinalizedBlock() in BlockPoolSlice with the 
intended total block size.

By looking at the difference between getOriginalBytesReserved() and 
originalBytesReserved(), it is possible to see how many bytes were actually 
written, and dfsUsage can be increased by this amount. The usage for the 
meta file can be added directly, since the old meta is removed upon new block 
creation.
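
A minimal sketch of that delta accounting; the method and parameter names are 
illustrative, not the exact patch:

{code}
// Hypothetical shape of the fix inside BlockPoolSlice (dfsUsage is the
// slice's existing disk-usage tracker).
void accountFinalizedBlock(long finalizedBlockLength,
                           long bytesAlreadyAccounted,
                           long metaFileLength) {
  // charge only the bytes appended since the replica was created
  long delta = finalizedBlockLength - bytesAlreadyAccounted;
  // the meta file is charged in full: the old meta was already removed
  dfsUsage.incDfsUsed(delta + metaFileLength);
}
{code}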


> Df reports incorrect usage when appending less than block size
> --
>
> Key: HDFS-10529
> URL: https://issues.apache.org/jira/browse/HDFS-10529
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.2, 3.0.0-alpha1
>Reporter: Pranav Prakash
>Priority: Minor
>  Labels: datanode, fs, hdfs
> Attachments: HDFS-10529.000.patch
>
>
> Steps to recreate issue:
> 1. Create a 100MB file on HDFS cluster with 128MB blocksize and replication 
> factor 3
> 2. Append 100MB to the file
> 3. Df reports around 900MB even though it should only be around 600MB.
> Looking at the blocks confirms that df is incorrect, as there exist only two 
> blocks on each DN -- a 128MB block and a 72MB block.
> This issue seems to arise because BlockPoolSlice does not account for the 
> delta increase in dfsUsage when an append happens to a partially-filled 
> block, and instead naively adds the total block size. For instance, in the 
> example scenario, when the block is "filled" from 100 to 128MB, 
> addFinalizedBlock() in BlockPoolSlice adds the size of the newly created 
> block into the total instead of accounting for the difference/delta in block 
> size between old and new. This has the effect of double-counting the old 
> partially-filled block: it is counted once when it is first created (in the 
> example scenario, when the 100MB file is created) and again when it becomes 
> part of the filled block (in the example scenario, when the 128MB block is 
> formed from the initial 100MB block). Thus the perceived size becomes 100MB + 
> 128MB + 72MB = 300MB for each DN, or 900MB across the cluster.






[jira] [Created] (HDFS-10529) Df reports incorrect usage when appending less than block size

2016-06-14 Thread Pranav Prakash (JIRA)
Pranav Prakash created HDFS-10529:
-

 Summary: Df reports incorrect usage when appending less than block 
size
 Key: HDFS-10529
 URL: https://issues.apache.org/jira/browse/HDFS-10529
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.2, 3.0.0-alpha1
Reporter: Pranav Prakash
Priority: Minor


Steps to recreate issue:

1. Create a 100MB file on HDFS cluster with 128MB blocksize and replication 
factor 3
2. Append 100MB to the file
3. Df reports around 900MB even though it should only be around 600MB.

Looking at the blocks confirms that df is incorrect, as there exist only two 
blocks on each DN -- a 128MB block and a 72MB block.

This issue seems to arise because BlockPoolSlice does not account for the delta 
increase in dfsUsage when an append happens to a partially-filled block, and 
instead naively adds the total block size. For instance, in the example 
scenario, when the block is "filled" from 100 to 128MB, addFinalizedBlock() in 
BlockPoolSlice adds the size of the newly created block into the total instead 
of accounting for the difference/delta in block size between old and new. This 
has the effect of double-counting the old partially-filled block: it is counted 
once when it is first created (in the example scenario, when the 100MB file is 
created) and again when it becomes part of the filled block (in the example 
scenario, when the 128MB block is formed from the initial 100MB block). Thus the 
perceived size becomes 100MB + 128MB + 72MB = 300MB for each DN, or 900MB across 
the cluster.






[jira] [Assigned] (HDFS-9530) huge Non-DFS Used in hadoop 2.6.2 & 2.7.1

2016-06-14 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned HDFS-9530:
--

Assignee: Brahma Reddy Battula

> huge Non-DFS Used in hadoop 2.6.2 & 2.7.1
> -
>
> Key: HDFS-9530
> URL: https://issues.apache.org/jira/browse/HDFS-9530
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Fei Hui
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9530-01.patch
>
>
> I think there are bugs in HDFS
> ===
> here is the config:
>   <property>
>     <name>dfs.datanode.data.dir</name>
>     <value>file:///mnt/disk4,file:///mnt/disk1,file:///mnt/disk3,file:///mnt/disk2</value>
>   </property>
> here is the dfsadmin report:
> [hadoop@worker-1 ~]$ hadoop dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
> Configured Capacity: 240769253376 (224.23 GB)
> Present Capacity: 238604832768 (222.22 GB)
> DFS Remaining: 215772954624 (200.95 GB)
> DFS Used: 22831878144 (21.26 GB)
> DFS Used%: 9.57%
> Under replicated blocks: 4
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> -
> Live datanodes (3):
> Name: 10.117.60.59:50010 (worker-2)
> Hostname: worker-2
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 7190958080 (6.70 GB)
> Non DFS Used: 721473536 (688.05 MB)
> DFS Remaining: 72343986176 (67.38 GB)
> DFS Used%: 8.96%
> DFS Remaining%: 90.14%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Wed Dec 09 15:55:02 CST 2015
> Name: 10.168.156.0:50010 (worker-3)
> Hostname: worker-3
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 7219073024 (6.72 GB)
> Non DFS Used: 721473536 (688.05 MB)
> DFS Remaining: 72315871232 (67.35 GB)
> DFS Used%: 9.00%
> DFS Remaining%: 90.11%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Wed Dec 09 15:55:03 CST 2015
> Name: 10.117.15.38:50010 (worker-1)
> Hostname: worker-1
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 8421847040 (7.84 GB)
> Non DFS Used: 721473536 (688.05 MB)
> DFS Remaining: 71113097216 (66.23 GB)
> DFS Used%: 10.49%
> DFS Remaining%: 88.61%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Wed Dec 09 15:55:03 CST 2015
> 
> when running a Hive job, dfsadmin reports as follows:
> [hadoop@worker-1 ~]$ hadoop dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
> Configured Capacity: 240769253376 (224.23 GB)
> Present Capacity: 108266011136 (100.83 GB)
> DFS Remaining: 80078416384 (74.58 GB)
> DFS Used: 28187594752 (26.25 GB)
> DFS Used%: 26.04%
> Under replicated blocks: 7
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> -
> Live datanodes (3):
> Name: 10.117.60.59:50010 (worker-2)
> Hostname: worker-2
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 9015627776 (8.40 GB)
> Non DFS Used: 44303742464 (41.26 GB)
> DFS Remaining: 26937047552 (25.09 GB)
> DFS Used%: 11.23%
> DFS Remaining%: 33.56%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 693
> Last contact: Wed Dec 09 15:37:35 CST 2015
> Name: 10.168.156.0:50010 (worker-3)
> Hostname: worker-3
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 9163116544 (8.53 GB)
> Non DFS Used: 47895897600 (44.61 GB)
> DFS Remaining: 23197403648 (21.60 GB)
> DFS Used%: 11.42%
> DFS Remaining%: 28.90%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 750
> Last contact: Wed Dec 09 15:37:36 CST 2015
> Name: 10.117.15.38:50010 (worker-1)
> Hostname: worker-1
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 10008850432 (9.32 GB)
> Non DFS Used: 40303602176 (37.54 GB)
> DFS Remaining: 29943965184 (27.89 GB)
> DFS Used%: 12.47%
> DFS Remaining%: 37.31%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 632
> Last contact: Wed Dec 0

[jira] [Commented] (HDFS-9650) Problem is logging of "Redundant addStoredBlock request received"

2016-06-14 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15331126#comment-15331126
 ] 

Brahma Reddy Battula commented on HDFS-9650:


[~frha], can't we close this as a duplicate of HDFS-9906? [~xyao], are you 
planning anything in addition to HDFS-9906? I think we can backport HDFS-9906 to 
branch-2.7.

> Problem is logging of "Redundant addStoredBlock request received"
> -
>
> Key: HDFS-9650
> URL: https://issues.apache.org/jira/browse/HDFS-9650
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Frode Halvorsen
>Assignee: Xiaoyu Yao
>
> Description:
> Hadoop 2.7.1. 2 namenodes in HA. 14 datanodes.
> Enough CPU, disk, and RAM.
> Just discovered that some datanodes must have been corrupted somehow.
> When restarting a 'defect' datanode (which works without failure except when 
> restarting), the active namenode suddenly logs a lot of "Redundant 
> addStoredBlock request received" messages,
> and finally the failover controller takes the namenode down and fails over to 
> the other node. This node also starts logging the same, and as soon as the first 
> node is back online, the failover controller again kills the active node and 
> fails over.
> This node was now started after the datanode and doesn't log "Redundant 
> addStoredBlock request received" anymore, and a restart of the second namenode 
> works fine.
> If I restart the datanode again, the process repeats itself.
> The problem is the logging of "Redundant addStoredBlock request received", and 
> why does it happen? 
> The failover controller acts the same way as it did on 2.5/6 when we had a 
> lot of 'block does not belong to any replica' messages: the namenode is too busy 
> to respond to heartbeats and is taken down...
> To resolve this, I have to take down the datanode, delete all its data, 
> and start it up. The cluster will then re-replicate the missing blocks, and the 
> failing datanode works fine again...






[jira] [Updated] (HDFS-9016) Display upgrade domain information in fsck

2016-06-14 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-9016:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
 Release Note: New fsck option "-upgradedomains" has been added to display 
upgrade domains of any block.
   Status: Resolved  (was: Patch Available)

I have committed it to trunk and branch-2. Thanks [~eddyxu], [~aw] and 
[~andrew.wang] for the review.

> Display upgrade domain information in fsck
> --
>
> Key: HDFS-9016
> URL: https://issues.apache.org/jira/browse/HDFS-9016
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Fix For: 2.9.0
>
> Attachments: HDFS-9016-2.patch, HDFS-9016-3.patch, HDFS-9016-4.patch, 
> HDFS-9016-4.patch, HDFS-9016-branch-2-2.patch, HDFS-9016-branch-2.patch, 
> HDFS-9016.patch
>
>
> This will make it easy for people to use fsck to check block placement when 
> upgrade domain is enabled.
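
Example invocation of the new option (path illustrative; per the release note 
above, it displays the upgrade domain of each block alongside the usual 
block-level output):

{noformat}
$ hdfs fsck /some/path -files -blocks -upgradedomains
{noformat}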






[jira] [Updated] (HDFS-9005) Provide configuration support for upgrade domain

2016-06-14 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-9005:
--
Fix Version/s: (was: 2.8.0)
   2.9.0

> Provide configuration support for upgrade domain
> 
>
> Key: HDFS-9005
> URL: https://issues.apache.org/jira/browse/HDFS-9005
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HDFS-9005-2.patch, HDFS-9005-3.patch, HDFS-9005-4.patch, 
> HDFS-9005.patch
>
>
> As part of the upgrade domain feature, we need to provide a mechanism to 
> specify upgrade domain for each datanode. One way to accomplish that is to 
> allow admins specify an upgrade domain script that takes DN ip or hostname as 
> input and return the upgrade domain. Then namenode will use it at run time to 
> set {{DatanodeInfo}}'s upgrade domain string. The configuration can be 
> something like:
> {noformat}
> <property>
>   <name>dfs.namenode.upgrade.domain.script.file.name</name>
>   <value>/etc/hadoop/conf/upgrade-domain.sh</value>
> </property>
> {noformat}
> just like the topology script.






[jira] [Comment Edited] (HDFS-10512) VolumeScanner may terminate due to NPE in DataNode.reportBadBlocks

2016-06-14 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330989#comment-15330989
 ] 

Yiqun Lin edited comment on HDFS-10512 at 6/15/16 1:51 AM:
---

The failed test {{TestPendingInvalidateBlock}} was tracked by HDFS-10426, other 
failed unit tests are not related, thanks for review.


was (Author: linyiqun):
The failed test {{TestPendingInvalidateBlock }} was tracked by HDFS-10426, 
other failed unit tests are not related, thanks for review.

> VolumeScanner may terminate due to NPE in DataNode.reportBadBlocks
> --
>
> Key: HDFS-10512
> URL: https://issues.apache.org/jira/browse/HDFS-10512
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Yiqun Lin
> Attachments: HDFS-10512.001.patch, HDFS-10512.002.patch
>
>
> VolumeScanner may terminate due to an unexpected NullPointerException thrown in 
> {{DataNode.reportBadBlocks()}}. This is different from HDFS-8850/HDFS-9190.
> I observed this bug in a production CDH 5.5.1 cluster and the same bug still 
> persists in upstream trunk.
> {noformat}
> 2016-04-07 20:30:53,830 WARN 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Reporting bad 
> BP-1800173197-10.204.68.5-125156296:blk_1170134484_96468685 on /dfs/dn
> 2016-04-07 20:30:53,831 ERROR 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/dfs/dn, 
> DS-89b72832-2a8c-48f3-8235-48e6c5eb5ab3) exiting because of exception
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.reportBadBlocks(DataNode.java:1018)
> at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner$ScanResultHandler.handle(VolumeScanner.java:287)
> at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.scanBlock(VolumeScanner.java:443)
> at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:547)
> at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:621)
> 2016-04-07 20:30:53,832 INFO 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/dfs/dn, 
> DS-89b72832-2a8c-48f3-8235-48e6c5eb5ab3) exiting.
> {noformat}
> I think the NPE comes from the volume variable in the following code snippet. 
> Somehow the volume scanner knows the volume, but the datanode cannot look up 
> the volume using the block.
> {code}
> public void reportBadBlocks(ExtendedBlock block) throws IOException {
>   BPOfferService bpos = getBPOSForBlock(block);
>   FsVolumeSpi volume = getFSDataset().getVolume(block);
>   bpos.reportBadBlocks(
>       block, volume.getStorageID(), volume.getStorageType());
> }
> {code}
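
A possible hardening, sketched below; whether to skip the report or fall back to 
a null storage ID when the volume is already gone is the design question the 
attached patches address:

{code}
public void reportBadBlocks(ExtendedBlock block) throws IOException {
  BPOfferService bpos = getBPOSForBlock(block);
  FsVolumeSpi volume = getFSDataset().getVolume(block);
  if (volume == null) {
    // the volume may have been removed or failed between the scan and the
    // report; log and skip instead of letting the VolumeScanner die on NPE
    LOG.warn("Cannot find FsVolumeSpi to report bad block: " + block);
    return;
  }
  bpos.reportBadBlocks(block, volume.getStorageID(), volume.getStorageType());
}
{code}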






[jira] [Commented] (HDFS-10512) VolumeScanner may terminate due to NPE in DataNode.reportBadBlocks

2016-06-14 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330989#comment-15330989
 ] 

Yiqun Lin commented on HDFS-10512:
--

The failed test {{TestPendingInvalidateBlock }} was tracked by HDFS-10426, 
other failed unit tests are not related, thanks for review.

> VolumeScanner may terminate due to NPE in DataNode.reportBadBlocks
> --
>
> Key: HDFS-10512
> URL: https://issues.apache.org/jira/browse/HDFS-10512
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Yiqun Lin
> Attachments: HDFS-10512.001.patch, HDFS-10512.002.patch
>
>
> VolumeScanner may terminate due to an unexpected NullPointerException thrown in 
> {{DataNode.reportBadBlocks()}}. This is different from HDFS-8850/HDFS-9190.
> I observed this bug in a production CDH 5.5.1 cluster and the same bug still 
> persists in upstream trunk.
> {noformat}
> 2016-04-07 20:30:53,830 WARN 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Reporting bad 
> BP-1800173197-10.204.68.5-125156296:blk_1170134484_96468685 on /dfs/dn
> 2016-04-07 20:30:53,831 ERROR 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/dfs/dn, 
> DS-89b72832-2a8c-48f3-8235-48e6c5eb5ab3) exiting because of exception
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.reportBadBlocks(DataNode.java:1018)
> at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner$ScanResultHandler.handle(VolumeScanner.java:287)
> at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.scanBlock(VolumeScanner.java:443)
> at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:547)
> at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:621)
> 2016-04-07 20:30:53,832 INFO 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/dfs/dn, 
> DS-89b72832-2a8c-48f3-8235-48e6c5eb5ab3) exiting.
> {noformat}
> I think the NPE comes from the volume variable in the following code snippet. 
> Somehow the volume scanner knows the volume, but the datanode cannot look up 
> the volume using the block.
> {code}
> public void reportBadBlocks(ExtendedBlock block) throws IOException {
>   BPOfferService bpos = getBPOSForBlock(block);
>   FsVolumeSpi volume = getFSDataset().getVolume(block);
>   bpos.reportBadBlocks(
>       block, volume.getStorageID(), volume.getStorageType());
> }
> {code}






[jira] [Updated] (HDFS-1312) Re-balance disks within a Datanode

2016-06-14 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-1312:
---
Attachment: Architecture_and_test_update.pdf

Update to disk balancer arch and test plan

> Re-balance disks within a Datanode
> --
>
> Key: HDFS-1312
> URL: https://issues.apache.org/jira/browse/HDFS-1312
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Travis Crawford
>Assignee: Anu Engineer
> Attachments: Architecture_and_test_update.pdf, 
> Architecture_and_testplan.pdf, disk-balancer-proposal.pdf
>
>
> Filing this issue in response to ``full disk woes`` on hdfs-user.
> Datanodes fill their storage directories unevenly, leading to situations 
> where certain disks are full while others are significantly less used. Users 
> at many different sites have experienced this issue, and HDFS administrators 
> are taking steps like:
> - Manually rebalancing blocks in storage directories
> - Decommissioning nodes & later re-adding them
> There's a tradeoff between making use of all available spindles and filling 
> disks at roughly the same rate. Possible solutions include:
> - Weighting less-used disks heavier when placing new blocks on the datanode. 
> In write-heavy environments this will still make use of all spindles, 
> equalizing disk use over time.
> - Rebalancing blocks locally. This would help equalize disk use as disks are 
> added/replaced in older cluster nodes.
> Datanodes should actively manage their local disk so operator intervention is 
> not needed.
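
As a stopgap for the first option (weighting less-used disks at write time), HDFS 
already ships a pluggable volume choosing policy; this hdfs-site.xml snippet 
selects the space-aware one, while local rebalancing itself is what the attached 
disk balancer design adds:

{noformat}
<property>
  <name>dfs.datanode.fsdataset.volume.choosing.policy</name>
  <value>org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy</value>
</property>
{noformat}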






[jira] [Commented] (HDFS-10527) libhdfs++: hdfsGetBlockLocations doesn't null terminate ip address strings

2016-06-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330895#comment-15330895
 ] 

Hadoop QA commented on HDFS-10527:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
20s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 56s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 0s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 45s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 41s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 7s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12810677/HDFS-10527.HDFS-8707.000.patch
 |
| JIRA Issue | HDFS-10527 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux a4f841f25f82 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 7c1d5df |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| JDK v1.7.0_101  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15773/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15773/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: hdfsGetBlockLocations doesn't null terminate ip address strings
> --
>
> Key: HDFS-10527
> URL: https://issues.apache.or

[jira] [Commented] (HDFS-9922) Upgrade Domain placement policy status marks a good block in violation when there are decommissioned nodes

2016-06-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330899#comment-15330899
 ] 

Hadoop QA commented on HDFS-9922:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: The patch generated 10 
new + 9 unchanged - 2 fixed = 19 total (was 11) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 25s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 52s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12810679/HDFS-9922-trunk-v2.patch
 |
| JIRA Issue | HDFS-9922 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 08289f722c13 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c77a109 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15772/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15772/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15772/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15772/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15772/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access

2016-06-14 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330890#comment-15330890
 ] 

Duo Zhang commented on HDFS-9924:
-

My concern is that you cannot tell people that Hive is only compatible with 
hadoop-2.8.x, right?
For example, we set HBase to be compatible with hadoop-2.4+, so we usually 
optimize for all hadoop-2.4+ versions if possible instead of using a new 
feature only introduced in a newer version.

Here, a thread pool solution works for all hadoop-2.x versions. And it is not 
that terrible to have a 1MB stack size per thread... It is off-heap and only 
increases VSZ by 1MB, not RSS; RSS will grow on demand. And you can set a 
smaller stack size if you want to reduce the overhead.

For the implementation, what [~stack] said above is the experience we got from 
our write-ahead-log implementation. And for the Hive case here, yes, you have a 
different pattern. But it is not a good idea to wait on Futures sequentially. 
For example, say you have requests 0-99, request 1 is blocked for a long time, 
and requests 2-99 all fail. With your solution, you will block on request 1 
for a long time before resubmitting the failed requests 2-99. This is an 
inherent defect of lacking callback support. And a better solution is, sorry, 
but again, using multiple threads. With a thread pool and {{CompletionService}}, 
you can (sometimes) get the failed requests first.

Hope this could help. Thanks.
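
A minimal sketch of the thread pool + {{CompletionService}} pattern described 
above; the pool size, request count, and simulated failures are all illustrative, 
and a real caller would submit blocking DFS calls such as renames:

{code}
import java.io.IOException;
import java.util.concurrent.*;

public class CompletionServiceSketch {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(16);
    CompletionService<Integer> cs = new ExecutorCompletionService<>(pool);
    int requests = 100;
    for (int i = 0; i < requests; i++) {
      final int id = i;
      cs.submit(() -> {
        if (id == 1) {               // request 1 blocks for a long time
          Thread.sleep(10_000);
          return id;
        }
        if (id >= 2) {               // requests 2-99 fail fast
          throw new IOException("simulated failure " + id);
        }
        return id;                   // request 0 succeeds
      });
    }
    for (int i = 0; i < requests; i++) {
      // take() hands back whichever request finishes next, so the fast
      // failures (2-99) surface before the slow request 1 completes
      Future<Integer> done = cs.take();
      try {
        done.get();
      } catch (ExecutionException e) {
        // resubmit or record the failed request immediately
      }
    }
    pool.shutdown();
  }
}
{code}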

> [umbrella] Asynchronous HDFS Access
> ---
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support asynchronous calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.






[jira] [Updated] (HDFS-10520) DiskBalancer: Fix Checkstyle issues in test code

2016-06-14 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10520:

Status: Patch Available  (was: Open)

> DiskBalancer: Fix Checkstyle issues in test code
> 
>
> Key: HDFS-10520
> URL: https://issues.apache.org/jira/browse/HDFS-10520
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10520-HDFS-1312.001.patch
>
>
> Most of the test code in HDFS-1312 went in when we did not have checkstyle 
> enabled for tests. But checkstyle is enabled on trunk now and when we merge 
> this will create lot of messages. This patch cleans up important checkstyle 
> issues like missing JavaDoc etc. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10520) DiskBalancer: Fix Checkstyle issues in test code

2016-06-14 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10520:

Status: Open  (was: Patch Available)

> DiskBalancer: Fix Checkstyle issues in test code
> 
>
> Key: HDFS-10520
> URL: https://issues.apache.org/jira/browse/HDFS-10520
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10520-HDFS-1312.001.patch
>
>
> Most of the test code in HDFS-1312 went in when we did not have checkstyle 
> enabled for tests. But checkstyle is enabled on trunk now and when we merge 
> this will create lot of messages. This patch cleans up important checkstyle 
> issues like missing JavaDoc etc. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access

2016-06-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330867#comment-15330867
 ] 

stack commented on HDFS-9924:
-

I see. Thank you. I see what you want now.

Do you just need renames, or do you need more than rename? You want to do 
thousands of concurrent renames this way? Is that even going to work? Are you 
going to knock over the NN? Or won't you just have a bunch of outstanding calls 
blocked on remote NN locks? Won't you want to constrain how many ongoing calls 
there are?

> [umbrella] Asynchronous HDFS Access
> ---
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support asynchronous calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10520) DiskBalancer: Fix Checkstyle issues in test code

2016-06-14 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330858#comment-15330858
 ] 

Arpit Agarwal commented on HDFS-10520:
--

+1 pending Jenkins.

> DiskBalancer: Fix Checkstyle issues in test code
> 
>
> Key: HDFS-10520
> URL: https://issues.apache.org/jira/browse/HDFS-10520
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10520-HDFS-1312.001.patch
>
>
> Most of the test code in HDFS-1312 went in when we did not have checkstyle 
> enabled for tests. But checkstyle is enabled on trunk now and when we merge 
> this will create lot of messages. This patch cleans up important checkstyle 
> issues like missing JavaDoc etc. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10525) Fix NPE in CacheReplicationMonitor#rescanCachedBlockMap

2016-06-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330845#comment-15330845
 ] 

Hadoop QA commented on HDFS-10525:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 69m 9s 
{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 41s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12810590/HDFS-10525.02.patch |
| JIRA Issue | HDFS-10525 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dbc32f751c5a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8e8cb4c |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15768/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15768/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Fix NPE in CacheReplicationMonitor#rescanCachedBlockMap
> ---
>
> Key: HDFS-10525
> URL: https://issues.apache.org/jira/browse/HDFS-10525
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10525.01.patch, HDFS-10525.02.patch
>
>




--
This message was sent by

[jira] [Commented] (HDFS-10528) Add logging to successful standby checkpointing

2016-06-14 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330846#comment-15330846
 ] 

Xiaoyu Yao commented on HDFS-10528:
---

Plan to add a log entry after {{lastCheckpointTime = now;}}.
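
A minimal sketch of the change (the log message wording below is illustrative, 
not the final patch):

{code}
lastCheckpointTime = now;
// hypothetical INFO entry recording a successful standby checkpoint
LOG.info("Checkpoint finished successfully at " + now);
{code}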

> Add logging to successful standby checkpointing
> ---
>
> Key: HDFS-10528
> URL: https://issues.apache.org/jira/browse/HDFS-10528
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> This ticket is opened to add INFO log for a successful standby checkpointing 
> in the code below for troubleshooting.
> {code}
> if (needCheckpoint) {
> doCheckpoint();
> // reset needRollbackCheckpoint to false only when we finish a 
> ckpt
> // for rollback image
> if (needRollbackCheckpoint
> && namesystem.getFSImage().hasRollbackFSImage()) {
>   namesystem.setCreatedRollbackImages(true);
>   namesystem.setNeedRollbackFsImage(false);
> }
> lastCheckpointTime = now;
>   }
> } catch (SaveNamespaceCancelledException ce) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7597) DNs should not open new NN connections when webhdfs clients seek

2016-06-14 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330824#comment-15330824
 ] 

Xiao Chen commented on HDFS-7597:
-

I tried the patch on latest trunk and it still applies. I also tried to run the 
May 23 failed tests a couple of times and cannot reproduce the failures. (The 
run is too old to see what exactly was failing in those tests.)

Should we trigger a new jenkins run and commit this patch? Thanks.

> DNs should not open new NN connections when webhdfs clients seek
> 
>
> Key: HDFS-7597
> URL: https://issues.apache.org/jira/browse/HDFS-7597
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7597.patch, HDFS-7597.patch, HDFS-7597.patch
>
>
> Webhdfs seeks involve closing the current connection, and reissuing a new 
> open request with the new offset.  The RPC layer caches connections so the DN 
> keeps a lingering connection open to the NN.  Connection caching is in part 
> based on UGI.  Although the client used the same token for the new offset 
> request, the UGI is different which forces the DN to open another unnecessary 
> connection to the NN.
> A job that performs many seeks will easily crash the NN due to fd exhaustion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10528) Add logging to successful standby checkpointing

2016-06-14 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-10528:
-

 Summary: Add logging to successful standby checkpointing
 Key: HDFS-10528
 URL: https://issues.apache.org/jira/browse/HDFS-10528
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This ticket is opened to add INFO log for a successful standby checkpointing in 
the code below for troubleshooting.

{code}
if (needCheckpoint) {
doCheckpoint();
// reset needRollbackCheckpoint to false only when we finish a ckpt
// for rollback image
if (needRollbackCheckpoint
&& namesystem.getFSImage().hasRollbackFSImage()) {
  namesystem.setCreatedRollbackImages(true);
  namesystem.setNeedRollbackFsImage(false);
}
lastCheckpointTime = now;
  }
} catch (SaveNamespaceCancelledException ce) {
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9016) Display upgrade domain information in fsck

2016-06-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330835#comment-15330835
 ] 

Hadoop QA commented on HDFS-9016:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 0 new + 410 unchanged - 22 fixed = 410 total (was 432) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 59m 15s 
{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 13s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12810597/HDFS-9016-4.patch |
| JIRA Issue | HDFS-9016 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9366672afb68 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8e8cb4c |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15769/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15769/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Display upgrade domain information in fsck
> --
>
> Key: HDFS-9016
> URL: https://issues.apache.org/jira/browse/HDFS-9016
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-9016-2.patch, HDFS-9016-3.patch, HDFS-9016-4.patch, 
> HDFS-9016-4.patch, HDFS-9016-branch-2-2.patch, HDFS-9016-branch-2.patch, 
> HDFS-9016.patch
>
>
> This will make it easy for people to use fsck to check block placement when 
> upgrade domain is enabled.

[jira] [Updated] (HDFS-9922) Upgrade Domain placement policy status marks a good block in violation when there are decommissioned nodes

2016-06-14 Thread Chris Trezzo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Trezzo updated HDFS-9922:
---
Attachment: HDFS-9922-trunk-v3.patch

V3 attached.
# Fixed import issues in TestUpgradeDomainBlockPlacementPolicy.
# Removed extra live node filter in BlockPlacementPolicyWithUpgradeDomain.

> Upgrade Domain placement policy status marks a good block in violation when 
> there are decommissioned nodes
> --
>
> Key: HDFS-9922
> URL: https://issues.apache.org/jira/browse/HDFS-9922
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>Priority: Minor
> Attachments: HDFS-9922-trunk-v1.patch, HDFS-9922-trunk-v2.patch, 
> HDFS-9922-trunk-v3.patch
>
>
> When there are replicas of a block on a decommissioned node, 
> BlockPlacementStatusWithUpgradeDomain#isUpgradeDomainPolicySatisfied returns 
> false when it should return true. This is because numberOfReplicas is the 
> number of in-service replicas for the block and upgradeDomains.size() is the 
> number of upgrade domains across all replicas of the block. Specifically, we 
> hit this scenario when numberOfReplicas is equal to upgradeDomainFactor and 
> upgradeDomains.size() is greater than numberOfReplicas.
> {code}
> private boolean isUpgradeDomainPolicySatisfied() {
> if (numberOfReplicas <= upgradeDomainFactor) {
>   return (numberOfReplicas == upgradeDomains.size());
> } else {
>   return upgradeDomains.size() >= upgradeDomainFactor;
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-9924) [umbrella] Asynchronous HDFS Access

2016-06-14 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330715#comment-15330715
 ] 

Xiaobing Zhou edited comment on HDFS-9924 at 6/14/16 10:12 PM:
---

bq. In my experience, a bunch of rigging (threads) for polling, rather than 
notification, is required when all you have is a Future to work with.

This is not true. There's no need to do threaded polling. You can look at 
TestAsyncDFSRename#testConcurrentAsyncRename for an example of how simple it is 
to use, e.g.:
{code}
Map<Integer, Future<Void>> retFutures =
    new HashMap<Integer, Future<Void>>();
int start = 0, end = 0;
for (int i = 0; i < NUM_TESTS; i++) {
  for (;;) {
    try {
      Future<Void> retFuture = adfs.rename(srcs[i], dsts[i],
          Rename.OVERWRITE);
      retFutures.put(i, retFuture);
      break;
    } catch (AsyncCallLimitExceededException e) {
      /**
       * reached limit of async calls, fetch results of finished async calls
       * to let follow-on calls go
       */
      start = end;
      end = i;
      waitForReturnValues(retFutures, start, end);
    }
  }
}
waitForReturnValues(retFutures, end, NUM_TESTS);

  void waitForReturnValues(final Map<Integer, Future<Void>> retFutures,
      final int start, final int end) throws InterruptedException,
      ExecutionException {
    for (int i = start; i < end; i++) {
      retFutures.get(i).get();
    }
  }
{code}


was (Author: xiaobingo):
bq. In my experience, a bunch of rigging (threads) for polling, rather than 
notification, is required when all you have is a Future to work with.

This is not true. There's no need to do threaded polling. You can look at 
TestDiskBalancerCommand#testConcurrentAsyncRename for an example of how simple 
it is to use, e.g.:
{code}
Map<Integer, Future<Void>> retFutures =
    new HashMap<Integer, Future<Void>>();
int start = 0, end = 0;
for (int i = 0; i < NUM_TESTS; i++) {
  for (;;) {
    try {
      Future<Void> retFuture = adfs.rename(srcs[i], dsts[i],
          Rename.OVERWRITE);
      retFutures.put(i, retFuture);
      break;
    } catch (AsyncCallLimitExceededException e) {
      /**
       * reached limit of async calls, fetch results of finished async calls
       * to let follow-on calls go
       */
      start = end;
      end = i;
      waitForReturnValues(retFutures, start, end);
    }
  }
}
waitForReturnValues(retFutures, end, NUM_TESTS);

  void waitForReturnValues(final Map<Integer, Future<Void>> retFutures,
      final int start, final int end) throws InterruptedException,
      ExecutionException {
    for (int i = start; i < end; i++) {
      retFutures.get(i).get();
    }
  }
{code}

> [umbrella] Asynchronous HDFS Access
> ---
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support asynchronous calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9016) Display upgrade domain information in fsck

2016-06-14 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330737#comment-15330737
 ] 

Lei (Eddy) Xu commented on HDFS-9016:
-

+1, pending jenkins.

Thanks Ming!

> Display upgrade domain information in fsck
> --
>
> Key: HDFS-9016
> URL: https://issues.apache.org/jira/browse/HDFS-9016
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-9016-2.patch, HDFS-9016-3.patch, HDFS-9016-4.patch, 
> HDFS-9016-4.patch, HDFS-9016-branch-2-2.patch, HDFS-9016-branch-2.patch, 
> HDFS-9016.patch
>
>
> This will make it easy for people to use fsck to check block placement when 
> upgrade domain is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-9924) [umbrella] Asynchronous HDFS Access

2016-06-14 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330715#comment-15330715
 ] 

Xiaobing Zhou edited comment on HDFS-9924 at 6/14/16 10:01 PM:
---

bq. In my experience, a bunch of rigging (threads) for polling, rather than 
notification, is required when all you have is a Future to work with.

This is not true. There's no need to do threaded polling. You can look at 
TestDiskBalancerCommand#testConcurrentAsyncRename for an example of how simple 
it is to use, e.g.:
{code}
Map<Integer, Future<Void>> retFutures =
    new HashMap<Integer, Future<Void>>();
int start = 0, end = 0;
for (int i = 0; i < NUM_TESTS; i++) {
  for (;;) {
    try {
      Future<Void> retFuture = adfs.rename(srcs[i], dsts[i],
          Rename.OVERWRITE);
      retFutures.put(i, retFuture);
      break;
    } catch (AsyncCallLimitExceededException e) {
      /**
       * reached limit of async calls, fetch results of finished async calls
       * to let follow-on calls go
       */
      start = end;
      end = i;
      waitForReturnValues(retFutures, start, end);
    }
  }
}
waitForReturnValues(retFutures, end, NUM_TESTS);

  void waitForReturnValues(final Map<Integer, Future<Void>> retFutures,
      final int start, final int end) throws InterruptedException,
      ExecutionException {
    for (int i = start; i < end; i++) {
      retFutures.get(i).get();
    }
  }
{code}


was (Author: xiaobingo):
bq. In my experience, a bunch of rigging (threads) for polling, rather than 
notification, is required when all you have is a Future to work with.

This is not true. There's no need to do threaded polling. You can look at 
TestDiskBalancerCommand#testConcurrentAsyncRename for an example of how simple 
it is to use, e.g.:
{code}
Map<Integer, Future<Void>> retFutures =
    new HashMap<Integer, Future<Void>>();
int start = 0, end = 0;
for (int i = 0; i < NUM_TESTS; i++) {
  for (;;) {
    try {
      Future<Void> retFuture = adfs.rename(srcs[i], dsts[i],
          Rename.OVERWRITE);
      retFutures.put(i, retFuture);
      break;
    } catch (AsyncCallLimitExceededException e) {
      /**
       * reached limit of async calls, fetch results of finished async calls
       * to let follow-on calls go
       */
      start = end;
      end = i;
      waitForReturnValues(retFutures, start, end);
    }
  }
}
waitForReturnValues(retFutures, end, NUM_TESTS);

  void waitForReturnValues(final Map<Integer, Future<Void>> retFutures,
      final int start, final int end) throws InterruptedException,
      ExecutionException {
    for (int i = start; i < end; i++) {
      retFutures.get(i).get();
    }
  }
{code}

> [umbrella] Asynchronous HDFS Access
> ---
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support asynchronous calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access

2016-06-14 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330715#comment-15330715
 ] 

Xiaobing Zhou commented on HDFS-9924:
-

bq. In my experience, a bunch of rigging (threads) for polling, rather than 
notification, is required when all you have is a Future to work with.

This is not true. There's no need to do threaded polling. You can look at 
TestDiskBalancerCommand#testConcurrentAsyncRename for an example of how simple 
it is to use, e.g.:
{code}
Map<Integer, Future<Void>> retFutures =
    new HashMap<Integer, Future<Void>>();
int start = 0, end = 0;
for (int i = 0; i < NUM_TESTS; i++) {
  for (;;) {
    try {
      Future<Void> retFuture = adfs.rename(srcs[i], dsts[i],
          Rename.OVERWRITE);
      retFutures.put(i, retFuture);
      break;
    } catch (AsyncCallLimitExceededException e) {
      /**
       * reached limit of async calls, fetch results of finished async calls
       * to let follow-on calls go
       */
      start = end;
      end = i;
      waitForReturnValues(retFutures, start, end);
    }
  }
}
waitForReturnValues(retFutures, end, NUM_TESTS);

  void waitForReturnValues(final Map<Integer, Future<Void>> retFutures,
      final int start, final int end) throws InterruptedException,
      ExecutionException {
    for (int i = start; i < end; i++) {
      retFutures.get(i).get();
    }
  }
{code}

> [umbrella] Asynchronous HDFS Access
> ---
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support asynchronous calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10524) libhdfs++: Implement chmod and chown

2016-06-14 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330670#comment-15330670
 ] 

James Clampffer commented on HDFS-10524:


Just a couple of minor things; everything looks solid functionally.

Do we need to use std::tuple in FileSystemImpl::SetPermission and SetOwner to 
make 1-tuples?  They'll likely get inlined away, but fewer templates generally 
make things easier to debug.
{code}
auto callstate =
    std::make_shared<std::promise<std::tuple<Status>>>();
...
callstate->set_value(std::make_tuple(s));
...
auto returnstate = future.get();
Status stat = std::get<0>(returnstate);
{code}

It looks like there is some duplicate error-checking code.  
FileSystemImpl::SetPermissions will check the path and permission mask and then 
call NameNodeOperations::SetPermission, which does the same. Do we plan on 
making the NameNodeOperations object accessible outside of FileSystem(Impl)?  
If not, it might be worth getting rid of the checks in one of them, or better 
yet factoring the checking code out into a function and using it in both 
places.







> libhdfs++: Implement chmod and chown
> 
>
> Key: HDFS-10524
> URL: https://issues.apache.org/jira/browse/HDFS-10524
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-10524.HDFS-8707.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-06-14 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330667#comment-15330667
 ] 

Konstantin Shvachko commented on HDFS-10301:


Colin, you seem to imply that I ignored some of your questions, but I don't see 
which. Could you please restate your question so that I can answer it, if you 
have any?

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Colin Patrick McCabe
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.01.patch, 
> HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When NameNode is busy a DataNode can timeout sending a block report. Then it 
> sends the block report again. Then NameNode while process these two reports 
> at the same time can interleave processing storages from different reports. 
> This screws up the blockReportId field, which makes NameNode think that some 
> storages are zombie. Replicas from zombie storages are immediately removed, 
> causing missing blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access

2016-06-14 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330655#comment-15330655
 ] 

Ashutosh Chauhan commented on HDFS-9924:


We can use a simple Future by making lots of calls in a loop, collecting an 
array of Futures, and then calling Future#get in a loop. This simple usage 
solves our problem of making synchronous calls and waiting for the full 
round-trip latency of each call. A thread pool has high overhead: each thread 
needs 1MB of memory, so 1000 threads need 1GB, which is non-trivial. 
Additionally, a callback is not needed for the Hive use case.
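
For reference, a minimal sketch of that usage, borrowing the 
{{adfs}}/{{srcs}}/{{dsts}} names from the test snippets elsewhere on this 
thread ({{n}} is a hypothetical request count; {{java.util}} and 
{{java.util.concurrent}} imports assumed):

{code}
List<Future<Void>> futures = new ArrayList<Future<Void>>(n);
for (int i = 0; i < n; i++) {
  futures.add(adfs.rename(srcs[i], dsts[i], Rename.OVERWRITE));
}
// a single pass of Future#get waits for the slowest call overall,
// instead of paying a full round trip per call
for (Future<Void> f : futures) {
  f.get();
}
{code}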

> [umbrella] Asynchronous HDFS Access
> ---
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support asynchronous calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9922) Upgrade Domain placement policy status marks a good block in violation when there are decommissioned nodes

2016-06-14 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330646#comment-15330646
 ] 

Chris Trezzo commented on HDFS-9922:


Also, thank you [~mingma] for the review and help on unit tests!

> Upgrade Domain placement policy status marks a good block in violation when 
> there are decommissioned nodes
> --
>
> Key: HDFS-9922
> URL: https://issues.apache.org/jira/browse/HDFS-9922
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>Priority: Minor
> Attachments: HDFS-9922-trunk-v1.patch, HDFS-9922-trunk-v2.patch
>
>
> When there are replicas of a block on a decommissioned node, 
> BlockPlacementStatusWithUpgradeDomain#isUpgradeDomainPolicySatisfied returns 
> false when it should return true. This is because numberOfReplicas is the 
> number of in-service replicas for the block and upgradeDomains.size() is the 
> number of upgrade domains across all replicas of the block. Specifically, we 
> hit this scenario when numberOfReplicas is equal to upgradeDomainFactor and 
> upgradeDomains.size() is greater than numberOfReplicas.
> {code}
> private boolean isUpgradeDomainPolicySatisfied() {
> if (numberOfReplicas <= upgradeDomainFactor) {
>   return (numberOfReplicas == upgradeDomains.size());
> } else {
>   return upgradeDomains.size() >= upgradeDomainFactor;
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9922) Upgrade Domain placement policy status marks a good block in violation when there are decommissioned nodes

2016-06-14 Thread Chris Trezzo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Trezzo updated HDFS-9922:
---
Attachment: HDFS-9922-trunk-v2.patch

V2 attached.
# Spoke with [~mingma] offline and will not modify the name of 
{{numOfReplicas}} since this might not be the replication factor when erasure 
coding is in use.
# Modified the condition in BlockPlacementStatusWithUpgradeDomain.
# Added more tests in TestUpgradeDomainBlockPlacementPolicy.

> Upgrade Domain placement policy status marks a good block in violation when 
> there are decommissioned nodes
> --
>
> Key: HDFS-9922
> URL: https://issues.apache.org/jira/browse/HDFS-9922
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>Priority: Minor
> Attachments: HDFS-9922-trunk-v1.patch, HDFS-9922-trunk-v2.patch
>
>
> When there are replicas of a block on a decommissioned node, 
> BlockPlacementStatusWithUpgradeDomain#isUpgradeDomainPolicySatisfied returns 
> false when it should return true. This is because numberOfReplicas is the 
> number of in-service replicas for the block and upgradeDomains.size() is the 
> number of upgrade domains across all replicas of the block. Specifically, we 
> hit this scenario when numberOfReplicas is equal to upgradeDomainFactor and 
> upgradeDomains.size() is greater than numberOfReplicas.
> {code}
> private boolean isUpgradeDomainPolicySatisfied() {
> if (numberOfReplicas <= upgradeDomainFactor) {
>   return (numberOfReplicas == upgradeDomains.size());
> } else {
>   return upgradeDomains.size() >= upgradeDomainFactor;
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-14 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330637#comment-15330637
 ] 

Uma Maheswara Rao G commented on HDFS-10473:


{quote}
We also need to consider other scenarios such as the NameNode tries to choose 
extra datanodes to reconstruct EC blocks with missing internal blocks (i.e., 
{{ErasureCodingWork#chooseTargets}}). Maybe we can consider adding the extra 
check introduced in the current patch directly in 
{{INodeFile#getStoragePolicyID}}?
{quote}
Good point, [~jingzhao]. Let me update the patch to cover this scenario.
Thanks a lot, Zhe, for the reviews.
[~arpitagarwal] Thanks. Sure.

> Allow only suitable storage policies to be set on striped files
> ---
>
> Key: HDFS-10473
> URL: https://issues.apache.org/jira/browse/HDFS-10473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10473-01.patch, HDFS-10473-02.patch, 
> HDFS-10473-03.patch
>
>
> Currently some of existing storage policies are not suitable for striped 
> layout files.
> This JIRA proposes to reject setting storage policy on striped files.
> Another thought is to allow only suitable storage polices like ALL_SSD.
> Since the major use case of EC is for cold data, this may not be at high 
> importance. So, I am ok to reject setting storage policy on striped files at 
> this stage. Please suggest if others have some thoughts on this.
> Thanks [~zhz] for offline discussion on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10527) libhdfs++: hdfsGetBlockLocations doesn't null terminate ip address strings

2016-06-14 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-10527:
---
Status: Patch Available  (was: Open)

> libhdfs++: hdfsGetBlockLocations doesn't null terminate ip address strings
> --
>
> Key: HDFS-10527
> URL: https://issues.apache.org/jira/browse/HDFS-10527
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10527.HDFS-8707.000.patch
>
>
> Code looks like this
> {code}
> strncpy(buf, ipaddr.c_str(),ipaddr.size());
> {code}
> But should be
> {code}
> strncpy(buf, ipaddr.c_str(),ipaddr.size()+1);
> {code}
> In order to make sure there is at least one null terminating byte.  If we 
> could run the minidfscluster in another process and run valgrind on the 
> libhdfs++ tests, this would show up really quickly as a sequence of invalid 
> reads when that const char* was passed to std::string::string(const char*), 
> strlen, or strcpy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10527) libhdfs++: hdfsGetBlockLocations doesn't null terminate ip address strings

2016-06-14 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-10527:
---
Attachment: HDFS-10527.HDFS-8707.000.patch

Attached fix.  Not much to it.

> libhdfs++: hdfsGetBlockLocations doesn't null terminate ip address strings
> --
>
> Key: HDFS-10527
> URL: https://issues.apache.org/jira/browse/HDFS-10527
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10527.HDFS-8707.000.patch
>
>
> Code looks like this
> {code}
> strncpy(buf, ipaddr.c_str(),ipaddr.size());
> {code}
> But should be
> {code}
> strncpy(buf, ipaddr.c_str(),ipaddr.size()+1);
> {code}
> In order to make sure there is at least one null terminating byte.  If we 
> could run the minidfscluster in another process and run valgrind on the 
> libhdfs++ tests, this would show up really quickly as a sequence of invalid 
> reads when that const char* was passed to std::string::string(const char*), 
> strlen, or strcpy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9758) libhdfs++: Implement Python bindings

2016-06-14 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330602#comment-15330602
 ] 

James Clampffer commented on HDFS-9758:
---

Sorry, didn't see this until now. It's just a work in progress; it looks like 
[~anatoli.shein] is getting a lot of the remaining RPC calls finished up.

> libhdfs++: Implement Python bindings
> 
>
> Key: HDFS-9758
> URL: https://issues.apache.org/jira/browse/HDFS-9758
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Tibor Kiss
> Attachments: hdfs_posix.py
>
>
> It'd be really useful to have bindings for various scripting languages.  
> Python would be a good start because of it's popularity and how easy it is to 
> interact with shared libraries using the ctypes module.  I think bindings for 
> the V8 engine that nodeJS uses would be a close second in terms of expanding 
> the potential user base.
> Probably worth starting with just adding a synchronous API and building from 
> there to avoid interactions with python's garbage collector until the 
> bindings prove to be solid.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10511) libhdfs++: make error returning mechanism consistent across all hdfs operations

2016-06-14 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330598#comment-15330598
 ] 

Bob Hansen commented on HDFS-10511:
---

Thanks, [~anatoli.shein].

A few more little places we should touch for consistency:

Let's have hdfsGetLastError return -1 or 0 also.
The hdfs*Builder* and hdfs*Conf* functions should set errno to 0 on entry.
hdfsBuilderConfGetInt should set errno to 0 on success and set errno and return 
-1 on failure.
hdfs*Logging* should set errno to 0 on success and set errno and return -1 on 
failure.




> libhdfs++: make error returning mechanism consistent across all hdfs 
> operations
> ---
>
> Key: HDFS-10511
> URL: https://issues.apache.org/jira/browse/HDFS-10511
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-10511.HDFS-8707.000.patch, 
> HDFS-10511.HDFS-8707.000.patch
>
>
> Errno should always be set.
> If function is returning a code on stack, it should be consistent with errno.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-14 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330596#comment-15330596
 ] 

Zhe Zhang commented on HDFS-10473:
--

Sure!

> Allow only suitable storage policies to be set on striped files
> ---
>
> Key: HDFS-10473
> URL: https://issues.apache.org/jira/browse/HDFS-10473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10473-01.patch, HDFS-10473-02.patch, 
> HDFS-10473-03.patch
>
>
> Currently some of existing storage policies are not suitable for striped 
> layout files.
> This JIRA proposes to reject setting storage policy on striped files.
> Another thought is to allow only suitable storage polices like ALL_SSD.
> Since the major use case of EC is for cold data, this may not be at high 
> importance. So, I am ok to reject setting storage policy on striped files at 
> this stage. Please suggest if others have some thoughts on this.
> Thanks [~zhz] for offline discussion on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-14 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330595#comment-15330595
 ] 

Jing Zhao commented on HDFS-10473:
--

Thanks for updating the patch, Uma. And thanks for the review, Zhe.

Only updating {{validateAddBlock}} may not be enough. We also need to consider 
other scenarios such as the NameNode tries to choose extra datanodes to 
reconstruct EC blocks with missing internal blocks (i.e., 
{{ErasureCodingWork#chooseTargets}}). Maybe we can consider adding the extra 
check introduced in the current patch directly in 
{{INodeFile#getStoragePolicyID}}?

> Allow only suitable storage policies to be set on striped files
> ---
>
> Key: HDFS-10473
> URL: https://issues.apache.org/jira/browse/HDFS-10473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10473-01.patch, HDFS-10473-02.patch, 
> HDFS-10473-03.patch
>
>
> Currently some of existing storage policies are not suitable for striped 
> layout files.
> This JIRA proposes to reject setting storage policy on striped files.
> Another thought is to allow only suitable storage polices like ALL_SSD.
> Since the major use case of EC is for cold data, this may not be at high 
> importance. So, I am ok to reject setting storage policy on striped files at 
> this stage. Please suggest if others have some thoughts on this.
> Thanks [~zhz] for offline discussion on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10515) libhdfs++: Implement mkdirs, rmdir, rename, and remove

2016-06-14 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-10515:
-
Attachment: HDFS-10515.HDFS-8707.001.patch

Patch with the 1-file test fixed.

> libhdfs++: Implement mkdirs, rmdir, rename, and remove
> --
>
> Key: HDFS-10515
> URL: https://issues.apache.org/jira/browse/HDFS-10515
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-10515.HDFS-8707.000.patch, 
> HDFS-10515.HDFS-8707.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-14 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330587#comment-15330587
 ] 

Arpit Agarwal commented on HDFS-10473:
--

Hi [~zhz], [~umamaheswararao],

Can you please hold off committing this patch? I would like to take a look at 
it by tomorrow.

> Allow only suitable storage policies to be set on striped files
> ---
>
> Key: HDFS-10473
> URL: https://issues.apache.org/jira/browse/HDFS-10473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10473-01.patch, HDFS-10473-02.patch, 
> HDFS-10473-03.patch
>
>
> Currently some of existing storage policies are not suitable for striped 
> layout files.
> This JIRA proposes to reject setting storage policy on striped files.
> Another thought is to allow only suitable storage polices like ALL_SSD.
> Since the major use case of EC is for cold data, this may not be at high 
> importance. So, I am ok to reject setting storage policy on striped files at 
> this stage. Please suggest if others have some thoughts on this.
> Thanks [~zhz] for offline discussion on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10441) libhdfs++: HA namenode support

2016-06-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330585#comment-15330585
 ] 

Hadoop QA commented on HDFS-10441:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
35s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 26s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 26s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 7s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 12s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 16s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12810598/HDFS-10441.HDFS-8707.004.patch
 |
| JIRA Issue | HDFS-10441 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 8e41cacf59bc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 7c1d5df |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| JDK v1.7.0_101  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15767/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15767/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: HA namenode support
> --
>
> Key: HDFS-10441
> URL: https://issues.apache.org/jira/browse/HDFS-10441
> Project: Hadoop HDFS
>  Issue Type: Sub-task

[jira] [Resolved] (HDFS-8715) Checkpoint node keeps throwing exception

2016-06-14 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze resolved HDFS-8715.
---
Resolution: Duplicate

> Checkpoint node keeps throwing exception
> 
>
> Key: HDFS-8715
> URL: https://issues.apache.org/jira/browse/HDFS-8715
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.5.2
> Environment: centos 6.4, sun jdk 1.7
>Reporter: Jiahongchao
>
> I tried to start a checkpoint node using "bin/hdfs namenode -checkpoint", but it 
> keeps printing
> 15/07/03 23:16:22 ERROR namenode.FSNamesystem: Swallowing exception in 
> NameNodeEditLogRoller:
> java.lang.IllegalStateException: Bad state: BETWEEN_LOG_SEGMENTS
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:172)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.getCurSegmentTxId(FSEditLog.java:495)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:4718)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10524) libhdfs++: Implement chmod and chown

2016-06-14 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-10524:
-
Attachment: HDFS-10524.HDFS-8707.000.patch

Patch is attached. Please review.

> libhdfs++: Implement chmod and chown
> 
>
> Key: HDFS-10524
> URL: https://issues.apache.org/jira/browse/HDFS-10524
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-10524.HDFS-8707.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10524) libhdfs++: Implement chmod and chown

2016-06-14 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-10524:
-
Status: Patch Available  (was: Open)

> libhdfs++: Implement chmod and chown
> 
>
> Key: HDFS-10524
> URL: https://issues.apache.org/jira/browse/HDFS-10524
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10525) Fix NPE in CacheReplicationMonitor#rescanCachedBlockMap

2016-06-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330503#comment-15330503
 ] 

Colin Patrick McCabe commented on HDFS-10525:
-

+1.  Thanks, [~xiaochen].

> Fix NPE in CacheReplicationMonitor#rescanCachedBlockMap
> ---
>
> Key: HDFS-10525
> URL: https://issues.apache.org/jira/browse/HDFS-10525
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10525.01.patch, HDFS-10525.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10505) OIV's ReverseXML processor should support ACLs

2016-06-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330492#comment-15330492
 ] 

Colin Patrick McCabe commented on HDFS-10505:
-

Thanks for this, [~surendrasingh].  It's good to see progress on supporting 
ACLs here!

I am confused by the changes for setting {{latestStringId}} to 1, or 
special-casing {{null}} in {{registerStringId}}.  If we are going to do 
"magical" things with special indexes in the string table, we need to document 
it somewhere.  Actually, though, I would prefer to simply handle it without the 
magic.  We know that a null entry for an ACL name simply means that the name 
was an empty string.  You can see that in {{AclEntry.java}}:
{code}
  String name = split[index];
  if (!name.isEmpty()) {
builder.setName(name);
  }
{code}

In ReverseXML, we should simply translate these {{null}} ACL names back into 
empty strings, and then the existing logic for handling the string table would 
work, with no magic.  We also need a test case that has null ACL names, so 
that this code path is exercised.
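As a rough illustration (a sketch only, not the actual patch; the helper name 
is made up), the translation could be as simple as:
{code}
// Hypothetical helper for the ReverseXML processor: a null ACL name read
// back from fsimage.xml stands for an empty-string name, so normalize it
// before the string-table lookup.
private static String normalizeAclName(String nameFromXml) {
  return (nameFromXml == null) ? "" : nameFromXml;
}
{code}
With that in place, the existing string-table registration path needs no 
special cases.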

> OIV's ReverseXML processor should support ACLs
> --
>
> Key: HDFS-10505
> URL: https://issues.apache.org/jira/browse/HDFS-10505
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-10505-001.patch
>
>
> OIV's ReverseXML processor should support ACLs.  Currently ACLs show up in 
> the fsimage.xml file, but we don't reconstruct them with ReverseXML.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-9650) Problem is logging of "Redundant addStoredBlock request received"

2016-06-14 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDFS-9650:


Assignee: Xiaoyu Yao

> Problem is logging of "Redundant addStoredBlock request received"
> -
>
> Key: HDFS-9650
> URL: https://issues.apache.org/jira/browse/HDFS-9650
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Frode Halvorsen
>Assignee: Xiaoyu Yao
>
> Description:
> Hadoop 2.7.1, 2 namenodes in HA, 14 datanodes.
> Enough CPU, disk, and RAM.
> Just discovered that some datanodes must have been corrupted somehow.
> When restarting a 'defect' datanode (it works without failure except when 
> restarting), the active namenode suddenly logs a lot of "Redundant 
> addStoredBlock request received" messages,
> and finally the failover-controller takes the namenode down and fails over to 
> the other node. That node also starts logging the same, and as soon as the first 
> node is back online, the failover-controller again kills the active node and 
> fails over.
> That node was now started after the datanode, no longer logs "Redundant 
> addStoredBlock request received", and a restart of the second namenode 
> works fine.
> If I restart the datanode again, the process repeats itself.
> The problem is the logging of "Redundant addStoredBlock request received" -- and 
> why does it happen?
> The failover-controller acts the same way as it did on 2.5/2.6 when we had a 
> lot of 'block does not belong to any replica' messages: the namenode is too busy 
> to respond to heartbeats, and is taken down...
> To resolve this, I have to take down the datanode, delete all data from it, 
> and start it up. The cluster will then re-create the missing blocks, and the 
> failing datanode works fine again...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10441) libhdfs++: HA namenode support

2016-06-14 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-10441:
---
Attachment: HDFS-10441.HDFS-8707.004.patch

Rebased onto the current HDFS-8707, plus the fix for HDFS-10527 (a 2-liner).

> libhdfs++: HA namenode support
> --
>
> Key: HDFS-10441
> URL: https://issues.apache.org/jira/browse/HDFS-10441
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10441.HDFS-8707.000.patch, 
> HDFS-10441.HDFS-8707.002.patch, HDFS-10441.HDFS-8707.003.patch, 
> HDFS-10441.HDFS-8707.004.patch, HDFS-8707.HDFS-10441.001.patch
>
>
> If a cluster is HA enabled then do proper failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10527) libhdfs++: hdfsGetBlockLocations doesn't null terminate ip address strings

2016-06-14 Thread James Clampffer (JIRA)
James Clampffer created HDFS-10527:
--

 Summary: libhdfs++: hdfsGetBlockLocations doesn't null terminate 
ip address strings
 Key: HDFS-10527
 URL: https://issues.apache.org/jira/browse/HDFS-10527
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer


The code looks like this:
{code}
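// bug: copies at most ipaddr.size() bytes, so the trailing '\0' is never written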
strncpy(buf, ipaddr.c_str(),ipaddr.size());
{code}

But it should be:
{code}
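// fix: the +1 copies the trailing '\0' too (assumes buf has room for size()+1 bytes)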
strncpy(buf, ipaddr.c_str(),ipaddr.size()+1);
{code}

The +1 makes sure there is at least one null-terminating byte.  If we could 
run the minidfscluster in another process and run valgrind on the libhdfs++ 
tests, this would show up really quickly as a sequence of invalid reads when 
that const char* was passed to std::string::string(const char*), strlen, or 
strcpy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9016) Display upgrade domain information in fsck

2016-06-14 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-9016:
--
Attachment: HDFS-9016-4.patch

Thanks, Allen. Re-uploaded the trunk patch.

> Display upgrade domain information in fsck
> --
>
> Key: HDFS-9016
> URL: https://issues.apache.org/jira/browse/HDFS-9016
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-9016-2.patch, HDFS-9016-3.patch, HDFS-9016-4.patch, 
> HDFS-9016-4.patch, HDFS-9016-branch-2-2.patch, HDFS-9016-branch-2.patch, 
> HDFS-9016.patch
>
>
> This will make it easy for people to use fsck to check block placement when 
> upgrade domain is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10441) libhdfs++: HA namenode support

2016-06-14 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer reassigned HDFS-10441:
--

Assignee: James Clampffer  (was: Bob Hansen)

> libhdfs++: HA namenode support
> --
>
> Key: HDFS-10441
> URL: https://issues.apache.org/jira/browse/HDFS-10441
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10441.HDFS-8707.000.patch, 
> HDFS-10441.HDFS-8707.002.patch, HDFS-10441.HDFS-8707.003.patch, 
> HDFS-8707.HDFS-10441.001.patch
>
>
> If a cluster is HA enabled then do proper failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access

2016-06-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330173#comment-15330173
 ] 

stack commented on HDFS-9924:
-

bq. There are multiple comments from both sides indicating that 
CompletableFuture is the ideal option for 3.x.

[~arpiagariu] Please leave off concluding a discussion that is still ongoing 
(CF is not 'ideal' and is not a given). It doesn't help, sir.

bq. You mean just like we recently added 'avoid local nodes' because another 
downstream component wanted to try it? 

You misrepresent, again. HBase ran for years with a workaround while waiting on 
the behavior to show up in HDFS; i.e. the hbase project did not have an 
'interest' in 'avoid local nodes'; they required this behavior of the 
filesystem and ran with a suboptimal hack until it showed up.

In this case all we have is 'interest' and requests for technical justification 
go unanswered.

bq. The Hive engineers think they can make it work for them and there was a 
compromise proposed to introduce the API as unstable.

I'm interested in how Hive will do async w/ only a Future and in how this 
suboptimal API in particular will solve their issue (is it described 
anywhere?). In my experience, a bunch of rigging (threads) for polling, rather 
than notification, is required when all you have is a Future to work with.
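To make the distinction concrete, here is a minimal, self-contained sketch 
(plain JDK, nothing HDFS-specific; the slow call just stands in for an 
asynchronous filesystem operation):
{code}
import java.util.concurrent.*;

public class FutureVsCompletableFuture {
  // Stand-in for an asynchronous HDFS call (illustrative only).
  static Long slowCall() {
    try { Thread.sleep(100); } catch (InterruptedException ignored) { }
    return 42L;
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);

    // Plain Future: some thread has to park in get() (or spin on isDone())
    // just to learn that the call finished.
    Future<Long> f = pool.submit(FutureVsCompletableFuture::slowCall);
    pool.submit(() -> {
      System.out.println("Future result: " + f.get()); // blocks a thread
      return null;
    });

    // CompletableFuture: completion itself triggers the follow-up work;
    // no extra thread is parked waiting.
    CompletableFuture.supplyAsync(FutureVsCompletableFuture::slowCall, pool)
        .thenAccept(v -> System.out.println("CF result: " + v));

    pool.shutdown();
    pool.awaitTermination(5, TimeUnit.SECONDS);
  }
}
{code}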




> [umbrella] Asynchronous HDFS Access
> ---
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support asynchronous calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-14 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330163#comment-15330163
 ] 

Zhe Zhang commented on HDFS-10473:
--

Thanks Uma for the patch and Jing for the comments. The v3 patch LGTM. +1, and 
let's keep the patch open until the end of today in case Jing wants to take a look.

> Allow only suitable storage policies to be set on striped files
> ---
>
> Key: HDFS-10473
> URL: https://issues.apache.org/jira/browse/HDFS-10473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10473-01.patch, HDFS-10473-02.patch, 
> HDFS-10473-03.patch
>
>
> Currently some of the existing storage policies are not suitable for striped 
> layout files.
> This JIRA proposes to reject setting a storage policy on striped files.
> Another thought is to allow only suitable storage policies like ALL_SSD.
> Since the major use case of EC is cold data, this may not be of high 
> importance. So, I am OK with rejecting setting a storage policy on striped files 
> at this stage. Please suggest if others have some thoughts on this.
> Thanks [~zhz] for the offline discussion on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-14 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10473:
-
Hadoop Flags: Reviewed

> Allow only suitable storage policies to be set on striped files
> ---
>
> Key: HDFS-10473
> URL: https://issues.apache.org/jira/browse/HDFS-10473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10473-01.patch, HDFS-10473-02.patch, 
> HDFS-10473-03.patch
>
>
> Currently some of the existing storage policies are not suitable for striped 
> layout files.
> This JIRA proposes to reject setting a storage policy on striped files.
> Another thought is to allow only suitable storage policies like ALL_SSD.
> Since the major use case of EC is cold data, this may not be of high 
> importance. So, I am OK with rejecting setting a storage policy on striped files 
> at this stage. Please suggest if others have some thoughts on this.
> Thanks [~zhz] for the offline discussion on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10525) Fix NPE in CacheReplicationMonitor#rescanCachedBlockMap

2016-06-14 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330153#comment-15330153
 ] 

Xiao Chen edited comment on HDFS-10525 at 6/14/16 6:52 PM:
---

Thanks [~cmccabe], good idea to have more logging to track this. Patch 2 added 
the log. Truly sorry for missing this check in HDFS-9549.


was (Author: xiaochen):
Thanks [~cmccabe], good idea to have more logging to track this. Patch added 
the log. Truly sorry for missing this check in HDFS-9549.

> Fix NPE in CacheReplicationMonitor#rescanCachedBlockMap
> ---
>
> Key: HDFS-10525
> URL: https://issues.apache.org/jira/browse/HDFS-10525
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10525.01.patch, HDFS-10525.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10525) Fix NPE in CacheReplicationMonitor#rescanCachedBlockMap

2016-06-14 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10525:
-
Attachment: HDFS-10525.02.patch

Thanks [~cmccabe], good idea to have more logging to track this. Patch added 
the log. Truly sorry for missing this check in HDFS-9549.

> Fix NPE in CacheReplicationMonitor#rescanCachedBlockMap
> ---
>
> Key: HDFS-10525
> URL: https://issues.apache.org/jira/browse/HDFS-10525
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10525.01.patch, HDFS-10525.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10525) Fix NPE in CacheReplicationMonitor#rescanCachedBlockMap

2016-06-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330133#comment-15330133
 ] 

Colin Patrick McCabe commented on HDFS-10525:
-

Thanks, [~xiaochen].  Can you add a {{LOG.debug}} to the "if" statement that 
talks about the block ID that is getting skipped?

+1 once that's done.

> Fix NPE in CacheReplicationMonitor#rescanCachedBlockMap
> ---
>
> Key: HDFS-10525
> URL: https://issues.apache.org/jira/browse/HDFS-10525
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10525.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access

2016-06-14 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330056#comment-15330056
 ] 

Arpit Agarwal commented on HDFS-9924:
-

There are multiple comments from both sides indicating that CompletableFuture 
is the ideal option for 3.x.

bq. I'd hope that it takes more than 'interest' to get code committed to HDFS.
You mean just like we recently added 'avoid local nodes' because another 
downstream component wanted to try it? :)

bq. If a technical argument on why Future will fix a codebases's scaling 
problem can't be produced,
[~stack] what kind of argument are you looking for? The Hive engineers think 
they can make it work for them and there was a compromise proposed to introduce 
the API as unstable. So what is the compelling argument against this approach?

> [umbrella] Asynchronous HDFS Access
> ---
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support asynchronous calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10525) Fix NPE in CacheReplicationMonitor#rescanCachedBlockMap

2016-06-14 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10525:
-
Attachment: HDFS-10525.01.patch

Patch 1 fixes this in the capacity check.

The check added in HDFS-9549 was to handle a rare case where a block gets stuck 
in the pending list if the capacity is reached.

I think there are 2 options for the fix:
- ignore a null blockInfo. We already have logic to remove blocks that cannot 
be found on the NN.
- remove it immediately. I feel this may add a source of confusion, since it 
adds one more place where removal happens.

To limit the change scope, patch 1 goes with option 1; a rough sketch is below.
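A minimal sketch of option 1 (for discussion only; this is not the exact patch, 
and the variable names are guessed from the surrounding code):
{code}
// Inside the rescan loop: resolve the cached block against the NameNode's
// block map, and skip (rather than NPE on) entries that are no longer there.
BlockInfo blockInfo =
    blockManager.getStoredBlock(new Block(cblock.getBlockId()));
if (blockInfo == null) {
  // Block is gone from the NN; the existing cleanup logic will remove it.
  LOG.debug("Block " + cblock.getBlockId()
      + " not found on the NameNode; skipping rescan of this entry.");
  continue;
}
{code}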

> Fix NPE in CacheReplicationMonitor#rescanCachedBlockMap
> ---
>
> Key: HDFS-10525
> URL: https://issues.apache.org/jira/browse/HDFS-10525
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10525.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10525) Fix NPE in CacheReplicationMonitor#rescanCachedBlockMap

2016-06-14 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10525:
-
Status: Patch Available  (was: Open)

> Fix NPE in CacheReplicationMonitor#rescanCachedBlockMap
> ---
>
> Key: HDFS-10525
> URL: https://issues.apache.org/jira/browse/HDFS-10525
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10525.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10526) libhdfs++: Add connect timeouts to async_connect calls

2016-06-14 Thread Bob Hansen (JIRA)
Bob Hansen created HDFS-10526:
-

 Summary: libhdfs++: Add connect timeouts to async_connect calls
 Key: HDFS-10526
 URL: https://issues.apache.org/jira/browse/HDFS-10526
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Bob Hansen
Assignee: Bob Hansen






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10519) Add a configuration option to enable in-progress edit log tailing

2016-06-14 Thread Jiayi Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330013#comment-15330013
 ] 

Jiayi Zhou commented on HDFS-10519:
---

Maybe I got something wrong, but isn't the MetaRecoveryContext already null at 
the beginning?

In doTailEdits() from EditLogTailer, we have
  streams = editLog.selectInputStreams(lastTxnId + 1, 0, null, inProgressTail);
Here, the MetaRecoveryContext is null. And in image.loadEdits(), we have
  public long loadEdits(Iterable<EditLogInputStream> editStreams,
      FSNamesystem target) throws IOException {
    return loadEdits(editStreams, target, null, null);
  }
The MetaRecoveryContext is also null.

Also, I don't know why toAtLeastTxId is always set to 0 in the implementation, 
which makes the checkForGaps() method useless. If we set it to some other value, 
like lastTxnId + 1, multiple replays of the same segment should be rejected.

> Add a configuration option to enable in-progress edit log tailing
> -
>
> Key: HDFS-10519
> URL: https://issues.apache.org/jira/browse/HDFS-10519
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10519.001.patch, HDFS-10519.002.patch
>
>
> The Standby Namenode has the option to tail in-progress edit logs to improve 
> data freshness. In-progress tailing is already implemented, but it's not 
> enabled in the default configuration, and there's no related configuration key 
> to turn it on.
> Adding such a configuration key to let the Standby Namenode enable it is 
> reasonable and would be a basis for further improvements to the Standby Namenode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9016) Display upgrade domain information in fsck

2016-06-14 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330008#comment-15330008
 ] 

Allen Wittenauer commented on HDFS-9016:


You'll need to re-upload the trunk patch.  You can't submit two patches at once 
to precommit; it only uses the last one attached.

> Display upgrade domain information in fsck
> --
>
> Key: HDFS-9016
> URL: https://issues.apache.org/jira/browse/HDFS-9016
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-9016-2.patch, HDFS-9016-3.patch, HDFS-9016-4.patch, 
> HDFS-9016-branch-2-2.patch, HDFS-9016-branch-2.patch, HDFS-9016.patch
>
>
> This will make it easy for people to use fsck to check block placement when 
> upgrade domain is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9016) Display upgrade domain information in fsck

2016-06-14 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-9016:
--
Attachment: HDFS-9016-branch-2-2.patch

New branch-2 patch to fix checkstyle issues.

> Display upgrade domain information in fsck
> --
>
> Key: HDFS-9016
> URL: https://issues.apache.org/jira/browse/HDFS-9016
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-9016-2.patch, HDFS-9016-3.patch, HDFS-9016-4.patch, 
> HDFS-9016-branch-2-2.patch, HDFS-9016-branch-2.patch, HDFS-9016.patch
>
>
> This will make it easy for people to use fsck to check block placement when 
> upgrade domain is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10525) Fix NPE in CacheReplicationMonitor#rescanCachedBlockMap

2016-06-14 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15329994#comment-15329994
 ] 

Xiao Chen commented on HDFS-10525:
--

An example stack trace is
{noformat}
2016-06-13 15:20:32,769 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: 
Rescanning because of pending operations
2016-06-13 15:20:32,770 ERROR 
org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Thread 
exiting
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.rescanCachedBlockMap(CacheReplicationMonitor.java:507)
at 
org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.rescan(CacheReplicationMonitor.java:305)
at 
org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:191)
{noformat}

We should handle the case where the block is not available from the NN.

> Fix NPE in CacheReplicationMonitor#rescanCachedBlockMap
> ---
>
> Key: HDFS-10525
> URL: https://issues.apache.org/jira/browse/HDFS-10525
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.8.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10525) Fix NPE in CacheReplicationMonitor#rescanCachedBlockMap

2016-06-14 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-10525:


 Summary: Fix NPE in CacheReplicationMonitor#rescanCachedBlockMap
 Key: HDFS-10525
 URL: https://issues.apache.org/jira/browse/HDFS-10525
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: caching
Affects Versions: 2.8.0
Reporter: Xiao Chen
Assignee: Xiao Chen






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10505) OIV's ReverseXML processor should support ACLs

2016-06-14 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15329928#comment-15329928
 ] 

Surendra Singh Lilhore commented on HDFS-10505:
---

Whitespace warnings are unrelated to this patch.

> OIV's ReverseXML processor should support ACLs
> --
>
> Key: HDFS-10505
> URL: https://issues.apache.org/jira/browse/HDFS-10505
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-10505-001.patch
>
>
> OIV's ReverseXML processor should support ACLs.  Currently ACLs show up in 
> the fsimage.xml file, but we don't reconstruct them with ReverseXML.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8046) Allow better control of getContentSummary

2016-06-14 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15329917#comment-15329917
 ] 

Xiao Chen commented on HDFS-8046:
-

Thanks for the quick action Kihwal!

> Allow better control of getContentSummary
> -
>
> Key: HDFS-8046
> URL: https://issues.apache.org/jira/browse/HDFS-8046
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>  Labels: 2.6.1-candidate, 2.7.2-candidate
> Fix For: 2.6.1, 2.7.2
>
> Attachments: HDFS-8046-branch-2.6.1.txt, HDFS-8046.v1.patch
>
>
> On busy clusters, users performing quota checks against a big directory 
> structure can affect namenode performance. It has become a lot better 
> after HDFS-4995, but as clusters get bigger and busier, it is apparent that 
> we need finer-grained control to avoid a long read lock causing a throughput drop.
> Even with the unfair namesystem lock setting, a long read lock (tens of 
> milliseconds) can starve many readers and especially writers. So the locking 
> duration should be reduced, which can be done by imposing a lower 
> count-per-iteration limit in the existing implementation.  But HDFS-4995 came 
> with a fixed amount of sleep between locks. This needs to be made 
> configurable so that {{getContentSummary()}} doesn't get exceedingly slow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10521) hdfs dfs -du -s / returns incorrect summary

2016-06-14 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen resolved HDFS-10521.
--
Resolution: Duplicate

This was done during the time jira wasn't stable, so I ended up creating 
several of these...

> hdfs dfs -du -s / returns incorrect summary
> ---
>
> Key: HDFS-10521
> URL: https://issues.apache.org/jira/browse/HDFS-10521
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>
> {{hdfs dfs -du -s /}} sometimes returns an incomplete calculation if the file 
> count is larger than the configured {{dfs.content-summary.limit}} (default=5000).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10522) hdfs dfs -du -s / may return incorrect summary

2016-06-14 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen resolved HDFS-10522.
--
Resolution: Duplicate

This was done during the time jira wasn't stable, so I ended up creating 
several of these...

> hdfs dfs -du -s / may return incorrect summary
> --
>
> Key: HDFS-10522
> URL: https://issues.apache.org/jira/browse/HDFS-10522
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>
> {{hdfs dfs -du -s /}} sometimes returns an incomplete calculation if the file 
> count is larger than the configured {{dfs.content-summary.limit}} (default=5000).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10524) libhdfs++: Implement chmod and chown

2016-06-14 Thread Anatoli Shein (JIRA)
Anatoli Shein created HDFS-10524:


 Summary: libhdfs++: Implement chmod and chown
 Key: HDFS-10524
 URL: https://issues.apache.org/jira/browse/HDFS-10524
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Anatoli Shein






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10524) libhdfs++: Implement chmod and chown

2016-06-14 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein reassigned HDFS-10524:


Assignee: Anatoli Shein

> libhdfs++: Implement chmod and chown
> 
>
> Key: HDFS-10524
> URL: https://issues.apache.org/jira/browse/HDFS-10524
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10519) Add a configuration option to enable in-progress edit log tailing

2016-06-14 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15329706#comment-15329706
 ] 

Kihwal Lee commented on HDFS-10519:
---

How does it actually work? Can it replay the same edit log segment multiple 
times? It might have to replay the same in-progress segment multiple times or 
after it is finalized.
I thought it would blow up with {{MetaRecoveryContext}} being null.

> Add a configuration option to enable in-progress edit log tailing
> -
>
> Key: HDFS-10519
> URL: https://issues.apache.org/jira/browse/HDFS-10519
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10519.001.patch, HDFS-10519.002.patch
>
>
> The Standby Namenode has the option to tail in-progress edit logs to improve 
> data freshness. In-progress tailing is already implemented, but it's not 
> enabled in the default configuration, and there's no related configuration key 
> to turn it on.
> Adding such a configuration key to let the Standby Namenode enable it is 
> reasonable and would be a basis for further improvements to the Standby Namenode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10515) libhdfs++: Implement mkdirs, rmdir, rename, and remove

2016-06-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15329639#comment-15329639
 ] 

Hadoop QA commented on HDFS-10515:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 53s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
6s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 4s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 2s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 59s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 45s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 34s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12810398/HDFS-10515.HDFS-8707.000.patch
 |
| JIRA Issue | HDFS-10515 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux ec52fb970317 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 7c1d5df |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| JDK v1.7.0_101  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15764/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15764/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Implement mkdirs, rmdir, rename, and remove
> --
>
> Key: HDFS-10515
> URL: https://issues.apache.org/jira/browse/HDFS-10515
> Project: Hadoop HDFS

[jira] [Updated] (HDFS-10515) libhdfs++: Implement mkdirs, rmdir, rename, and remove

2016-06-14 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-10515:
-
Attachment: HDFS-10515.HDFS-8707.000.patch

Patch is attached. Please review.

> libhdfs++: Implement mkdirs, rmdir, rename, and remove
> --
>
> Key: HDFS-10515
> URL: https://issues.apache.org/jira/browse/HDFS-10515
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-10515.HDFS-8707.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10515) libhdfs++: Implement mkdirs, rmdir, rename, and remove

2016-06-14 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-10515:
-
Status: Patch Available  (was: Open)

> libhdfs++: Implement mkdirs, rmdir, rename, and remove
> --
>
> Key: HDFS-10515
> URL: https://issues.apache.org/jira/browse/HDFS-10515
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8046) Allow better control of getContentSummary

2016-06-14 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15329542#comment-15329542
 ] 

Kihwal Lee commented on HDFS-8046:
--

[~xiaochen], thanks for letting me know. I've cherry-picked the fix to 2.6 and 
2.7.

> Allow better control of getContentSummary
> -
>
> Key: HDFS-8046
> URL: https://issues.apache.org/jira/browse/HDFS-8046
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>  Labels: 2.6.1-candidate, 2.7.2-candidate
> Fix For: 2.6.1, 2.7.2
>
> Attachments: HDFS-8046-branch-2.6.1.txt, HDFS-8046.v1.patch
>
>
> On busy clusters, users performing quota checks against a big directory 
> structure can affect namenode performance. It has become a lot better 
> after HDFS-4995, but as clusters get bigger and busier, it is apparent that 
> we need finer-grained control to avoid a long read lock causing a throughput drop.
> Even with the unfair namesystem lock setting, a long read lock (tens of 
> milliseconds) can starve many readers and especially writers. So the locking 
> duration should be reduced, which can be done by imposing a lower 
> count-per-iteration limit in the existing implementation.  But HDFS-4995 came 
> with a fixed amount of sleep between locks. This needs to be made 
> configurable so that {{getContentSummary()}} doesn't get exceedingly slow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8581) ContentSummary on / skips further counts on yielding lock

2016-06-14 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-8581:
-
Fix Version/s: (was: 2.8.0)
   2.6.5
   2.7.3

> ContentSummary on / skips further counts on yielding lock
> -
>
> Key: HDFS-8581
> URL: https://issues.apache.org/jira/browse/HDFS-8581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: tongshiquan
>Assignee: J.Andreina
>Priority: Minor
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HDFS-8581.1.patch, HDFS-8581.2.patch, HDFS-8581.3.patch, 
> HDFS-8581.4.patch
>
>
> If one directory such as "/result" contains about 20 files, then when 
> "hdfs dfs -count /" is executed, the result will be wrong. For all directories 
> whose names sort after "/result", the file count will not be included.
> My cluster is shown below; "/result_1433858936" is the directory containing the 
> huge number of files, and files in "/sparkJobHistory", "/tmp", "/user" are not 
> included:
> vm-221:/export1/BigData/current # hdfs dfs -ls /
> 15/06/11 11:00:17 INFO hdfs.PeerCache: SocketCache disabled.
> Found 9 items
> -rw-r--r--   3 hdfs   supergroup  0 2015-06-08 12:10 
> /PRE_CREATE_DIR.SUCCESS
> drwxr-x---   - flume  hadoop  0 2015-06-08 12:08 /flume
> drwx--   - hbase  hadoop  0 2015-06-10 15:25 /hbase
> drwxr-xr-x   - hdfs   supergroup  0 2015-06-10 17:19 /hyt
> drwxrwxrwx   - mapred hadoop  0 2015-06-08 12:08 /mr-history
> drwxr-xr-x   - hdfs   supergroup  0 2015-06-09 22:10 
> /result_1433858936
> drwxrwxrwx   - spark  supergroup  0 2015-06-10 19:15 /sparkJobHistory
> drwxrwxrwx   - hdfs   hadoop  0 2015-06-08 12:14 /tmp
> drwxrwxrwx   - hdfs   hadoop  0 2015-06-09 21:57 /user
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /
> 15/06/11 11:00:24 INFO hdfs.PeerCache: SocketCache disabled.
> 1043   171536 1756375688 /
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /PRE_CREATE_DIR.SUCCESS
> 15/06/11 11:00:30 INFO hdfs.PeerCache: SocketCache disabled.
>01  0 /PRE_CREATE_DIR.SUCCESS
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /flume
> 15/06/11 11:00:41 INFO hdfs.PeerCache: SocketCache disabled.
>10  0 /flume
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /hbase
> 15/06/11 11:00:49 INFO hdfs.PeerCache: SocketCache disabled.
>   36   18  14807 /hbase
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /hyt
> 15/06/11 11:01:09 INFO hdfs.PeerCache: SocketCache disabled.
>10  0 /hyt
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /mr-history
> 15/06/11 11:01:18 INFO hdfs.PeerCache: SocketCache disabled.
>30  0 /mr-history
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /result_1433858936
> 15/06/11 11:01:29 INFO hdfs.PeerCache: SocketCache disabled.
> 1001   171517 1756360881 /result_1433858936
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /sparkJobHistory
> 15/06/11 11:01:41 INFO hdfs.PeerCache: SocketCache disabled.
>13  21785 /sparkJobHistory
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /tmp
> 15/06/11 11:01:48 INFO hdfs.PeerCache: SocketCache disabled.
>   176  35958 /tmp
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /user
> 15/06/11 11:01:55 INFO hdfs.PeerCache: SocketCache disabled.
>   121  19077 /user



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8581) ContentSummary on / skips further counts on yielding lock

2016-06-14 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15329502#comment-15329502
 ] 

Kihwal Lee commented on HDFS-8581:
--

Cherry-picked to branch-2.7 and branch-2.6. Thanks again for reporting and 
fixing the issue.

> ContentSummary on / skips further counts on yielding lock
> -
>
> Key: HDFS-8581
> URL: https://issues.apache.org/jira/browse/HDFS-8581
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: tongshiquan
>Assignee: J.Andreina
>Priority: Minor
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HDFS-8581.1.patch, HDFS-8581.2.patch, HDFS-8581.3.patch, 
> HDFS-8581.4.patch
>
>
> If one directory such as "/result" contains about 20 files, then when 
> "hdfs dfs -count /" is executed, the result will be wrong. For all directories 
> whose names sort after "/result", the file count will not be included.
> My cluster is shown below; "/result_1433858936" is the directory containing the 
> huge number of files, and files in "/sparkJobHistory", "/tmp", "/user" are not 
> included:
> vm-221:/export1/BigData/current # hdfs dfs -ls /
> 15/06/11 11:00:17 INFO hdfs.PeerCache: SocketCache disabled.
> Found 9 items
> -rw-r--r--   3 hdfs   supergroup  0 2015-06-08 12:10 
> /PRE_CREATE_DIR.SUCCESS
> drwxr-x---   - flume  hadoop  0 2015-06-08 12:08 /flume
> drwx--   - hbase  hadoop  0 2015-06-10 15:25 /hbase
> drwxr-xr-x   - hdfs   supergroup  0 2015-06-10 17:19 /hyt
> drwxrwxrwx   - mapred hadoop  0 2015-06-08 12:08 /mr-history
> drwxr-xr-x   - hdfs   supergroup  0 2015-06-09 22:10 
> /result_1433858936
> drwxrwxrwx   - spark  supergroup  0 2015-06-10 19:15 /sparkJobHistory
> drwxrwxrwx   - hdfs   hadoop  0 2015-06-08 12:14 /tmp
> drwxrwxrwx   - hdfs   hadoop  0 2015-06-09 21:57 /user
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /
> 15/06/11 11:00:24 INFO hdfs.PeerCache: SocketCache disabled.
>         1043       171536   1756375688 /
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /PRE_CREATE_DIR.SUCCESS
> 15/06/11 11:00:30 INFO hdfs.PeerCache: SocketCache disabled.
>            0            1            0 /PRE_CREATE_DIR.SUCCESS
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /flume
> 15/06/11 11:00:41 INFO hdfs.PeerCache: SocketCache disabled.
>            1            0            0 /flume
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /hbase
> 15/06/11 11:00:49 INFO hdfs.PeerCache: SocketCache disabled.
>           36           18        14807 /hbase
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /hyt
> 15/06/11 11:01:09 INFO hdfs.PeerCache: SocketCache disabled.
>            1            0            0 /hyt
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /mr-history
> 15/06/11 11:01:18 INFO hdfs.PeerCache: SocketCache disabled.
>            3            0            0 /mr-history
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /result_1433858936
> 15/06/11 11:01:29 INFO hdfs.PeerCache: SocketCache disabled.
>         1001       171517   1756360881 /result_1433858936
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /sparkJobHistory
> 15/06/11 11:01:41 INFO hdfs.PeerCache: SocketCache disabled.
>            1            3        21785 /sparkJobHistory
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /tmp
> 15/06/11 11:01:48 INFO hdfs.PeerCache: SocketCache disabled.
>   176  35958 /tmp
> vm-221:/export1/BigData/current # 
> vm-221:/export1/BigData/current # hdfs dfs -count /user
> 15/06/11 11:01:55 INFO hdfs.PeerCache: SocketCache disabled.
>   121  19077 /user
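
For context: {{-count}} prints DIR_COUNT, FILE_COUNT, CONTENT_SIZE and 
PATHNAME, and the underlying ContentSummary computation periodically yields 
the namesystem lock so that counting a huge subtree does not starve writers 
(the throttle is {{dfs.content-summary.limit}}; see HDFS-10522 below). A 
minimal sketch of that yield-and-resume pattern -- hypothetical names, not 
the actual FSDirectory code -- where the class of bug fixed here is resuming 
at the wrong position after a yield:

{code:java}
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch only: hypothetical names, not the real
// ContentSummaryComputationContext. count() assumes the caller already
// holds lock.readLock() on entry.
class CountSketch {
  static class Node {
    final long length;            // file size; 0 for a directory
    final List<Node> children;    // null for a file
    Node(long length, List<Node> children) {
      this.length = length;
      this.children = children;
    }
  }

  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private static final int LIMIT = 5000;  // role of dfs.content-summary.limit
  private long processed;
  long dirs, files, bytes;

  void count(Node dir) {
    dirs++;
    // Walk children by index so the loop resumes at the *next* sibling
    // after a yield. HDFS-8581 is exactly this class of bug: once the
    // lock had been yielded inside a huge directory, the remaining
    // entries (every name sorting after it) were skipped.
    for (int i = 0; i < dir.children.size(); i++) {
      Node child = dir.children.get(i);
      if (child.children == null) {
        files++;
        bytes += child.length;
      } else {
        count(child);
      }
      if (++processed % LIMIT == 0) {
        lock.readLock().unlock();   // give writers a chance
        lock.readLock().lock();     // re-acquire and continue at i + 1
      }
    }
  }
}
{code}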






[jira] [Commented] (HDFS-10522) hdfs dfs -du -s / may return incorrect summary

2016-06-14 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15329331#comment-15329331
 ] 

Wei-Chiu Chuang commented on HDFS-10522:


Is this a duplicate of HDFS-10521?

> hdfs dfs -du -s / may return incorrect summary
> --
>
> Key: HDFS-10522
> URL: https://issues.apache.org/jira/browse/HDFS-10522
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>
> {{hdfs dfs -du -s /}} sometimes returns an incomplete result if the file count 
> is larger than the configured {{dfs.content-summary.limit}} (default 5000).
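
For anyone reproducing this: the throttle named above is an ordinary 
configuration key, so it is easy to check what a given node is actually 
running with. A small sketch (the class name is ours; the key and its 
default come straight from the description):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Print the effective throttle. A plain Configuration only loads
// core-*.xml by default, so pull in hdfs-site.xml explicitly.
public class ShowSummaryLimit {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.addResource("hdfs-site.xml");
    int limit = conf.getInt("dfs.content-summary.limit", 5000);
    System.out.println("dfs.content-summary.limit = " + limit);
  }
}
{code}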






[jira] [Commented] (HDFS-10413) Implement asynchronous listStatus for DistributedFileSystem

2016-06-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15329298#comment-15329298
 ] 

Steve Loughran commented on HDFS-10413:
---

Can I note that, from the perspective of S3a, using listFiles(recursive=true) is 
significantly faster than using listStatus(). If code were encouraged to use 
that API rather than rolling its own treewalk, then anything that works with 
object stores would see a significant speedup.

Also, listFiles and similar use the RemoteIterator. That code can be async, to 
the extent that results can arrive while the client is still processing the 
previous ones. The code I'm doing in HADOOP-13208 doesn't go that far, but it 
does do windowed queries: you only get a window's worth of files listed, 
filtered and made available at a time, which keeps memory consumption down.
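
For reference, the pattern being recommended needs nothing beyond the stable 
FileSystem API; a sketch (class name and example paths are ours):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

// Enumerate a tree with listFiles(path, recursive=true) instead of a
// hand-rolled listStatus() treewalk. The RemoteIterator lets results
// stream in while earlier ones are still being processed.
public class ListFilesDemo {
  public static void main(String[] args) throws Exception {
    Path root = new Path(args[0]);  // e.g. s3a://bucket/prefix or hdfs://nn/user
    FileSystem fs = root.getFileSystem(new Configuration());
    RemoteIterator<LocatedFileStatus> it = fs.listFiles(root, true);
    long files = 0, bytes = 0;
    while (it.hasNext()) {
      LocatedFileStatus status = it.next();
      files++;
      bytes += status.getLen();
    }
    System.out.println(files + " files, " + bytes + " bytes under " + root);
  }
}
{code}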

> Implement asynchronous listStatus for DistributedFileSystem
> ---
>
> Key: HDFS-10413
> URL: https://issues.apache.org/jira/browse/HDFS-10413
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> Per the 
> [comment|https://issues.apache.org/jira/browse/HDFS-9924?focusedCommentId=15285597&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15285597]
>  from [~mingma], this Jira tracks efforts of implementing async listStatus.






[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access

2016-06-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15329291#comment-15329291
 ] 

Steve Loughran commented on HDFS-9924:
--

Stack: I said it. You don't need any other opinions :)

> [umbrella] Asynchronous HDFS Access
> ---
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  This is very slow when a client makes a large 
> number of independent calls in a single thread, since each call has to wait 
> until the previous one has finished, and it is inefficient when the client 
> instead creates a large number of threads to invoke the calls in parallel.
> We propose adding a new API to support asynchronous calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.
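
To make the calling convention concrete: the sketch below is only an 
illustration using a plain {{java.util.concurrent}} executor. It fakes 
asynchrony with a thread pool, which is precisely the per-call thread cost 
the proposal wants to avoid; what it does show is the Future-returning 
contract the new API would expose.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Each submit() returns immediately with a Future; the caller issues many
// independent calls from one thread and collects results later via get().
public class FutureShape {
  static boolean doBlockingCall(int n) {   // stand-in for e.g. a rename RPC
    return n >= 0;
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    List<Future<Boolean>> pending = new ArrayList<>();
    for (int i = 0; i < 100; i++) {
      final int n = i;
      pending.add(pool.submit(() -> doBlockingCall(n)));   // returns at once
    }
    for (Future<Boolean> f : pending) {
      f.get();   // block only when the result is actually needed
    }
    pool.shutdown();
  }
}
{code}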


