[jira] [Updated] (HDFS-10493) Add links to datanode web UI in namenode datanodes page

2016-06-17 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10493:
---
Attachment: dn_2_alive_1_down.002.jpg

> Add links to datanode web UI in namenode datanodes page
> ---
>
> Key: HDFS-10493
> URL: https://issues.apache.org/jira/browse/HDFS-10493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10493.001.patch, NN_DN_Links.jpg, 
> dn_2_alive_1_down.002.jpg, secure_nn_dn_links.jpg
>
>
> HDFS-10440 makes some improvements to the datanode UI; it would be good to 
> provide links from the namenode's datanodes page to each individual datanode 
> UI, so more datanode information can be checked easily.






[jira] [Updated] (HDFS-10493) Add links to datanode web UI in namenode datanodes page

2016-06-17 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10493:
---
Status: In Progress  (was: Patch Available)

> Add links to datanode web UI in namenode datanodes page
> ---
>
> Key: HDFS-10493
> URL: https://issues.apache.org/jira/browse/HDFS-10493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10493.001.patch, NN_DN_Links.jpg, 
> dn_2_alive_1_down.002.jpg, secure_nn_dn_links.jpg
>
>
> HDFS-10440 makes some improvements to the datanode UI; it would be good to 
> provide links from the namenode's datanodes page to each individual datanode 
> UI, so more datanode information can be checked easily.






[jira] [Updated] (HDFS-10493) Add links to datanode web UI in namenode datanodes page

2016-06-17 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10493:
---
Attachment: dn_http_addr_links.002.jpg

> Add links to datanode web UI in namenode datanodes page
> ---
>
> Key: HDFS-10493
> URL: https://issues.apache.org/jira/browse/HDFS-10493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10493.001.patch, NN_DN_Links.jpg, 
> dn_2_alive_1_down.002.jpg, dn_http_addr_links.002.jpg, secure_nn_dn_links.jpg
>
>
> HDFS-10440 makes some improvements to the datanode UI; it would be good to 
> provide links from the namenode's datanodes page to each individual datanode 
> UI, so more datanode information can be checked easily.






[jira] [Commented] (HDFS-10493) Add links to datanode web UI in namenode datanodes page

2016-06-17 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335655#comment-15335655
 ] 

Weiwei Yang commented on HDFS-10493:


Hello [~kihwal]

I agree with your comment, so I added an "Http Address" column for the info 
links, just like the links on the YARN nodes page. I just uploaded the v2 
patch; the screenshot can be seen at [^dn_http_addr_links.002.jpg], and when 
some datanodes are down the UI looks like [^dn_2_alive_1_down.002.jpg]. Please 
help review; I appreciate your suggestions.

Thank you :)
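
For illustration, here is a rough sketch of how such a link target could be 
built. The field names below (infoAddr, infoSecureAddr) are assumptions about 
the NameNode's LiveNodes JMX JSON, not something confirmed in this thread:

{code}
// Hypothetical helper: choose the datanode web UI address depending on
// whether HTTPS is enabled, and turn it into a link target for the new
// "Http Address" column.
static String datanodeUiLink(String infoAddr, String infoSecureAddr,
                             boolean httpsEnabled) {
  return httpsEnabled ? "https://" + infoSecureAddr
                      : "http://" + infoAddr;
}
{code}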

> Add links to datanode web UI in namenode datanodes page
> ---
>
> Key: HDFS-10493
> URL: https://issues.apache.org/jira/browse/HDFS-10493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10493.001.patch, NN_DN_Links.jpg, 
> dn_2_alive_1_down.002.jpg, dn_http_addr_links.002.jpg, secure_nn_dn_links.jpg
>
>
> HDFS-10440 makes some improvements to the datanode UI; it would be good to 
> provide links from the namenode's datanodes page to each individual datanode 
> UI, so more datanode information can be checked easily.






[jira] [Updated] (HDFS-10493) Add links to datanode web UI in namenode datanodes page

2016-06-17 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10493:
---
Attachment: HDFS-10493.002.patch

> Add links to datanode web UI in namenode datanodes page
> ---
>
> Key: HDFS-10493
> URL: https://issues.apache.org/jira/browse/HDFS-10493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10493.001.patch, HDFS-10493.002.patch, 
> NN_DN_Links.jpg, dn_2_alive_1_down.002.jpg, dn_http_addr_links.002.jpg, 
> secure_nn_dn_links.jpg
>
>
> HDFS-10440 makes some improvements to the datanode UI; it would be good to 
> provide links from the namenode's datanodes page to each individual datanode 
> UI, so more datanode information can be checked easily.






[jira] [Updated] (HDFS-10493) Add links to datanode web UI in namenode datanodes page

2016-06-17 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10493:
---
Status: Patch Available  (was: In Progress)

> Add links to datanode web UI in namenode datanodes page
> ---
>
> Key: HDFS-10493
> URL: https://issues.apache.org/jira/browse/HDFS-10493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10493.001.patch, HDFS-10493.002.patch, 
> NN_DN_Links.jpg, dn_2_alive_1_down.002.jpg, dn_http_addr_links.002.jpg, 
> secure_nn_dn_links.jpg
>
>
> HDFS-10440 makes some improvements to the datanode UI; it would be good to 
> provide links from the namenode's datanodes page to each individual datanode 
> UI, so more datanode information can be checked easily.






[jira] [Commented] (HDFS-10493) Add links to datanode web UI in namenode datanodes page

2016-06-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335680#comment-15335680
 ] 

Hadoop QA commented on HDFS-10493:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 38s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811318/HDFS-10493.002.patch |
| JIRA Issue | HDFS-10493 |
| Optional Tests |  asflicense  |
| uname | Linux 1cb7e7cbfaac 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 09e82ac |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15802/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Add links to datanode web UI in namenode datanodes page
> ---
>
> Key: HDFS-10493
> URL: https://issues.apache.org/jira/browse/HDFS-10493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10493.001.patch, HDFS-10493.002.patch, 
> NN_DN_Links.jpg, dn_2_alive_1_down.002.jpg, dn_http_addr_links.002.jpg, 
> secure_nn_dn_links.jpg
>
>
> HDFS-10440 makes some improvements to the datanode UI; it would be good to 
> provide links from the namenode's datanodes page to each individual datanode 
> UI, so more datanode information can be checked easily.






[jira] [Updated] (HDFS-10474) hftp copy fails when file name with Chinese+special char in branch-2

2016-06-17 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-10474:

Attachment: HDFS-10474-branch-2.7-001.patch

> hftp copy fails when file name with Chinese+special char in branch-2
> 
>
> Key: HDFS-10474
> URL: https://issues.apache.org/jira/browse/HDFS-10474
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10474-branch-2-001.patch, 
> HDFS-10474-branch-2-002.patch, HDFS-10474-branch-2.7-001.patch
>
>
> I have seen this while using distcp:
> {noformat}
> 16/05/23 20:35:34 INFO mapreduce.Job: Task Id : 
> attempt_1463564396851_0010_m_03_0, Status : FAILED
> Error: java.io.IOException: File copy failed: 
> hftp://*:25000/tmp/???@2X.png --> hdfs://hacluster/cxf7/tmp/节节高@2X.png
>   at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:285)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:253)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:180)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1672)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:174)
> Caused by: java.io.IOException: Couldn't run retriable-command: Copying 
> hftp://***:25000/tmp/节节高@2X.png to hdfs://hacluster/cxf7/tmp/节节高@2X.png
>   at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>   at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:281)
>   ... 10 more
> {noformat}
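
As a quick illustration of the underlying encoding problem (not the patch 
itself): a file name mixing CJK characters and "@" must be percent-encoded 
before being placed in an hftp URL, otherwise the request path is mangled as 
seen in the log above.

{code}
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class EncodeDemo {
  public static void main(String[] args) throws UnsupportedEncodingException {
    // Shows the expected escaping only; this is not the code in the patch.
    String encoded = URLEncoder.encode("节节高@2X.png", "UTF-8");
    System.out.println(encoded); // %E8%8A%82%E8%8A%82%E9%AB%98%402X.png
  }
}
{code}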






[jira] [Updated] (HDFS-10440) Improve DataNode web UI

2016-06-17 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10440:
---
Status: In Progress  (was: Patch Available)

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, datanode_2nns.html.002.jpg, datanode_html.001.jpg, 
> datanode_loading_err.002.jpg, datanode_utilities.001.jpg, 
> datanode_utilities.002.jpg, dn_web_ui.003.jpg, dn_web_ui_mockup.jpg, 
> nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't show much information beyond the node 
> name and port. We propose adding more information, similar to the namenode 
> UI, including:
> * Static info (version, block pool and cluster ID)
> * Block pool info (BP IDs, namenode address, actor states)
> * Storage info (volumes, capacity used, reserved, left)
> * Utilities (logs)






[jira] [Updated] (HDFS-10440) Improve DataNode web UI

2016-06-17 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10440:
---
Attachment: HDFS-10440.004.patch

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, datanode_2nns.html.002.jpg, 
> datanode_html.001.jpg, datanode_loading_err.002.jpg, 
> datanode_utilities.001.jpg, datanode_utilities.002.jpg, dn_web_ui.003.jpg, 
> dn_web_ui_mockup.jpg, nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't show much information beyond the node 
> name and port. We propose adding more information, similar to the namenode 
> UI, including:
> * Static info (version, block pool and cluster ID)
> * Block pool info (BP IDs, namenode address, actor states)
> * Storage info (volumes, capacity used, reserved, left)
> * Utilities (logs)






[jira] [Commented] (HDFS-10474) hftp copy fails when file name with Chinese+special char in branch-2

2016-06-17 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335746#comment-15335746
 ] 

Brahma Reddy Battula commented on HDFS-10474:
-

Uploaded the branch-2.7 patch; will wait for Jenkins before committing. 
[~vinayrpet], thanks for your review.

> hftp copy fails when file name with Chinese+special char in branch-2
> 
>
> Key: HDFS-10474
> URL: https://issues.apache.org/jira/browse/HDFS-10474
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10474-branch-2-001.patch, 
> HDFS-10474-branch-2-002.patch, HDFS-10474-branch-2.7-001.patch
>
>
> I have seen this while using distcp:
> {noformat}
> 16/05/23 20:35:34 INFO mapreduce.Job: Task Id : 
> attempt_1463564396851_0010_m_03_0, Status : FAILED
> Error: java.io.IOException: File copy failed: 
> hftp://*:25000/tmp/???@2X.png --> hdfs://hacluster/cxf7/tmp/节节高@2X.png
>   at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:285)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:253)
>   at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:180)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1672)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:174)
> Caused by: java.io.IOException: Couldn't run retriable-command: Copying 
> hftp://***:25000/tmp/节节高@2X.png to hdfs://hacluster/cxf7/tmp/节节高@2X.png
>   at 
> org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101)
>   at 
> org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:281)
>   ... 10 more
> {noformat}






[jira] [Commented] (HDFS-10440) Improve DataNode web UI

2016-06-17 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335756#comment-15335756
 ] 

Weiwei Yang commented on HDFS-10440:


Hello [~vinayrpet]

That's a good point. I just modified the patch to avoid breaking compatibility 
and uploaded the [v4 patch | ^HDFS-10440.004.patch]. I added a new entry 
*getBPServiceActorInfo()* in DatanodeMXBean; I prefer this name over 
getNameNodesInfo because it describes the returned information more 
accurately, but I am open to suggestions :). Please help review and let me 
know if this looks good to you.

Thanks
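
A minimal sketch of what the new MXBean entry could look like; the method name 
comes from the comment above, while the return format described here is an 
assumption, not the actual patch:

{code}
public interface DatanodeMXBean {
  // ... existing attributes elided ...

  // Assumed shape: a JSON-encoded map describing each BPServiceActor
  // (namenode address, actor state, last heartbeat), consumed by the
  // datanode web UI over JMX.
  String getBPServiceActorInfo();
}
{code}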

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, datanode_2nns.html.002.jpg, 
> datanode_html.001.jpg, datanode_loading_err.002.jpg, 
> datanode_utilities.001.jpg, datanode_utilities.002.jpg, dn_web_ui.003.jpg, 
> dn_web_ui_mockup.jpg, nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't show much information beyond the node 
> name and port. We propose adding more information, similar to the namenode 
> UI, including:
> * Static info (version, block pool and cluster ID)
> * Block pool info (BP IDs, namenode address, actor states)
> * Storage info (volumes, capacity used, reserved, left)
> * Utilities (logs)






[jira] [Updated] (HDFS-10440) Improve DataNode web UI

2016-06-17 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10440:
---
Hadoop Flags:   (was: Incompatible change)
  Status: Patch Available  (was: In Progress)

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, datanode_2nns.html.002.jpg, 
> datanode_html.001.jpg, datanode_loading_err.002.jpg, 
> datanode_utilities.001.jpg, datanode_utilities.002.jpg, dn_web_ui.003.jpg, 
> dn_web_ui_mockup.jpg, nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't show much information beyond the node 
> name and port. We propose adding more information, similar to the namenode 
> UI, including:
> * Static info (version, block pool and cluster ID)
> * Block pool info (BP IDs, namenode address, actor states)
> * Storage info (volumes, capacity used, reserved, left)
> * Utilities (logs)






[jira] [Commented] (HDFS-10493) Add links to datanode web UI in namenode datanodes page

2016-06-17 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335764#comment-15335764
 ] 

Weiwei Yang commented on HDFS-10493:


Just a note: though this one is marked as related to HDFS-10440, the patch is 
self-contained and can be reviewed and committed independently.

> Add links to datanode web UI in namenode datanodes page
> ---
>
> Key: HDFS-10493
> URL: https://issues.apache.org/jira/browse/HDFS-10493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10493.001.patch, HDFS-10493.002.patch, 
> NN_DN_Links.jpg, dn_2_alive_1_down.002.jpg, dn_http_addr_links.002.jpg, 
> secure_nn_dn_links.jpg
>
>
> HDFS-10440 makes some improvements to the datanode UI; it would be good to 
> provide links from the namenode's datanodes page to each individual datanode 
> UI, so more datanode information can be checked easily.






[jira] [Updated] (HDFS-10440) Improve DataNode web UI

2016-06-17 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10440:
---
Target Version/s: 2.8.0, 2.9.0, 3.0.0-alpha1  (was: 2.9.0, 3.0.0-alpha1)

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, datanode_2nns.html.002.jpg, 
> datanode_html.001.jpg, datanode_loading_err.002.jpg, 
> datanode_utilities.001.jpg, datanode_utilities.002.jpg, dn_web_ui.003.jpg, 
> dn_web_ui_mockup.jpg, nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't show much information beyond the node 
> name and port. We propose adding more information, similar to the namenode 
> UI, including:
> * Static info (version, block pool and cluster ID)
> * Block pool info (BP IDs, namenode address, actor states)
> * Storage info (volumes, capacity used, reserved, left)
> * Utilities (logs)






[jira] [Updated] (HDFS-10493) Add links to datanode web UI in namenode datanodes page

2016-06-17 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10493:
---
Target Version/s: 2.8.0, 2.9.0, 3.0.0-alpha1  (was: 2.9.0, 3.0.0-alpha1)

> Add links to datanode web UI in namenode datanodes page
> ---
>
> Key: HDFS-10493
> URL: https://issues.apache.org/jira/browse/HDFS-10493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10493.001.patch, HDFS-10493.002.patch, 
> NN_DN_Links.jpg, dn_2_alive_1_down.002.jpg, dn_http_addr_links.002.jpg, 
> secure_nn_dn_links.jpg
>
>
> HDFS-10440 makes some improvements to the datanode UI; it would be good to 
> provide links from the namenode's datanodes page to each individual datanode 
> UI, so more datanode information can be checked easily.






[jira] [Commented] (HDFS-10474) hftp copy fails when file name with Chinese+special char in branch-2

2016-06-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335795#comment-15335795
 ] 

Hadoop QA commented on HDFS-10474:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 14s 
{color} | {color:red} root in branch-2.7 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 10s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 40s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} branch-2.7 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 10s 
{color} | {color:red} hadoop-common in branch-2.7 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 9s 
{color} | {color:red} hadoop-hdfs in branch-2.7 failed. {color} |
| {color:red}-1{color} | {color:red} mvneclipse {color} | {color:red} 0m 9s 
{color} | {color:red} hadoop-common in branch-2.7 failed. {color} |
| {color:red}-1{color} | {color:red} mvneclipse {color} | {color:red} 0m 7s 
{color} | {color:red} hadoop-hdfs in branch-2.7 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 7s 
{color} | {color:red} hadoop-common in branch-2.7 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 8s 
{color} | {color:red} hadoop-hdfs in branch-2.7 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 7s 
{color} | {color:red} hadoop-common in branch-2.7 failed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 7s 
{color} | {color:red} hadoop-hdfs in branch-2.7 failed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 8s 
{color} | {color:red} hadoop-common in branch-2.7 failed with JDK v1.7.0_101. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 7s 
{color} | {color:red} hadoop-hdfs in branch-2.7 failed with JDK v1.7.0_101. 
{color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 6s 
{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 5s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 10s 
{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 10s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvneclipse {color} | {color:red} 0m 11s 
{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvneclipse {color} | {color:red} 0m 10s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2374 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 1m 1s 
{

[jira] [Commented] (HDFS-10440) Improve DataNode web UI

2016-06-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335837#comment-15335837
 ] 

Hadoop QA commented on HDFS-10440:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: The patch generated 5 
new + 208 unchanged - 0 fixed = 213 total (was 208) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 48s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 40s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 0s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Unread field:BPServiceActor.java:[line 1022] |
| Failed junit tests | hadoop.hdfs.TestAsyncHDFSWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811327/HDFS-10440.004.patch |
| JIRA Issue | HDFS-10440 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f727a1380f11 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 09e82ac |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15804/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15804/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15804/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15804/artifact/patchprocess/patch-unit-h

[jira] [Commented] (HDFS-9924) [umbrella] Nonblocking HDFS Access

2016-06-17 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335861#comment-15335861
 ] 

Steve Loughran commented on HDFS-9924:
--

That's the kind of thing to consider for a broad API, exactly what the full 
against-trunk, Java 8 work should do. Having implementations against more than 
just HDFS helps define an API that is stricter than just "this is what HDFS 
does", and forces in tests which run across the filesystems, so 
inconsistencies and regressions are found immediately.

I'd like an async delete too; on object stores it is O(objects) and inherently 
inconsistent anyway: you can't rely on delete(dir) returning as a guarantee 
that you get an FNFE on any getFileStatus() or open() down the tree.

> [umbrella] Nonblocking HDFS Access
> --
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: Async-HDFS-Performance-Report.pdf, AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Nonblocking HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support nonblocking calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.
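
A minimal sketch of the proposed call pattern; the {{AsyncRenamer}} interface 
below is a placeholder for illustration, not the API from the attached design 
docs:

{code}
import java.util.concurrent.Future;

// Placeholder interface; the real class and signatures are in the design docs.
interface AsyncRenamer {
  Future<Void> rename(String src, String dst); // returns immediately
}

static void callPattern(AsyncRenamer fs) throws Exception {
  Future<Void> pending = fs.rename("/a", "/b"); // caller is not blocked
  // ... issue further independent calls from the same thread ...
  pending.get(); // block only when the result is actually needed
}
{code}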






[jira] [Updated] (HDFS-10524) libhdfs++: Implement chmod and chown

2016-06-17 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-10524:
-
Attachment: HDFS-10524.HDFS-8707.001.patch

Here is a new patch. Please review.

> libhdfs++: Implement chmod and chown
> 
>
> Key: HDFS-10524
> URL: https://issues.apache.org/jira/browse/HDFS-10524
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-10524.HDFS-8707.000.patch, 
> HDFS-10524.HDFS-8707.001.patch
>
>







[jira] [Commented] (HDFS-6434) Default permission for creating file should be 644 for WebHdfs/HttpFS

2016-06-17 Thread Wellington Chevreuil (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335984#comment-15335984
 ] 

Wellington Chevreuil commented on HDFS-6434:


Hi [~andrew.wang], thanks for the comments. Regarding the umask and 
FsPermission methods suggestion, this was discussed on 
[HDFS-10488|https://issues.apache.org/jira/browse/HDFS-10488], where the 
original proposal was to apply the same behaviour to WebHDFS.

However, the conclusion was that WebHDFS should not apply any umask; instead, 
the default permissions for files and directories should be fixed as suggested 
here.

> Default permission for creating file should be 644 for WebHdfs/HttpFS
> -
>
> Key: HDFS-6434
> URL: https://issues.apache.org/jira/browse/HDFS-6434
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Juan Yu
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-6434.002.patch, HDFS-6434.patch
>
>
> Creating a file via WebHdfs without specifying a permission results in the 
> file being created with permission 755; it should be 644.
> WebHdfs seems to use the same default permission for both files and 
> directories.
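
For reference, a small sketch of the defaults involved; the constants below 
come from org.apache.hadoop.fs.permission.FsPermission, while tying them to 
the WebHDFS create path is this issue's proposal, not something shown here:

{code}
import org.apache.hadoop.fs.permission.FsPermission;

public class DefaultsDemo {
  public static void main(String[] args) {
    // 0666 for files and 0777 for directories, before any umask is applied;
    // the complaint is that WebHDFS hands out the directory-style default
    // (755 after umask) for plain files, where 644 is expected.
    System.out.println(FsPermission.getFileDefault()); // rw-rw-rw-
    System.out.println(FsPermission.getDirDefault());  // rwxrwxrwx
  }
}
{code}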






[jira] [Updated] (HDFS-10460) Erasure Coding: Recompute block checksum for a particular range less than file size on the fly by reconstructing missed block

2016-06-17 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-10460:

Attachment: HDFS-10460-00.patch

> Erasure Coding: Recompute block checksum for a particular range less than 
> file size on the fly by reconstructing missed block
> -
>
> Key: HDFS-10460
> URL: https://issues.apache.org/jira/browse/HDFS-10460
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10460-00.patch
>
>
> This jira is a follow-on task to HDFS-9833, addressing reconstructing a 
> missed block and then recalculating the block checksum for a particular 
> range query.
> For example,
> {code}
> // create a file 'stripedFile1' with fileSize = cellSize * numDataBlocks = 
> 65536 * 6 = 393216
> FileChecksum stripedFileChecksum = getFileChecksum(stripedFile1, 10, true);
> {code}
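
A minimal sketch of issuing such a range-limited checksum request through the 
public FileSystem API; the path is an assumption for illustration:

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Checksum over only the first 10 bytes of the 393216-byte striped file from
// the example above; with a replica missing, the block must be reconstructed
// on the fly before the range checksum can be computed.
static FileChecksum rangeChecksum(FileSystem fs) throws IOException {
  return fs.getFileChecksum(new Path("/ec/stripedFile1"), 10L);
}
{code}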






[jira] [Updated] (HDFS-10460) Erasure Coding: Recompute block checksum for a particular range less than file size on the fly by reconstructing missed block

2016-06-17 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-10460:

Status: Patch Available  (was: Open)

[~drankye], attached a patch with a few test cases. Kindly review the patch. Thanks!

> Erasure Coding: Recompute block checksum for a particular range less than 
> file size on the fly by reconstructing missed block
> -
>
> Key: HDFS-10460
> URL: https://issues.apache.org/jira/browse/HDFS-10460
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10460-00.patch
>
>
> This jira is a follow-on task to HDFS-9833, addressing reconstructing a 
> missed block and then recalculating the block checksum for a particular 
> range query.
> For example,
> {code}
> // create a file 'stripedFile1' with fileSize = cellSize * numDataBlocks = 
> 65536 * 6 = 393216
> FileChecksum stripedFileChecksum = getFileChecksum(stripedFile1, 10, true);
> {code}






[jira] [Updated] (HDFS-9530) huge Non-DFS Used in hadoop 2.6.2 & 2.7.1

2016-06-17 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9530:
---
Attachment: HDFS-9530-branch-2.7-002.patch
HDFS-9530-03.patch

> huge Non-DFS Used in hadoop 2.6.2 & 2.7.1
> -
>
> Key: HDFS-9530
> URL: https://issues.apache.org/jira/browse/HDFS-9530
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Fei Hui
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9530-01.patch, HDFS-9530-02.patch, 
> HDFS-9530-03.patch, HDFS-9530-branch-2.7-001.patch, 
> HDFS-9530-branch-2.7-002.patch
>
>
> I think there are bugs in HDFS.
> ===
> Here is the config:
> <property>
>   <name>dfs.datanode.data.dir</name>
>   <value>file:///mnt/disk4,file:///mnt/disk1,file:///mnt/disk3,file:///mnt/disk2</value>
> </property>
> Here is the dfsadmin report:
> [hadoop@worker-1 ~]$ hadoop dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
> Configured Capacity: 240769253376 (224.23 GB)
> Present Capacity: 238604832768 (222.22 GB)
> DFS Remaining: 215772954624 (200.95 GB)
> DFS Used: 22831878144 (21.26 GB)
> DFS Used%: 9.57%
> Under replicated blocks: 4
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> -
> Live datanodes (3):
> Name: 10.117.60.59:50010 (worker-2)
> Hostname: worker-2
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 7190958080 (6.70 GB)
> Non DFS Used: 721473536 (688.05 MB)
> DFS Remaining: 72343986176 (67.38 GB)
> DFS Used%: 8.96%
> DFS Remaining%: 90.14%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Wed Dec 09 15:55:02 CST 2015
> Name: 10.168.156.0:50010 (worker-3)
> Hostname: worker-3
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 7219073024 (6.72 GB)
> Non DFS Used: 721473536 (688.05 MB)
> DFS Remaining: 72315871232 (67.35 GB)
> DFS Used%: 9.00%
> DFS Remaining%: 90.11%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Wed Dec 09 15:55:03 CST 2015
> Name: 10.117.15.38:50010 (worker-1)
> Hostname: worker-1
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 8421847040 (7.84 GB)
> Non DFS Used: 721473536 (688.05 MB)
> DFS Remaining: 71113097216 (66.23 GB)
> DFS Used%: 10.49%
> DFS Remaining%: 88.61%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Wed Dec 09 15:55:03 CST 2015
> 
> When running a Hive job, the dfsadmin report is as follows:
> [hadoop@worker-1 ~]$ hadoop dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
> Configured Capacity: 240769253376 (224.23 GB)
> Present Capacity: 108266011136 (100.83 GB)
> DFS Remaining: 80078416384 (74.58 GB)
> DFS Used: 28187594752 (26.25 GB)
> DFS Used%: 26.04%
> Under replicated blocks: 7
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> -
> Live datanodes (3):
> Name: 10.117.60.59:50010 (worker-2)
> Hostname: worker-2
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 9015627776 (8.40 GB)
> Non DFS Used: 44303742464 (41.26 GB)
> DFS Remaining: 26937047552 (25.09 GB)
> DFS Used%: 11.23%
> DFS Remaining%: 33.56%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 693
> Last contact: Wed Dec 09 15:37:35 CST 2015
> Name: 10.168.156.0:50010 (worker-3)
> Hostname: worker-3
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 9163116544 (8.53 GB)
> Non DFS Used: 47895897600 (44.61 GB)
> DFS Remaining: 23197403648 (21.60 GB)
> DFS Used%: 11.42%
> DFS Remaining%: 28.90%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 750
> Last contact: Wed Dec 09 15:37:36 CST 2015
> Name: 10.117.15.38:50010 (worker-1)
> Hostname: worker-1
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 10008850432 (9.32 GB)
> Non DFS Used: 40303602176 (37.54 GB)
> DFS Remaining: 29943965184 (27.89 GB)
> DFS Used%: 12.47%
> DFS Remaining%: 37.31%
> Configured Cache Capacit
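
The per-node figures above follow the identity Non DFS Used = Configured 
Capacity - DFS Used - DFS Remaining; a quick check with worker-2's idle 
numbers from the first report:

{code}
public class NonDfsUsedCheck {
  public static void main(String[] args) {
    // Non DFS Used = Configured Capacity - DFS Used - DFS Remaining
    long nonDfsUsed = 80256417792L - 7190958080L - 72343986176L;
    System.out.println(nonDfsUsed); // 721473536 = 688.05 MB, as reported
  }
}
{code}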

[jira] [Commented] (HDFS-9530) huge Non-DFS Used in hadoop 2.6.2 & 2.7.1

2016-06-17 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336125#comment-15336125
 ] 

Brahma Reddy Battula commented on HDFS-9530:


[~arpitagarwal] thanks for the catch.

bq.The failMirrorConnection hook looks unused. Also the DN should be updated to 
call the hook.

Yes, I missed this going from HDFS-9530-01 to HDFS-9530-02. Now uploaded.

> huge Non-DFS Used in hadoop 2.6.2 & 2.7.1
> -
>
> Key: HDFS-9530
> URL: https://issues.apache.org/jira/browse/HDFS-9530
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Fei Hui
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9530-01.patch, HDFS-9530-02.patch, 
> HDFS-9530-03.patch, HDFS-9530-branch-2.7-001.patch, 
> HDFS-9530-branch-2.7-002.patch
>
>
> I think there are bugs in HDFS.
> ===
> Here is the config:
> <property>
>   <name>dfs.datanode.data.dir</name>
>   <value>file:///mnt/disk4,file:///mnt/disk1,file:///mnt/disk3,file:///mnt/disk2</value>
> </property>
> Here is the dfsadmin report:
> [hadoop@worker-1 ~]$ hadoop dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
> Configured Capacity: 240769253376 (224.23 GB)
> Present Capacity: 238604832768 (222.22 GB)
> DFS Remaining: 215772954624 (200.95 GB)
> DFS Used: 22831878144 (21.26 GB)
> DFS Used%: 9.57%
> Under replicated blocks: 4
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> -
> Live datanodes (3):
> Name: 10.117.60.59:50010 (worker-2)
> Hostname: worker-2
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 7190958080 (6.70 GB)
> Non DFS Used: 721473536 (688.05 MB)
> DFS Remaining: 72343986176 (67.38 GB)
> DFS Used%: 8.96%
> DFS Remaining%: 90.14%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Wed Dec 09 15:55:02 CST 2015
> Name: 10.168.156.0:50010 (worker-3)
> Hostname: worker-3
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 7219073024 (6.72 GB)
> Non DFS Used: 721473536 (688.05 MB)
> DFS Remaining: 72315871232 (67.35 GB)
> DFS Used%: 9.00%
> DFS Remaining%: 90.11%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Wed Dec 09 15:55:03 CST 2015
> Name: 10.117.15.38:50010 (worker-1)
> Hostname: worker-1
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 8421847040 (7.84 GB)
> Non DFS Used: 721473536 (688.05 MB)
> DFS Remaining: 71113097216 (66.23 GB)
> DFS Used%: 10.49%
> DFS Remaining%: 88.61%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Wed Dec 09 15:55:03 CST 2015
> 
> When running a Hive job, the dfsadmin report is as follows:
> [hadoop@worker-1 ~]$ hadoop dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
> Configured Capacity: 240769253376 (224.23 GB)
> Present Capacity: 108266011136 (100.83 GB)
> DFS Remaining: 80078416384 (74.58 GB)
> DFS Used: 28187594752 (26.25 GB)
> DFS Used%: 26.04%
> Under replicated blocks: 7
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> -
> Live datanodes (3):
> Name: 10.117.60.59:50010 (worker-2)
> Hostname: worker-2
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 9015627776 (8.40 GB)
> Non DFS Used: 44303742464 (41.26 GB)
> DFS Remaining: 26937047552 (25.09 GB)
> DFS Used%: 11.23%
> DFS Remaining%: 33.56%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 693
> Last contact: Wed Dec 09 15:37:35 CST 2015
> Name: 10.168.156.0:50010 (worker-3)
> Hostname: worker-3
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 9163116544 (8.53 GB)
> Non DFS Used: 47895897600 (44.61 GB)
> DFS Remaining: 23197403648 (21.60 GB)
> DFS Used%: 11.42%
> DFS Remaining%: 28.90%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 750
> Last contact: Wed Dec 09 15:37:36 CST 2015
> Name: 10.117.15.38:50010 (worker-1)
> Hostname: worker-1
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS 

[jira] [Commented] (HDFS-10524) libhdfs++: Implement chmod and chown

2016-06-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336139#comment-15336139
 ] 

Hadoop QA commented on HDFS-10524:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 41s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
14s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 16s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 14s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 44s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 47s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 31s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811358/HDFS-10524.HDFS-8707.001.patch
 |
| JIRA Issue | HDFS-10524 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 07f0084c5524 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 6c264b1 |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| JDK v1.7.0_101  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15806/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15806/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Implement chmod and chown
> 
>
> Key: HDFS-10524
> URL: https://issues.apache.org/jira/browse/HDFS-10524
> Project: Hadoop HDFS
>  Issu

[jira] [Updated] (HDFS-9530) huge Non-DFS Used in hadoop 2.6.2 & 2.7.1

2016-06-17 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9530:
---
Status: Patch Available  (was: Reopened)

> huge Non-DFS Used in hadoop 2.6.2 & 2.7.1
> -
>
> Key: HDFS-9530
> URL: https://issues.apache.org/jira/browse/HDFS-9530
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Fei Hui
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9530-01.patch, HDFS-9530-02.patch, 
> HDFS-9530-03.patch, HDFS-9530-branch-2.7-001.patch, 
> HDFS-9530-branch-2.7-002.patch
>
>
> I think there are bugs in HDFS
> ===
> Here is the config:
> <property>
>   <name>dfs.datanode.data.dir</name>
>   <value>file:///mnt/disk4,file:///mnt/disk1,file:///mnt/disk3,file:///mnt/disk2</value>
> </property>
> Here is the dfsadmin report:
> [hadoop@worker-1 ~]$ hadoop dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
> Configured Capacity: 240769253376 (224.23 GB)
> Present Capacity: 238604832768 (222.22 GB)
> DFS Remaining: 215772954624 (200.95 GB)
> DFS Used: 22831878144 (21.26 GB)
> DFS Used%: 9.57%
> Under replicated blocks: 4
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> -
> Live datanodes (3):
> Name: 10.117.60.59:50010 (worker-2)
> Hostname: worker-2
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 7190958080 (6.70 GB)
> Non DFS Used: 721473536 (688.05 MB)
> DFS Remaining: 72343986176 (67.38 GB)
> DFS Used%: 8.96%
> DFS Remaining%: 90.14%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Wed Dec 09 15:55:02 CST 2015
> Name: 10.168.156.0:50010 (worker-3)
> Hostname: worker-3
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 7219073024 (6.72 GB)
> Non DFS Used: 721473536 (688.05 MB)
> DFS Remaining: 72315871232 (67.35 GB)
> DFS Used%: 9.00%
> DFS Remaining%: 90.11%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Wed Dec 09 15:55:03 CST 2015
> Name: 10.117.15.38:50010 (worker-1)
> Hostname: worker-1
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 8421847040 (7.84 GB)
> Non DFS Used: 721473536 (688.05 MB)
> DFS Remaining: 71113097216 (66.23 GB)
> DFS Used%: 10.49%
> DFS Remaining%: 88.61%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Wed Dec 09 15:55:03 CST 2015
> 
> When running a Hive job, the dfsadmin report is as follows:
> [hadoop@worker-1 ~]$ hadoop dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
> Configured Capacity: 240769253376 (224.23 GB)
> Present Capacity: 108266011136 (100.83 GB)
> DFS Remaining: 80078416384 (74.58 GB)
> DFS Used: 28187594752 (26.25 GB)
> DFS Used%: 26.04%
> Under replicated blocks: 7
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> -
> Live datanodes (3):
> Name: 10.117.60.59:50010 (worker-2)
> Hostname: worker-2
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 9015627776 (8.40 GB)
> Non DFS Used: 44303742464 (41.26 GB)
> DFS Remaining: 26937047552 (25.09 GB)
> DFS Used%: 11.23%
> DFS Remaining%: 33.56%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 693
> Last contact: Wed Dec 09 15:37:35 CST 2015
> Name: 10.168.156.0:50010 (worker-3)
> Hostname: worker-3
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 9163116544 (8.53 GB)
> Non DFS Used: 47895897600 (44.61 GB)
> DFS Remaining: 23197403648 (21.60 GB)
> DFS Used%: 11.42%
> DFS Remaining%: 28.90%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 750
> Last contact: Wed Dec 09 15:37:36 CST 2015
> Name: 10.117.15.38:50010 (worker-1)
> Hostname: worker-1
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 10008850432 (9.32 GB)
> Non DFS Used: 40303602176 (37.54 GB)
> DFS Remaining: 29943965184 (27.89 GB)
> DFS Used%: 12.47%
> DFS Remaining%: 37.31%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Ca

[jira] [Updated] (HDFS-9530) huge Non-DFS Used in hadoop 2.6.2 & 2.7.1

2016-06-17 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-9530:
---
Target Version/s: 2.7.3

> huge Non-DFS Used in hadoop 2.6.2 & 2.7.1
> -
>
> Key: HDFS-9530
> URL: https://issues.apache.org/jira/browse/HDFS-9530
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Fei Hui
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9530-01.patch, HDFS-9530-02.patch, 
> HDFS-9530-03.patch, HDFS-9530-branch-2.7-001.patch, 
> HDFS-9530-branch-2.7-002.patch
>
>
> I think there are bugs in HDFS
> ===
> Here is the config:
> <property>
>   <name>dfs.datanode.data.dir</name>
>   <value>file:///mnt/disk4,file:///mnt/disk1,file:///mnt/disk3,file:///mnt/disk2</value>
> </property>
> Here is the dfsadmin report:
> [hadoop@worker-1 ~]$ hadoop dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
> Configured Capacity: 240769253376 (224.23 GB)
> Present Capacity: 238604832768 (222.22 GB)
> DFS Remaining: 215772954624 (200.95 GB)
> DFS Used: 22831878144 (21.26 GB)
> DFS Used%: 9.57%
> Under replicated blocks: 4
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> -
> Live datanodes (3):
> Name: 10.117.60.59:50010 (worker-2)
> Hostname: worker-2
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 7190958080 (6.70 GB)
> Non DFS Used: 721473536 (688.05 MB)
> DFS Remaining: 72343986176 (67.38 GB)
> DFS Used%: 8.96%
> DFS Remaining%: 90.14%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Wed Dec 09 15:55:02 CST 2015
> Name: 10.168.156.0:50010 (worker-3)
> Hostname: worker-3
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 7219073024 (6.72 GB)
> Non DFS Used: 721473536 (688.05 MB)
> DFS Remaining: 72315871232 (67.35 GB)
> DFS Used%: 9.00%
> DFS Remaining%: 90.11%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Wed Dec 09 15:55:03 CST 2015
> Name: 10.117.15.38:50010 (worker-1)
> Hostname: worker-1
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 8421847040 (7.84 GB)
> Non DFS Used: 721473536 (688.05 MB)
> DFS Remaining: 71113097216 (66.23 GB)
> DFS Used%: 10.49%
> DFS Remaining%: 88.61%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Wed Dec 09 15:55:03 CST 2015
> 
> When running a Hive job, the dfsadmin report is as follows:
> [hadoop@worker-1 ~]$ hadoop dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
> Configured Capacity: 240769253376 (224.23 GB)
> Present Capacity: 108266011136 (100.83 GB)
> DFS Remaining: 80078416384 (74.58 GB)
> DFS Used: 28187594752 (26.25 GB)
> DFS Used%: 26.04%
> Under replicated blocks: 7
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> -
> Live datanodes (3):
> Name: 10.117.60.59:50010 (worker-2)
> Hostname: worker-2
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 9015627776 (8.40 GB)
> Non DFS Used: 44303742464 (41.26 GB)
> DFS Remaining: 26937047552 (25.09 GB)
> DFS Used%: 11.23%
> DFS Remaining%: 33.56%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 693
> Last contact: Wed Dec 09 15:37:35 CST 2015
> Name: 10.168.156.0:50010 (worker-3)
> Hostname: worker-3
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 9163116544 (8.53 GB)
> Non DFS Used: 47895897600 (44.61 GB)
> DFS Remaining: 23197403648 (21.60 GB)
> DFS Used%: 11.42%
> DFS Remaining%: 28.90%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 750
> Last contact: Wed Dec 09 15:37:36 CST 2015
> Name: 10.117.15.38:50010 (worker-1)
> Hostname: worker-1
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 10008850432 (9.32 GB)
> Non DFS Used: 40303602176 (37.54 GB)
> DFS Remaining: 29943965184 (27.89 GB)
> DFS Used%: 12.47%
> DFS Remaining%: 37.31%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 

[jira] [Commented] (HDFS-9924) [umbrella] Nonblocking HDFS Access

2016-06-17 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336228#comment-15336228
 ] 

stack commented on HDFS-9924:
-

[~steve_l] Thanks for the context. We need to make sure this use case and its 
variants are talked up loudly over in HADOOP-12910.

> [umbrella] Nonblocking HDFS Access
> --
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: Async-HDFS-Performance-Report.pdf, AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Nonblocking HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support nonblocking calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.
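As a rough illustration of the proposed usage, a client could issue many calls 
without blocking and collect the Futures afterwards. This is a minimal sketch: 
AsyncDistributedFileSystem and its Future-returning rename come from the posted 
patches, but the exact names are illustrative since the API is still being 
reworked.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Future;

import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.Path;

public class AsyncRenameExample {
  // Issue many renames without blocking, then wait for all of them at once.
  public static void renameAll(AsyncDistributedFileSystem adfs,
                               Path[] srcs, Path[] dsts) throws Exception {
    List<Future<Void>> futures = new ArrayList<>();
    for (int i = 0; i < srcs.length; i++) {
      // Returns immediately; the RPC proceeds in the background.
      futures.add(adfs.rename(srcs[i], dsts[i], Options.Rename.NONE));
    }
    for (Future<Void> f : futures) {
      f.get();  // Blocks only here, after all calls are in flight.
    }
  }
}
{code}

The loop issuing the calls never blocks; only the final collection loop waits, 
so N independent calls overlap instead of running serially in one thread.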



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10460) Erasure Coding: Recompute block checksum for a particular range less than file size on the fly by reconstructing missed block

2016-06-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336271#comment-15336271
 ] 

Hadoop QA commented on HDFS-10460:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 44s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 29s 
{color} | {color:red} hadoop-hdfs-project: The patch generated 4 new + 114 
unchanged - 0 fixed = 118 total (was 114) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s 
{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 51s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
30s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 1s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.TestAsyncHDFSWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811364/HDFS-10460-00.patch |
| JIRA Issue | HDFS-10460 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 3018e1ca8787 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 09e82ac |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15807/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15807/artifact/patchprocess/patch-unit-had

[jira] [Commented] (HDFS-10474) hftp copy fails when file name with Chinese+special char in branch-2

2016-06-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336243#comment-15336243
 ] 

Hadoop QA commented on HDFS-10474:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
30s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 12s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
22s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 46s 
{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} branch-2.7 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 40s 
{color} | {color:red} hadoop-common-project/hadoop-common in branch-2.7 has 3 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 3s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 57s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 4037 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 2m 14s 
{color} | {color:red} The patch 138 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 0s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 20s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 34s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 56s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_101. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 34s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_101. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | 

[jira] [Created] (HDFS-10541) When no actions in plan, error message says "Plan was generated more than 24 hours ago"

2016-06-17 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-10541:


 Summary: When no actions in plan, error message says "Plan was 
generated more than 24 hours ago"
 Key: HDFS-10541
 URL: https://issues.apache.org/jira/browse/HDFS-10541
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-1312
Reporter: Lei (Eddy) Xu
Priority: Minor


The message is misleading. Instead, it should make it clear that there are no 
steps (or no actions) to take in this plan - and it should probably not error 
out. A sketch of one possible check follows the plan below.

{code}
16/06/16 14:56:53 INFO command.Command: Executing "execute plan" command
Plan was generated more than 24 hours ago.
at 
org.apache.hadoop.hdfs.server.datanode.DiskBalancer.verifyTimeStamp(DiskBalancer.java:387)
at 
org.apache.hadoop.hdfs.server.datanode.DiskBalancer.verifyPlan(DiskBalancer.java:315)
at 
org.apache.hadoop.hdfs.server.datanode.DiskBalancer.submitPlan(DiskBalancer.java:173)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.submitDiskBalancerPlan(DataNode.java:3059)
at 
org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.submitDiskBalancerPlan(ClientDatanodeProtocolServerSideTranslatorPB.java:299)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17509)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
{code}

This happens when a plan looks like the following one:
{code}
{"volumeSetPlans":[],"nodeName":"a.b.c","nodeUUID":null,"port":20001,"timeStamp":0}
{code}
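
A hedged sketch of one possible fix: validate the plan's contents before its 
age, so an empty plan yields an accurate error instead of tripping the 
timestamp check on the plan's timeStamp of 0. NodePlan, verifyTimeStamp and 
the Result value mirror the stack trace above, but the names are illustrative, 
not the actual DiskBalancer code.

{code}
private void verifyPlanHasWork(NodePlan plan) throws DiskBalancerException {
  // An empty volumeSetPlans list means there is nothing to execute; say so
  // directly instead of falling through to the timestamp check.
  if (plan.getVolumeSetPlans() == null || plan.getVolumeSetPlans().isEmpty()) {
    throw new DiskBalancerException("Plan contains no steps to execute.",
        DiskBalancerException.Result.INVALID_PLAN);
  }
  verifyTimeStamp(plan);  // check the 24-hour window only when there is work
}
{code}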



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9924) [umbrella] Nonblocking HDFS Access

2016-06-17 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336343#comment-15336343
 ] 

Jitendra Nath Pandey commented on HDFS-9924:


  The overhead of a feature branch is not just the git work; it also includes 
reviewing, at merge time, code that is already in trunk. Undoing and redoing 
that work is redundant. Also, we need to periodically merge from trunk to the 
feature branch and resolve conflicts, if any. Therefore, the smaller the delta 
in the feature branch, the easier it is to manage.
  The feature branch will still be useful for the continued work on the API. 
HDFS-10538 reverts AsyncDFS; we should put that reverted code on the feature 
branch, and any further API development can continue there. That way, the 
feature branch will have a very small delta, and the desired goal of moving API 
development to the feature branch will be achieved. At this point, committing 
HDFS-10538 is a much simpler effort to achieve that. I can quickly commit 
HDFS-10538, if you agree.

bq. Benefits of AsyncDFS to a test package
This is equivalent to removing it, but it helps us preserve the unit tests that 
make it easier to maintain the rest of the code. 



> [umbrella] Nonblocking HDFS Access
> --
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: Async-HDFS-Performance-Report.pdf, AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Nonblocking HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support nonblocking calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10440) Improve DataNode web UI

2016-06-17 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336351#comment-15336351
 ] 

Kihwal Lee commented on HDFS-10440:
---

bq. Address information is redundant, but better than breaking compatibility.
Kihwal Lee, what you think?
I think that's safer and preferred.

Looking at the patch, the organization looks good. Is the last block report 
time going to be in the local time of the browser?

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, datanode_2nns.html.002.jpg, 
> datanode_html.001.jpg, datanode_loading_err.002.jpg, 
> datanode_utilities.001.jpg, datanode_utilities.002.jpg, dn_web_ui.003.jpg, 
> dn_web_ui_mockup.jpg, nn_dfs_storage_types.jpg
>
>
> At present, datanode web UI doesn't have much information except for node 
> name and port. Propose to add more information similar to namenode UI, 
> including, 
> * Static info (version, block pool  and cluster ID)
> * Block pools info (BP IDs, namenode address, actor states)
> * Storage info (Volumes, capacity used, reserved, left)
> * Utilities (logs)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10493) Add links to datanode web UI in namenode datanodes page

2016-06-17 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336357#comment-15336357
 ] 

Kihwal Lee commented on HDFS-10493:
---

+1. The patch looks good.

> Add links to datanode web UI in namenode datanodes page
> ---
>
> Key: HDFS-10493
> URL: https://issues.apache.org/jira/browse/HDFS-10493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10493.001.patch, HDFS-10493.002.patch, 
> NN_DN_Links.jpg, dn_2_alive_1_down.002.jpg, dn_http_addr_links.002.jpg, 
> secure_nn_dn_links.jpg
>
>
> HDFS-10440 makes some improvements to the datanode UI; it will be good to 
> provide links from the namenode's datanodes page to each individual datanode's 
> UI, to check more datanode information easily.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10440) Improve DataNode web UI

2016-06-17 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10440:
---
Status: In Progress  (was: Patch Available)

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, datanode_2nns.html.002.jpg, 
> datanode_html.001.jpg, datanode_loading_err.002.jpg, 
> datanode_utilities.001.jpg, datanode_utilities.002.jpg, dn_web_ui.003.jpg, 
> dn_web_ui_mockup.jpg, nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't have much information except for the 
> node name and port. I propose adding more information, similar to the namenode 
> UI, including:
> * Static info (version, block pool  and cluster ID)
> * Block pools info (BP IDs, namenode address, actor states)
> * Storage info (Volumes, capacity used, reserved, left)
> * Utilities (logs)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-06-17 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336361#comment-15336361
 ] 

John Zhuge commented on HDFS-6962:
--

[~cnauroth] I will start working on a fully functional patch.

> ACLs inheritance conflict with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.1.patch
>
>
> In hdfs-site.xml:
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ set default ACLs on this directory rwx access for group readwrite and user 
> toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything is OK!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx means the inheritance mask is rwx, so no masking
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has an rwx ACL but only r-x is 
> effective, because the mask is r-x (mask::r-x), even though the default mask 
> for inheritance is set to default:mask::rwx on /tmp/ACLS/
> 6/ Modify hdfs-site.xml and restart the namenode
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group and other -- except the POSIX 
> owner) with the group part of the dfs.umaskmode property when creating a 
> directory with inherited ACLs.
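
The masking arithmetic behind the observed results can be modeled as follows. 
This is a minimal sketch of the symptom only, assuming the child's effective 
mask is the parent's default mask ANDed with the group bits derived from 
dfs.umaskmode; it is not the NameNode's actual ACL code.

{code}
public class UmaskAclSketch {
  // defaultMask and the return value are rwx bit triples (0-7).
  static int effectiveMask(int defaultMask, int umask) {
    int modeFromUmask = 0777 & ~umask;        // e.g. umask 027 -> mode 750
    int groupBits = (modeFromUmask >> 3) & 7; // e.g. 750 -> 5 (r-x)
    return defaultMask & groupBits;
  }

  public static void main(String[] args) {
    // rwx (7) & r-x (5) = r-x (5), matching mask::r-x in step 5.
    System.out.println(Integer.toOctalString(effectiveMask(07, 0027)));
    // rwx (7) & rw- (6) = rw- (6), matching mask::rw- in step 8.
    System.out.println(Integer.toOctalString(effectiveMask(07, 0010)));
  }
}
{code}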



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10493) Add links to datanode web UI in namenode datanodes page

2016-06-17 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10493:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed to trunk, branch-2 and branch-2.8. Thanks for the patch, 
[~cheersyang].

> Add links to datanode web UI in namenode datanodes page
> ---
>
> Key: HDFS-10493
> URL: https://issues.apache.org/jira/browse/HDFS-10493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: 2.8.0
>
> Attachments: HDFS-10493.001.patch, HDFS-10493.002.patch, 
> NN_DN_Links.jpg, dn_2_alive_1_down.002.jpg, dn_http_addr_links.002.jpg, 
> secure_nn_dn_links.jpg
>
>
> HDFS-10440 makes some improvements to the datanode UI; it will be good to 
> provide links from the namenode's datanodes page to each individual datanode's 
> UI, to check more datanode information easily.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9530) huge Non-DFS Used in hadoop 2.6.2 & 2.7.1

2016-06-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336382#comment-15336382
 ] 

Hadoop QA commented on HDFS-9530:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
20s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 37s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
36s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s 
{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
14s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 55s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 30s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 
new + 188 unchanged - 2 fixed = 190 total (was 190) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1967 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 59s 
{color} | {color:red} The patch 99 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 43s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 49s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 10s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_101. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s 
{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 140m 28s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.namenode.TestNNThroughputBenchmark |
|   | hadoop.hdfs.TestPread |
|   | hadoop.hdfs.web.TestWebHdfsTokens |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
| JDK v1.7.0_101 Failed junit tests | 
hadoop.hdfs.serve

[jira] [Comment Edited] (HDFS-10440) Improve DataNode web UI

2016-06-17 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336351#comment-15336351
 ] 

Kihwal Lee edited comment on HDFS-10440 at 6/17/16 4:33 PM:


{quote}
Address information is redundant, but better than breaking compatibility.
Kihwal Lee, what you think?
{quote}
I think that's safer and preferred.

Looking at the patch, the organization looks good. Is the last block report 
time going to be in the local time of the browser?


was (Author: kihwal):
bq. Address information is redundant, but better than breaking compatibility.
Kihwal Lee, what you think?
I think that's safer and preferred.

Looking at the patch, the organization looks good. Is the last block report 
time going to be in the local time of the browser?

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, datanode_2nns.html.002.jpg, 
> datanode_html.001.jpg, datanode_loading_err.002.jpg, 
> datanode_utilities.001.jpg, datanode_utilities.002.jpg, dn_web_ui.003.jpg, 
> dn_web_ui_mockup.jpg, nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't have much information except for the 
> node name and port. I propose adding more information, similar to the namenode 
> UI, including:
> * Static info (version, block pool  and cluster ID)
> * Block pools info (BP IDs, namenode address, actor states)
> * Storage info (Volumes, capacity used, reserved, left)
> * Utilities (logs)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10493) Add links to datanode web UI in namenode datanodes page

2016-06-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336425#comment-15336425
 ] 

Hudson commented on HDFS-10493:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #9979 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9979/])
HDFS-10493. Add links to datanode web UI in namenode datanodes page. (kihwal: 
rev 280069510b71d81a150b2b28d9fa987891d82774)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js


> Add links to datanode web UI in namenode datanodes page
> ---
>
> Key: HDFS-10493
> URL: https://issues.apache.org/jira/browse/HDFS-10493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: 2.8.0
>
> Attachments: HDFS-10493.001.patch, HDFS-10493.002.patch, 
> NN_DN_Links.jpg, dn_2_alive_1_down.002.jpg, dn_http_addr_links.002.jpg, 
> secure_nn_dn_links.jpg
>
>
> HDFS-10440 makes some improvements to the datanode UI; it will be good to 
> provide links from the namenode's datanodes page to each individual datanode's 
> UI, to check more datanode information easily.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10440) Improve DataNode web UI

2016-06-17 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10440:
---
Attachment: HDFS-10440.005.patch

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, HDFS-10440.005.patch, 
> datanode_2nns.html.002.jpg, datanode_html.001.jpg, 
> datanode_loading_err.002.jpg, datanode_utilities.001.jpg, 
> datanode_utilities.002.jpg, dn_web_ui.003.jpg, dn_web_ui_mockup.jpg, 
> nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't have much information except for the 
> node name and port. I propose adding more information, similar to the namenode 
> UI, including:
> * Static info (version, block pool  and cluster ID)
> * Block pools info (BP IDs, namenode address, actor states)
> * Storage info (Volumes, capacity used, reserved, left)
> * Utilities (logs)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10440) Improve DataNode web UI

2016-06-17 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336442#comment-15336442
 ] 

Weiwei Yang commented on HDFS-10440:


Hello [~kihwal]

JMX returns the time elapsed since the last block report in milliseconds, and 
the JS then calculates the relative time in the browser. So even if the clocks 
are not in sync between the datanode and the browser, it won't be an issue.
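
For illustration, the datanode side of that pattern might look like the sketch 
below. The class and method names are hypothetical, not the actual DataNode 
MXBean.

{code}
public class BlockReportInfo {
  private volatile long lastBlockReportMonotonicMs = monotonicNowMs();

  void onBlockReportSent() {
    lastBlockReportMonotonicMs = monotonicNowMs();
  }

  /** Exposed via JMX: milliseconds elapsed since the last block report. */
  public long getLastBlockReportElapsedMs() {
    return monotonicNowMs() - lastBlockReportMonotonicMs;
  }

  private static long monotonicNowMs() {
    // Monotonic source, unaffected by wall-clock adjustments on the host.
    return System.nanoTime() / 1_000_000L;
  }
}
{code}

Since only an elapsed duration crosses the wire, the browser can render "N 
minutes ago" against its own clock regardless of the datanode's wall-clock time.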

I just uploaded the v5 patch, which includes the following updates:
* Fix checkstyle and findbugs issues
* Add a test for the new JMX entry



> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, HDFS-10440.005.patch, 
> datanode_2nns.html.002.jpg, datanode_html.001.jpg, 
> datanode_loading_err.002.jpg, datanode_utilities.001.jpg, 
> datanode_utilities.002.jpg, dn_web_ui.003.jpg, dn_web_ui_mockup.jpg, 
> nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't have much information except for the 
> node name and port. I propose adding more information, similar to the namenode 
> UI, including:
> * Static info (version, block pool  and cluster ID)
> * Block pools info (BP IDs, namenode address, actor states)
> * Storage info (Volumes, capacity used, reserved, left)
> * Utilities (logs)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10440) Improve DataNode web UI

2016-06-17 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10440:
---
Status: Patch Available  (was: In Progress)

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, HDFS-10440.005.patch, 
> datanode_2nns.html.002.jpg, datanode_html.001.jpg, 
> datanode_loading_err.002.jpg, datanode_utilities.001.jpg, 
> datanode_utilities.002.jpg, dn_web_ui.003.jpg, dn_web_ui_mockup.jpg, 
> nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't have much information except for the 
> node name and port. I propose adding more information, similar to the namenode 
> UI, including:
> * Static info (version, block pool  and cluster ID)
> * Block pools info (BP IDs, namenode address, actor states)
> * Storage info (Volumes, capacity used, reserved, left)
> * Utilities (logs)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10493) Add links to datanode web UI in namenode datanodes page

2016-06-17 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336460#comment-15336460
 ] 

Weiwei Yang commented on HDFS-10493:


Thanks a lot [~kihwal] :)

> Add links to datanode web UI in namenode datanodes page
> ---
>
> Key: HDFS-10493
> URL: https://issues.apache.org/jira/browse/HDFS-10493
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Fix For: 2.8.0
>
> Attachments: HDFS-10493.001.patch, HDFS-10493.002.patch, 
> NN_DN_Links.jpg, dn_2_alive_1_down.002.jpg, dn_http_addr_links.002.jpg, 
> secure_nn_dn_links.jpg
>
>
> HDFS-10440 makes some improvements to the datanode UI; it will be good to 
> provide links from the namenode's datanodes page to each individual datanode's 
> UI, to check more datanode information easily.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10542) Failures in mvn install

2016-06-17 Thread Pankaj Maurya (JIRA)
Pankaj Maurya created HDFS-10542:


 Summary: Failures in mvn install
 Key: HDFS-10542
 URL: https://issues.apache.org/jira/browse/HDFS-10542
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.2
 Environment: Ubuntu 14
Reporter: Pankaj Maurya
Priority: Trivial


Test failures due to the webapps/test directory missing during 'mvn install'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10542) Failures in mvn install

2016-06-17 Thread Pankaj Maurya (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Maurya updated HDFS-10542:
-
Attachment: apache-hadoop-hdfs-10542.txt

Full logs attached.

> Failures in mvn install
> ---
>
> Key: HDFS-10542
> URL: https://issues.apache.org/jira/browse/HDFS-10542
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.2
> Environment: Ubuntu 14
>Reporter: Pankaj Maurya
>Priority: Trivial
> Attachments: apache-hadoop-hdfs-10542.txt
>
>
> Test failures due to the webapps/test directory missing during 'mvn install'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-1312) Re-balance disks within a Datanode

2016-06-17 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-1312:
---
Status: Patch Available  (was: Reopened)

> Re-balance disks within a Datanode
> --
>
> Key: HDFS-1312
> URL: https://issues.apache.org/jira/browse/HDFS-1312
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Travis Crawford
>Assignee: Anu Engineer
> Attachments: Architecture_and_test_update.pdf, 
> Architecture_and_testplan.pdf, HDFS-1312.001.patch, disk-balancer-proposal.pdf
>
>
> Filing this issue in response to ``full disk woes`` on hdfs-user.
> Datanodes fill their storage directories unevenly, leading to situations 
> where certain disks are full while others are significantly less used. Users 
> at many different sites have experienced this issue, and HDFS administrators 
> are taking steps like:
> - Manually rebalancing blocks in storage directories
> - Decommissioning nodes & later re-adding them
> There's a tradeoff between making use of all available spindles and filling 
> disks at roughly the same rate. Possible solutions include:
> - Weighting less-used disks heavier when placing new blocks on the datanode 
> (sketched after this description). In write-heavy environments this will still 
> make use of all spindles, equalizing disk use over time.
> - Rebalancing blocks locally. This would help equalize disk use as disks are 
> added/replaced in older cluster nodes.
> Datanodes should actively manage their local disk so operator intervention is 
> not needed.
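
A rough sketch of the first idea: choose a volume for a new block with 
probability proportional to its free space. The Volume interface and chooser 
here are hypothetical, for illustration only, not the DataNode's actual 
volume-choosing policy.

{code}
import java.util.List;
import java.util.Random;

public class FreeSpaceWeightedChooser {
  interface Volume { long getAvailable(); }

  private final Random random = new Random();

  Volume choose(List<? extends Volume> volumes) {
    long totalFree = 0;
    for (Volume v : volumes) {
      totalFree += v.getAvailable();
    }
    // Sample a point in [0, totalFree) and find the volume it lands in;
    // emptier disks own larger intervals and are chosen more often.
    long pick = (long) (random.nextDouble() * totalFree);
    for (Volume v : volumes) {
      pick -= v.getAvailable();
      if (pick < 0) {
        return v;
      }
    }
    return volumes.get(volumes.size() - 1);  // fallback for rounding/zero free
  }
}
{code}

Usage then converges over time without moving any existing blocks.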



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10540) The CLI error message for disk balancer is not enabled is not clear.

2016-06-17 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDFS-10540:
---

Assignee: Anu Engineer

> The CLI error message for disk balancer is not enabled is not clear.
> 
>
> Key: HDFS-10540
> URL: https://issues.apache.org/jira/browse/HDFS-10540
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>
> When running the {{hdfs diskbalancer}} against a DN whose disk balancer 
> feature is not enabled, it reports:
> {code}
> $ hdfs diskbalancer -plan 127.0.0.1 -uri hdfs://localhost
> 16/06/16 18:03:29 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Internal error, Unable to create JSON string.
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.getVolumeNames(DiskBalancer.java:260)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getDiskBalancerSetting(DataNode.java:3105)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getDiskBalancerSetting(ClientDatanodeProtocolServerSideTranslatorPB.java:359)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17515)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> Caused by: org.apache.hadoop.hdfs.server.diskbalancer.DiskBalancerException: 
> Disk Balancer is not enabled.
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.checkDiskBalancerEnabled(DiskBalancer.java:293)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.getVolumeNames(DiskBalancer.java:251)
>   ... 11 more
> {code}
> We should not throw the IOE directly at the user, and the message should 
> explicitly explain why the operation failed.
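
A minimal sketch of friendlier CLI handling, assuming the command surfaces the 
remote failure as an IOException chain as in the trace above. The Command 
interface and wording here are hypothetical.

{code}
import java.io.IOException;

public class DiskBalancerCliSketch {
  interface Command { void execute() throws IOException; }

  static int run(Command cmd) {
    try {
      cmd.execute();
      return 0;
    } catch (IOException e) {
      // Walk to the root cause, e.g. "Disk Balancer is not enabled.",
      // and print that instead of dumping the raw stack trace.
      Throwable root = e;
      while (root.getCause() != null) {
        root = root.getCause();
      }
      System.err.println("diskbalancer: " + root.getMessage());
      return 1;
    }
  }
}
{code}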



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10541) When no actions in plan, error message says "Plan was generated more than 24 hours ago"

2016-06-17 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDFS-10541:
---

Assignee: Anu Engineer

> When no actions in plan, error message says "Plan was generated more than 24 
> hours ago"
> ---
>
> Key: HDFS-10541
> URL: https://issues.apache.org/jira/browse/HDFS-10541
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Minor
>
> The message is misleading. Instead, it should make it clear that there are no 
> steps (or no action) to take in this plan - and should probably not error out.
> {code}
> 16/06/16 14:56:53 INFO command.Command: Executing "execute plan" command
> Plan was generated more than 24 hours ago.
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.verifyTimeStamp(DiskBalancer.java:387)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.verifyPlan(DiskBalancer.java:315)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.submitPlan(DiskBalancer.java:173)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.submitDiskBalancerPlan(DataNode.java:3059)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.submitDiskBalancerPlan(ClientDatanodeProtocolServerSideTranslatorPB.java:299)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17509)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> {code}
> This happens when a plan looks like the following one:
> {code}
> {"volumeSetPlans":[],"nodeName":"a.b.c","nodeUUID":null,"port":20001,"timeStamp":0}
> {code}
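In sketch form, the kind of up-front check being asked for (the method name
and result code are assumed, not the actual patch):

{code}
// Reject a plan with no movement steps before timestamp validation runs,
// so the user sees the real problem instead of a misleading error.
private void verifyPlanHasSteps(NodePlan plan) throws DiskBalancerException {
  if (plan.getVolumeSetPlans() == null || plan.getVolumeSetPlans().isEmpty()) {
    throw new DiskBalancerException(
        "Plan has no steps to execute; nothing to do.",
        DiskBalancerException.Result.INVALID_PLAN);
  }
}
{code}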



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10543) hdfsRead read stops at block boundary

2016-06-17 Thread Xiaowei Zhu (JIRA)
Xiaowei Zhu created HDFS-10543:
--

 Summary: hdfsRead read stops at block boundary
 Key: HDFS-10543
 URL: https://issues.apache.org/jira/browse/HDFS-10543
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaowei Zhu


Reproducer:

char *buf2 = new char[file_info->mSize];
memset(buf2, 0, (size_t)file_info->mSize);
int ret = hdfsRead(fs, file, buf2, file_info->mSize);
delete [] buf2;
if (ret != file_info->mSize) {
  std::stringstream ss;
  ss << "tried to read " << file_info->mSize << " bytes. but read "
     << ret << " bytes";
  ReportError(ss.str());
  hdfsCloseFile(fs, file);
  continue;
}

When it runs with a file that is ~1.4GB large, it returns an error like "tried to 
read 146890 bytes. but read 134217728 bytes". The HDFS cluster it runs 
against has a block size of 146890 bytes. So it seems hdfsRead will stop at 
a block boundary. This looks like a regression. We should add a retry to 
continue reading across blocks for files with multiple blocks.
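The fix amounts to the standard read-until-satisfied loop, since a single read
may legally return fewer bytes than requested (for example, at a block
boundary). A minimal Java illustration over a generic stream (the libhdfs
{{hdfsRead}} call would be wrapped the same way):

{code}
import java.io.IOException;
import java.io.InputStream;

// Keep reading until len bytes have arrived or EOF is hit; a short read
// from a single call is not an error by itself.
static int readFully(InputStream in, byte[] buf, int len) throws IOException {
  int total = 0;
  while (total < len) {
    int n = in.read(buf, total, len - total);
    if (n < 0) {
      break; // EOF before len bytes; the caller decides if that is an error
    }
    total += n;
  }
  return total; // number of bytes actually read
}
{code}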



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10526) libhdfs++: Add connect timeouts to async_connect calls

2016-06-17 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336594#comment-15336594
 ] 

James Clampffer commented on HDFS-10526:


I've just skimmed over this quickly (I'm going to go through in more detail in 
the next day or so).  I think it'd be a good idea to have an explicit constant 
in libhdfs++ for the timeout error code so we don't have a mix of libhdfs 
constants and std::errc values.  I'm in the process of finishing up HA namenode 
work and one of the things I'd like to support from the start is faster 
failover on connection timeouts using the 
"dfs.client.failover.connection.retries.on.timeouts" parameter.  Please let me 
know if I'm missing something simple here.

> libhdfs++: Add connect timeouts to async_connect calls
> --
>
> Key: HDFS-10526
> URL: https://issues.apache.org/jira/browse/HDFS-10526
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-10526.HDFS-8707.000.patch, 
> HDFS-10526.HDFS-8707.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10440) Improve DataNode web UI

2016-06-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336593#comment-15336593
 ] 

Hadoop QA commented on HDFS-10440:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 6s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 48s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | hadoop.hdfs.TestDFSClientExcludedNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811406/HDFS-10440.005.patch |
| JIRA Issue | HDFS-10440 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 74359b20fccf 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2800695 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15809/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15809/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15809/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15809/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS

[jira] [Updated] (HDFS-10543) hdfsRead read stops at block boundary

2016-06-17 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-10543:
---
Description: 
Reproducer:

char *buf2 = new char[file_info->mSize];
memset(buf2, 0, (size_t)file_info->mSize);
int ret = hdfsRead(fs, file, buf2, file_info->mSize);
delete [] buf2;
if (ret != file_info->mSize) {
  std::stringstream ss;
  ss << "tried to read " << file_info->mSize << " bytes. but read "
     << ret << " bytes";
  ReportError(ss.str());
  hdfsCloseFile(fs, file);
  continue;
}

When it runs with a file that is ~1.4GB large, it returns an error like "tried to 
read 146890 bytes. but read 134217728 bytes". The HDFS cluster it runs 
against has a block size of 134217728 bytes. So it seems hdfsRead will stop at 
a block boundary. This looks like a regression. We should add a retry to 
continue reading across blocks for files with multiple blocks.

  was:
Reproducer:

char *buf2 = new char[file_info->mSize];
memset(buf2, 0, (size_t)file_info->mSize);
int ret = hdfsRead(fs, file, buf2, file_info->mSize);
delete [] buf2;
if (ret != file_info->mSize) {
  std::stringstream ss;
  ss << "tried to read " << file_info->mSize << " bytes. but read "
     << ret << " bytes";
  ReportError(ss.str());
  hdfsCloseFile(fs, file);
  continue;
}

When it runs with a file that is ~1.4GB large, it returns an error like "tried to 
read 146890 bytes. but read 134217728 bytes". The HDFS cluster it runs 
against has a block size of 146890 bytes. So it seems hdfsRead will stop at 
a block boundary. This looks like a regression. We should add a retry to 
continue reading across blocks for files with multiple blocks.


> hdfsRead read stops at block boundary
> -
>
> Key: HDFS-10543
> URL: https://issues.apache.org/jira/browse/HDFS-10543
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Xiaowei Zhu
>
> Reproducer:
> char *buf2 = new char[file_info->mSize];
> memset(buf2, 0, (size_t)file_info->mSize);
> int ret = hdfsRead(fs, file, buf2, file_info->mSize);
> delete [] buf2;
> if (ret != file_info->mSize) {
>   std::stringstream ss;
>   ss << "tried to read " << file_info->mSize << " bytes. but read "
>      << ret << " bytes";
>   ReportError(ss.str());
>   hdfsCloseFile(fs, file);
>   continue;
> }
> When it runs with a file that is ~1.4GB large, it returns an error like "tried 
> to read 146890 bytes. but read 134217728 bytes". The HDFS cluster it runs 
> against has a block size of 134217728 bytes. So it seems hdfsRead will stop 
> at a block boundary. This looks like a regression. We should add a retry to 
> continue reading across blocks for files with multiple blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10544) Balancer doesn't work with IPFailoverProxyProvider

2016-06-17 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-10544:


 Summary: Balancer doesn't work with IPFailoverProxyProvider
 Key: HDFS-10544
 URL: https://issues.apache.org/jira/browse/HDFS-10544
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Zhe Zhang
Assignee: Zhe Zhang


Right now {{Balancer}} gets the NN URIs through {{DFSUtil#getNameServiceUris}}, 
which returns logical URIs when HA is enabled. If {{IPFailoverProxyProvider}} is 
used, {{Balancer}} will not be able to start.

I think the bug is at {{DFSUtil#getNameServiceUris}}:
{code}
for (String nsId : getNameServiceIds(conf)) {
  if (HAUtil.isHAEnabled(conf, nsId)) {
// Add the logical URI of the nameservice.
try {
  ret.add(new URI(HdfsConstants.HDFS_URI_SCHEME + "://" + nsId));
{code}

The {{if}} clause should also consider whether the {{FailoverProxyProvider}} has 
{{useLogicalURI}} enabled. If not, {{getNameServiceUris}} should try to resolve 
the physical URI for this nsId.
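A rough sketch of the suggested condition (the physical-resolution helper at
the end is hypothetical):

{code}
for (String nsId : getNameServiceIds(conf)) {
  if (HAUtil.isHAEnabled(conf, nsId)) {
    URI logicalUri = new URI(HdfsConstants.HDFS_URI_SCHEME + "://" + nsId);
    if (HAUtil.useLogicalUri(conf, logicalUri)) {
      // Providers such as ConfiguredFailoverProxyProvider expect logical URIs.
      ret.add(logicalUri);
    } else {
      // IPFailoverProxyProvider case: add the physical address instead.
      ret.add(resolvePhysicalUri(conf, nsId)); // hypothetical helper
    }
  }
}
{code}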



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10545) PlanCommand should use -fs instead of -uri to be consistent with other hdfs commands

2016-06-17 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-10545:


 Summary: PlanCommand should use -fs instead of -uri to be 
consistent with other hdfs commands
 Key: HDFS-10545
 URL: https://issues.apache.org/jira/browse/HDFS-10545
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-1312
Reporter: Lei (Eddy) Xu
Priority: Minor


PlanCommand currently uses {{-uri}} to specify the NameNode, while all other 
hdfs commands (e.g., {{hdfs dfsadmin}} and {{hdfs balancer}}) use {{-fs}} 
to specify the NameNode.

It'd be better to use {{-fs}} here.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10545) PlanCommand should use -fs instead of -uri to be consistent with other hdfs commands

2016-06-17 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDFS-10545:
---

Assignee: Anu Engineer

> PlanCommand should use -fs instead of -uri to be consistent with other hdfs 
> commands
> 
>
> Key: HDFS-10545
> URL: https://issues.apache.org/jira/browse/HDFS-10545
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Minor
>
> PlanCommand currently uses {{-uri}} to specify the NameNode, while all other 
> hdfs commands (e.g., {{hdfs dfsadmin}} and {{hdfs balancer}}) use 
> {{-fs}} to specify the NameNode.
> It'd be better to use {{-fs}} here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10544) Balancer doesn't work with IPFailoverProxyProvider

2016-06-17 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10544:
-
Status: Patch Available  (was: Open)

> Balancer doesn't work with IPFailoverProxyProvider
> --
>
> Key: HDFS-10544
> URL: https://issues.apache.org/jira/browse/HDFS-10544
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>
> Right now {{Balancer}} gets the NN URIs through 
> {{DFSUtil#getNameServiceUris}}, which returns logical URIs when HA is enabled. 
> If {{IPFailoverProxyProvider}} is used, {{Balancer}} will not be able to 
> start.
> I think the bug is at {{DFSUtil#getNameServiceUris}}:
> {code}
> for (String nsId : getNameServiceIds(conf)) {
>   if (HAUtil.isHAEnabled(conf, nsId)) {
> // Add the logical URI of the nameservice.
> try {
>   ret.add(new URI(HdfsConstants.HDFS_URI_SCHEME + "://" + nsId));
> {code}
> The {{if}} clause should also consider whether the {{FailoverProxyProvider}} 
> has {{useLogicalURI}} enabled. If not, {{getNameServiceUris}} should try to 
> resolve the physical URI for this nsId.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10544) Balancer doesn't work with IPFailoverProxyProvider

2016-06-17 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10544:
-
Attachment: HDFS-10544.00.patch

Submitting initial patch.

{{HAUtil#useLogicalUri}} can potentially be optimized to avoid creating an 
instance of {{AbstractNNFailoverProxyProvider}}.

> Balancer doesn't work with IPFailoverProxyProvider
> --
>
> Key: HDFS-10544
> URL: https://issues.apache.org/jira/browse/HDFS-10544
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-10544.00.patch
>
>
> Right now {{Balancer}} gets the NN URIs through 
> {{DFSUtil#getNameServiceUris}}, which returns logical URIs when HA is enabled. 
> If {{IPFailoverProxyProvider}} is used, {{Balancer}} will not be able to 
> start.
> I think the bug is at {{DFSUtil#getNameServiceUris}}:
> {code}
> for (String nsId : getNameServiceIds(conf)) {
>   if (HAUtil.isHAEnabled(conf, nsId)) {
> // Add the logical URI of the nameservice.
> try {
>   ret.add(new URI(HdfsConstants.HDFS_URI_SCHEME + "://" + nsId));
> {code}
> The {{if}} clause should also consider whether the {{FailoverProxyProvider}} 
> has {{useLogicalURI}} enabled. If not, {{getNameServiceUris}} should try to 
> resolve the physical URI for this nsId.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7959) WebHdfs logging is missing on Datanode

2016-06-17 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336674#comment-15336674
 ] 

Sangjin Lee commented on HDFS-7959:
---

Thanks [~kihwal] for updating the patch! It looks good to me for the most part. 
I only have a couple of minor questions.

{code}
String host = ((InetSocketAddress) ctx.channel().remoteAddress())
    .getAddress().getHostAddress();
HttpMethod method = req.method();
{code}
Could this throw an exception (esp. host) in some situations? I think it might 
be a good idea to move this into the {{finally}} clause itself and make an 
exception here non-fatal. Thoughts?

{code}
if (resp == null) {
  return "000";
}
{code}
"000" is not a valid HTTP status code. If the response is null for any reason, 
"500" might be as good a choice as any. Also, if we change it to 500, we can 
then simply use {{int}} as the return type of {{getResponseCode()}}.
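In sketch form (the exact Netty accessor depends on the version in use):

{code}
// Fall back to 500 when no response object exists; this avoids the invalid
// "000" placeholder and lets the method return a plain int.
private int getResponseCode() {
  return (resp == null) ? 500 : resp.status().code();
}
{code}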

> WebHdfs logging is missing on Datanode
> --
>
> Key: HDFS-7959
> URL: https://issues.apache.org/jira/browse/HDFS-7959
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7959.1.branch-2.patch, HDFS-7959.1.trunk.patch, 
> HDFS-7959.branch-2.patch, HDFS-7959.patch, HDFS-7959.patch, HDFS-7959.patch, 
> HDFS-7959.trunk.patch
>
>
> After the conversion to netty, webhdfs requests are not logged on datanodes. 
> The existing jetty log only logs the non-webhdfs requests that come through 
> the internal proxy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9924) [umbrella] Nonblocking HDFS Access

2016-06-17 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9924:

Attachment: Async-HDFS-Performance-Report.pdf

> [umbrella] Nonblocking HDFS Access
> --
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: Async-HDFS-Performance-Report.pdf, AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Nonblocking HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support nonblocking calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.
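As a usage sketch of the proposed style (the async handle and method names
here are assumed, not the final API):

{code}
// Issue many independent calls without blocking, then collect the results.
List<Future<Void>> futures = new ArrayList<>();
for (int i = 0; i < src.length; i++) {
  futures.add(asyncFs.rename(src[i], dst[i])); // returns immediately
}
for (Future<Void> f : futures) {
  f.get(); // block only when the result is actually needed
}
{code}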



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9924) [umbrella] Nonblocking HDFS Access

2016-06-17 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9924:

Attachment: (was: Async-HDFS-Performance-Report.pdf)

> [umbrella] Nonblocking HDFS Access
> --
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: Async-HDFS-Performance-Report.pdf, AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Nonblocking HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support nonblocking calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9924) [umbrella] Nonblocking HDFS Access

2016-06-17 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336684#comment-15336684
 ] 

Xiaobing Zhou commented on HDFS-9924:
-

bq. Hardware is listed as "32 X 8 cores Intel(R) Xeon(R) CPU E5-2630 v3 @ 
2.40GHz", could you clarify? 32 CPUs, or this is a typo?
To make it clear, I posted an updated doc. It is:
Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz (32 virtual processors, 2 CPUs, 8 
cores per CPU) with HT (HyperThreading) enabled

{quote}
Considering the best-case speedup ranges from 30-60x, I'm betting the sweet 
spot is closer to 30-60 threads. I'd be interested in seeing e.g. 25, 50, 100, 
250. Expectation to see an upside-down U-shaped curve.
{quote}
I can't agree that the sweet spot is closer to 30-60 threads unless experiments 
demonstrate it.

Regarding the branching, it makes more sense not to keep this work in a 
branch; that makes it easy to try various async API implementations down the 
road. Say someone wants to try deferred results, completable futures, or 
anything else: they will need the work already done here for RPC and retry in 
HA anyway. If this reusable work sits in a separate branch, won't they have to 
deal with a tripartite (or worse) branch merge/sync (trunk/branch-2, HDFS-9924 
and their feature branches)? That makes things more complicated and less 
efficient.

I also don't understand why the resistance to keeping the reusable parts in 
branch-2/trunk is so strong, especially since HDFS-10538 has completely removed 
the API. For all the time we have spent arguing here, I could instead have done 
much of the work to implement the other proposals. Don't we want to move 
quickly and deliver better products than our competitors? I understand it's 
hard to balance the requirements of all parties, but the API is already 
removed, and that was the key point of concern in all the rounds of argument. 
So please kindly consider keeping the reusable parts, for the benefits and 
reasons mentioned above. Thanks.

> [umbrella] Nonblocking HDFS Access
> --
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: Async-HDFS-Performance-Report.pdf, AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Nonblocking HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support nonblocking calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-1312) Re-balance disks within a Datanode

2016-06-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336751#comment-15336751
 ] 

Hadoop QA commented on HDFS-1312:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 1s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 30s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 44s 
{color} | {color:red} hadoop-hdfs-project: The patch generated 19 new + 1000 
unchanged - 2 fixed = 1019 total (was 1002) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
13s {color} | {color:green} The patch generated 0 new + 75 unchanged - 1 fixed 
= 75 total (was 76) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch 10 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s 
{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 24s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 51s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.tools.TestHdfsConfigFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12810943/HDFS-1312.001.patch |
| JIRA Issue | HDFS-1312 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  shellcheck  shelldocs  |
| uname | Linux cf6241f0ffa9 3.13.0-36-lowlaten

[jira] [Commented] (HDFS-10544) Balancer doesn't work with IPFailoverProxyProvider

2016-06-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336752#comment-15336752
 ] 

Hadoop QA commented on HDFS-10544:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 33s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 38s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 38s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 40s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 20s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 40s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 13s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811433/HDFS-10544.00.patch |
| JIRA Issue | HDFS-10544 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 375e2b323f2b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2800695 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15811/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15811/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15811/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15811/artifact/patchprocess/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15811/artifact/patchprocess/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15811/artifact/patchprocess/patch-unit-hadoop-hdfs-project_

[jira] [Commented] (HDFS-10328) Add per-cache-pool default replication num configuration

2016-06-17 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336775#comment-15336775
 ] 

Colin Patrick McCabe commented on HDFS-10328:
-

+1 pending jenkins.  Thanks, [~xupener].

> Add per-cache-pool default replication num configuration
> 
>
> Key: HDFS-10328
> URL: https://issues.apache.org/jira/browse/HDFS-10328
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching
>Reporter: xupeng
>Assignee: xupeng
>Priority: Minor
> Fix For: 2.3.0
>
> Attachments: HDFS-10328.001.patch, HDFS-10328.002.patch, 
> HDFS-10328.003.patch, HDFS-10328.004.patch
>
>
> For now, hdfs cacheadmin cannot set a default replication num for cache 
> directives in the same cache pool; each cache directive added to a pool must 
> set its own replication num individually.
> Consider this situation: we add daily Hive tables into the cache pool "hive". 
> Each time, I have to set the same replication num for every table directive 
> in the pool.
> I think we should enable setting a default replication num for a cache pool, 
> so that every cache directive in the pool inherits the replication 
> configuration from the pool. A cache directive could still override the 
> replication configuration explicitly via the "add & modify directive 
> -replication" command from the CLI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-1312) Re-balance disks within a Datanode

2016-06-17 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336789#comment-15336789
 ] 

Anu Engineer commented on HDFS-1312:


* Checkstyle issues are mostly "variable hides a field".
* The whitespace warning is due to doc changes.
* Test failures are not related to this patch.

> Re-balance disks within a Datanode
> --
>
> Key: HDFS-1312
> URL: https://issues.apache.org/jira/browse/HDFS-1312
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Travis Crawford
>Assignee: Anu Engineer
> Attachments: Architecture_and_test_update.pdf, 
> Architecture_and_testplan.pdf, HDFS-1312.001.patch, disk-balancer-proposal.pdf
>
>
> Filing this issue in response to ``full disk woes`` on hdfs-user.
> Datanodes fill their storage directories unevenly, leading to situations 
> where certain disks are full while others are significantly less used. Users 
> at many different sites have experienced this issue, and HDFS administrators 
> are taking steps like:
> - Manually rebalancing blocks in storage directories
> - Decommissioning nodes & later re-adding them
> There's a tradeoff between making use of all available spindles and filling 
> disks at roughly the same rate. Possible solutions include:
> - Weighting less-used disks heavier when placing new blocks on the datanode. 
> In write-heavy environments this will still make use of all spindles, 
> equalizing disk use over time.
> - Rebalancing blocks locally. This would help equalize disk use as disks are 
> added/replaced in older cluster nodes.
> Datanodes should actively manage their local disk so operator intervention is 
> not needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10328) Add per-cache-pool default replication num configuration

2016-06-17 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-10328:

Attachment: HDFS-10328.004.patch

Reposting patch 004 (and rebasing on trunk) to get a Jenkins run

> Add per-cache-pool default replication num configuration
> 
>
> Key: HDFS-10328
> URL: https://issues.apache.org/jira/browse/HDFS-10328
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching
>Reporter: xupeng
>Assignee: xupeng
>Priority: Minor
> Fix For: 2.3.0
>
> Attachments: HDFS-10328.001.patch, HDFS-10328.002.patch, 
> HDFS-10328.003.patch, HDFS-10328.004.patch, HDFS-10328.004.patch
>
>
> For now, hdfs cacheadmin cannot set a default replication num for cache 
> directives in the same cache pool; each cache directive added to a pool must 
> set its own replication num individually.
> Consider this situation: we add daily Hive tables into the cache pool "hive". 
> Each time, I have to set the same replication num for every table directive 
> in the pool.
> I think we should enable setting a default replication num for a cache pool, 
> so that every cache directive in the pool inherits the replication 
> configuration from the pool. A cache directive could still override the 
> replication configuration explicitly via the "add & modify directive 
> -replication" command from the CLI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10328) Add per-cache-pool default replication num configuration

2016-06-17 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-10328:

Attachment: (was: HDFS-10328.004.patch)

> Add per-cache-pool default replication num configuration
> 
>
> Key: HDFS-10328
> URL: https://issues.apache.org/jira/browse/HDFS-10328
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching
>Reporter: xupeng
>Assignee: xupeng
>Priority: Minor
> Fix For: 2.3.0
>
> Attachments: HDFS-10328.001.patch, HDFS-10328.002.patch, 
> HDFS-10328.003.patch, HDFS-10328.004.patch
>
>
> For now, hdfs cacheadmin cannot set a default replication num for cache 
> directives in the same cache pool; each cache directive added to a pool must 
> set its own replication num individually.
> Consider this situation: we add daily Hive tables into the cache pool "hive". 
> Each time, I have to set the same replication num for every table directive 
> in the pool.
> I think we should enable setting a default replication num for a cache pool, 
> so that every cache directive in the pool inherits the replication 
> configuration from the pool. A cache directive could still override the 
> replication configuration explicitly via the "add & modify directive 
> -replication" command from the CLI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10524) libhdfs++: Implement chmod and chown

2016-06-17 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336809#comment-15336809
 ] 

Bob Hansen commented on HDFS-10524:
---

Thanks, [~anatoli.shein].  We're getting pretty close.

The implementations in hdfs_shim.c of hdfsConnect, 
hdfsConnectAsUserNewInstance, and hdfsConnectNewInstance are returning invalid 
pointers.  For the shim methods, we should be returning a shim hdfsFs_internal 
pointer, and your implementations of those are returning a libhdfs++ hdfsFS 
pointer.

hdfsFreeBuilder should free the passed-in bld pointer when it is finished.

hdfsDisconnect should try to disconnect the libhdfspp instance even if the 
libhdfs instance failed to disconnect.  It should only return 0 if both 
returned 0.




> libhdfs++: Implement chmod and chown
> 
>
> Key: HDFS-10524
> URL: https://issues.apache.org/jira/browse/HDFS-10524
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-10524.HDFS-8707.000.patch, 
> HDFS-10524.HDFS-8707.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10515) libhdfs++: Implement mkdirs, rmdir, rename, and remove

2016-06-17 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336827#comment-15336827
 ] 

Bob Hansen commented on HDFS-10515:
---

In:
{code}
dirList = hdfsListDirectory(fs, listDirTest, &numEntries);
EXPECT_NONNULL(dirList);
if (numEntries != 1) {
  fprintf(stderr, "hdfsListDirectory set numEntries to "
          "%d on directory containing 1 files.", numEntries);
  return EIO;
}
hdfsFreeFileInfo(dirList, numEntries);
{code}
we should free the dirList before returning EIO.  We can put the free up above 
the if (numEntries...) check since we're not looking at the contents, just the 
returned file count.

Can we test privilege failures in TestMkdirs, TestDelete, and TestRename by 
setting privs to 000?



> libhdfs++: Implement mkdirs, rmdir, rename, and remove
> --
>
> Key: HDFS-10515
> URL: https://issues.apache.org/jira/browse/HDFS-10515
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-10515.HDFS-8707.000.patch, 
> HDFS-10515.HDFS-8707.001.patch, HDFS-10515.HDFS-8707.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9924) [umbrella] Nonblocking HDFS Access

2016-06-17 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336830#comment-15336830
 ] 

Xiaobing Zhou commented on HDFS-9924:
-

Hi [~stack]
bq. It looks like the difference between 'async' and thread pool is negligible; 
10% at the extreme. Is that how you interpret the results?
Yes.

bq. As per Andrew Wang, would be interested in what happens when less threads 
(especially as NN is set up with 300 handlers...); tendency seems to be the 
less threads you use, the better it does.
I will probably run the benchmark with fewer threads, but the tendency may not 
hold. Thank you.


> [umbrella] Nonblocking HDFS Access
> --
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: Async-HDFS-Performance-Report.pdf, AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Nonblocking HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support nonblocking calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10511) libhdfs++: make error returning mechanism consistent across all hdfs operations

2016-06-17 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336837#comment-15336837
 ] 

Bob Hansen commented on HDFS-10511:
---

hadoop-hdfs-native-client/src/main/native/libhdfs/include/hdfs/hdfs.h is part 
of libhdfs, and we're fairly committed to not changing it.

The existing text of "nonzero error code otherwise" is sufficient to cover the 
behavior of returning -1 on error.

The functions in hdfs_ext.h are fair game, and your updated comments are 
well-received.



> libhdfs++: make error returning mechanism consistent across all hdfs 
> operations
> ---
>
> Key: HDFS-10511
> URL: https://issues.apache.org/jira/browse/HDFS-10511
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-10511.HDFS-8707.000.patch, 
> HDFS-10511.HDFS-8707.000.patch, HDFS-10511.HDFS-8707.001.patch, 
> HDFS-10511.HDFS-8707.002.patch
>
>
> Errno should always be set.
> If function is returning a code on stack, it should be consistent with errno.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2016-06-17 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336938#comment-15336938
 ] 

Kihwal Lee commented on HDFS-6440:
--

HDFS-10536 got filed. I just looked at the edit log tailer change alone and saw 
multiple potential issues. I will file jiras for what I can spot, but it looks 
like this needs a lot more testing and hardening.  If bringing this to branch-2 
will facilitate its maturing process, I am for it.  But I expect it will entail 
a lot of work.  If there are enough people who are interested in this feature, 
maybe we can move forward.

> Support more than 2 NameNodes
> -
>
> Key: HDFS-6440
> URL: https://issues.apache.org/jira/browse/HDFS-6440
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: auto-failover, ha, namenode
>Affects Versions: 2.4.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 3.0.0-alpha1
>
> Attachments: Multiple-Standby-NameNodes_V1.pdf, 
> hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
> hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-6440-trunk-v4.patch, 
> hdfs-6440-trunk-v5.patch, hdfs-6440-trunk-v6.patch, hdfs-6440-trunk-v7.patch, 
> hdfs-6440-trunk-v8.patch, hdfs-multiple-snn-trunk-v0.patch
>
>
> Most of the work is already done to support more than 2 NameNodes (one 
> active, one standby). This would be the last bit to support running multiple 
> _standby_ NameNodes; one of the standbys should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some 
> complexity around managing the checkpointing, and updating a whole lot of 
> tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-06-17 Thread Andre Araujo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336963#comment-15336963
 ] 

Andre Araujo commented on HDFS-6962:


Are there any possible workarounds for this?

> ACLs inheritance conflict with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.1.patch
>
>
> In hdfs-site.xml:
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ Set default ACLs on this directory: rwx access for group readwrite and 
> user toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything is OK!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx means the inheritance mask is rwx, so effectively no mask
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has an rwx ACL but only r-x is 
> effective, because the mask is r-x (mask::r-x) even though the default mask 
> for inheritance is set to default:mask::rwx on /tmp/ACLS/
> 6/ Modify hdfs-site.xml and restart the namenode
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group and other -- except the POSIX 
> owner) with the group mask of the dfs.umaskmode property when creating 
> directories with inherited ACLs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9924) [umbrella] Nonblocking HDFS Access

2016-06-17 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336976#comment-15336976
 ] 

Arpit Agarwal commented on HDFS-9924:
-

Hi [~andrew.wang], Jitendra's proposal to revert the API while we continue the 
discussion sounds like a reasonable compromise. I don't think a feature branch 
buys us anything here. Our bylaws state - _Significant, pervasive features are 
often developed in a speculative branch of the repository_ - and these changes 
meet neither bar. Please consider holding off on your proposed revert. I will 
take a look at HADOOP-10538 which reverts the API and commit it within a couple 
of days.

> [umbrella] Nonblocking HDFS Access
> --
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: Async-HDFS-Performance-Report.pdf, AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Nonblocking HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support nonblocking calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.
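As an editorial illustration of the call pattern described above, here is a
minimal caller-side sketch. The {{asyncFs}} object and its {{rename}} signature
are assumptions for illustration only (the concrete async API is exactly what
this umbrella JIRA tracks), and the snippet assumes java.util.concurrent.Future,
java.util collections, and org.apache.hadoop.fs.Path.

{code}
// Caller-side sketch only; "asyncFs" and rename(...) are assumed names.
List<Future<Void>> pending = new ArrayList<>();
for (int i = 0; i < 1000; i++) {
  // each call returns immediately with a Future instead of blocking on RPC
  pending.add(asyncFs.rename(new Path("/src/f" + i), new Path("/dst/f" + i)));
}
for (Future<Void> f : pending) {
  f.get();  // block only here, when the result is actually needed
}
{code}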



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10328) Add per-cache-pool default replication num configuration

2016-06-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336984#comment-15336984
 ] 

Hadoop QA commented on HDFS-10328:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 41s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 42s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 2s 
{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 32s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 49s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.server.namenode.TestCacheDirectives |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811444/HDFS-10328.004.patch |
| JIRA Issue | HDFS-10328 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux b67b47cf54c2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2800695 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| u

[jira] [Comment Edited] (HDFS-9924) [umbrella] Nonblocking HDFS Access

2016-06-17 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336976#comment-15336976
 ] 

Arpit Agarwal edited comment on HDFS-9924 at 6/17/16 9:26 PM:
--

Hi [~andrew.wang], Jitendra's proposal to revert the API while we continue the 
discussion sounds like a reasonable compromise. I don't think a feature branch 
buys us anything here. Our bylaws state - _Significant, pervasive features are 
often developed in a speculative branch of the repository_ - and these changes 
meet neither bar. Please consider holding off on your proposed revert. I will 
take a look at HDFS-10538 which reverts the API and commit it within a couple 
of days.


was (Author: arpitagarwal):
Hi [~andrew.wang], Jitendra's proposal to revert the API while we continue the 
discussion sounds like a reasonable compromise. I don't think a feature branch 
buys us anything here. Our bylaws state - _Significant, pervasive features are 
often developed in a speculative branch of the repository_ - and these changes 
meet neither bar. Please consider holding off on your proposed revert. I will 
take a look at HADOOP-10538 which reverts the API and commit it within a couple 
of days.

> [umbrella] Nonblocking HDFS Access
> --
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: Async-HDFS-Performance-Report.pdf, AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Nonblocking HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support nonblocking calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-1312) Re-balance disks within a Datanode

2016-06-17 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15337016#comment-15337016
 ] 

Allen Wittenauer commented on HDFS-1312:


bq. White Space warning is due to doc changes.

You should still remove them.

> Re-balance disks within a Datanode
> --
>
> Key: HDFS-1312
> URL: https://issues.apache.org/jira/browse/HDFS-1312
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Travis Crawford
>Assignee: Anu Engineer
> Attachments: Architecture_and_test_update.pdf, 
> Architecture_and_testplan.pdf, HDFS-1312.001.patch, disk-balancer-proposal.pdf
>
>
> Filing this issue in response to ``full disk woes`` on hdfs-user.
> Datanodes fill their storage directories unevenly, leading to situations 
> where certain disks are full while others are significantly less used. Users 
> at many different sites have experienced this issue, and HDFS administrators 
> are taking steps like:
> - Manually rebalancing blocks in storage directories
> - Decommissioning nodes & later re-adding them
> There's a tradeoff between making use of all available spindles and filling 
> disks at roughly the same rate. Possible solutions include:
> - Weighting less-used disks heavier when placing new blocks on the datanode. 
> In write-heavy environments this will still make use of all spindles, 
> equalizing disk use over time.
> - Rebalancing blocks locally. This would help equalize disk use as disks are 
> added/replaced in older cluster nodes.
> Datanodes should actively manage their local disk so operator intervention is 
> not needed.
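As an editorial sketch of the first proposal above (weighting less-used disks
heavier), a free-space-weighted volume pick could look like this; the method
and its inputs are illustrative assumptions, not the DataNode's actual
volume-choosing policy API.

{code}
// Pick a volume index with probability proportional to its free space,
// so less-used disks receive new blocks more often. Illustrative only.
static int chooseVolume(long[] freeBytes, java.util.Random rng) {
  long total = 0;
  for (long f : freeBytes) {
    total += f;
  }
  long pick = (long) (rng.nextDouble() * total);
  for (int i = 0; i < freeBytes.length; i++) {
    pick -= freeBytes[i];
    if (pick < 0) {
      return i;
    }
  }
  return freeBytes.length - 1;  // guard against rounding at the boundary
}
{code}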



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-1312) Re-balance disks within a Datanode

2016-06-17 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15337024#comment-15337024
 ] 

Anu Engineer commented on HDFS-1312:


[~aw] Thanks will do.

> Re-balance disks within a Datanode
> --
>
> Key: HDFS-1312
> URL: https://issues.apache.org/jira/browse/HDFS-1312
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Travis Crawford
>Assignee: Anu Engineer
> Attachments: Architecture_and_test_update.pdf, 
> Architecture_and_testplan.pdf, HDFS-1312.001.patch, disk-balancer-proposal.pdf
>
>
> Filing this issue in response to ``full disk woes`` on hdfs-user.
> Datanodes fill their storage directories unevenly, leading to situations 
> where certain disks are full while others are significantly less used. Users 
> at many different sites have experienced this issue, and HDFS administrators 
> are taking steps like:
> - Manually rebalancing blocks in storage directories
> - Decommissioning nodes & later re-adding them
> There's a tradeoff between making use of all available spindles and filling 
> disks at roughly the same rate. Possible solutions include:
> - Weighting less-used disks heavier when placing new blocks on the datanode. 
> In write-heavy environments this will still make use of all spindles, 
> equalizing disk use over time.
> - Rebalancing blocks locally. This would help equalize disk use as disks are 
> added/replaced in older cluster nodes.
> Datanodes should actively manage their local disk so operator intervention is 
> not needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10538) Remove AsyncDistributedFileSystem API

2016-06-17 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15337039#comment-15337039
 ] 

Jitendra Nath Pandey commented on HDFS-10538:
-

+1 for the proposed patch. 
[~andrew.wang], [~arpiagariu], I can commit it if you have no objection.

> Remove AsyncDistributedFileSystem API
> -
>
> Key: HDFS-10538
> URL: https://issues.apache.org/jira/browse/HDFS-10538
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10538-HDFS-9924.000.patch, 
> HDFS-10538-HDFS-9924.001.patch, HDFS-10538-HDFS-9924.002.patch
>
>
> Per discussion in HDFS-9924, this JIRA is to remove 
> AsyncDistributedFileSystem and make it invisible by moving it to tests; 
> however, the underlying implementation can be reused by any proposed async 
> API. It makes sense to keep it, making it easy to try out various async API 
> implementations down the road.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9924) [umbrella] Nonblocking HDFS Access

2016-06-17 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15337043#comment-15337043
 ] 

Jitendra Nath Pandey commented on HDFS-9924:


I am ok to use the feature branch for the API development. The current code is 
already in branch HDFS-9924, as [~andrew.wang] ported it earlier. The revert in 
HDFS-10538 will effectively achieve the goal of moving the API development to 
the branch. I am +1 for the patch in HDFS-10538, and can commit it if no-one 
objects.

> [umbrella] Nonblocking HDFS Access
> --
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: Async-HDFS-Performance-Report.pdf, AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Nonblocking HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support nonblocking calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10544) Balancer doesn't work with IPFailoverProxyProvider

2016-06-17 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10544:
-
Attachment: HDFS-10544.01.patch

The v0 patch didn't work because {{HAUtil#useLogicalUri}} expects a URI rather 
than a String (btw, sorry I missed the build failure before uploading the 
patch). I don't think this is a good API design, because the nameservice ID 
can be an arbitrary string. 

But to avoid changing the public API {{HAUtil#useLogicalUri}}, in the v1 patch 
I added another API that gets the boolean condition from a string. It also 
uses a static method to check, rather than creating an instance of the proxy 
provider.
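For illustration, a minimal sketch of that v1 idea; the static helper name
below is hypothetical and this is not the actual patch:

{code}
// Hedged sketch: take the raw nameservice ID as a String and decide from the
// configured proxy provider class, without instantiating the provider.
// providerUsesLogicalUri is a hypothetical static check.
public static boolean useLogicalUri(Configuration conf, String nsId) {
  Class<?> provider = conf.getClass(
      "dfs.client.failover.proxy.provider." + nsId, null);
  if (provider == null) {
    return true;  // no provider configured; keep the logical-URI default
  }
  return providerUsesLogicalUri(provider);
}
{code}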

> Balancer doesn't work with IPFailoverProxyProvider
> --
>
> Key: HDFS-10544
> URL: https://issues.apache.org/jira/browse/HDFS-10544
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-10544.00.patch, HDFS-10544.01.patch
>
>
> Right now {{Balancer}} gets the NN URIs through 
> {{DFSUtil#getNameServiceUris}}, which returns logical URIs if HA is enabled. 
> If {{IPFailoverProxyProvider}} is used, {{Balancer}} will not be able to 
> start.
> I think the bug is at {{DFSUtil#getNameServiceUris}}:
> {code}
> for (String nsId : getNameServiceIds(conf)) {
>   if (HAUtil.isHAEnabled(conf, nsId)) {
> // Add the logical URI of the nameservice.
> try {
>   ret.add(new URI(HdfsConstants.HDFS_URI_SCHEME + "://" + nsId));
> {code}
> The {{if}} clause should also consider whether the {{FailoverProxyProvider}} 
> has {{useLogicalURI}} enabled. If not, {{getNameServiceUris}} should try to 
> resolve the physical URI for this nsId.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10538) Remove AsyncDistributedFileSystem API

2016-06-17 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15337062#comment-15337062
 ] 

Xiaobing Zhou commented on HDFS-10538:
--

The test failure is not related to this patch, but will be addressed in 
HDFS-10497. Thank you all for review.

> Remove AsyncDistributedFileSystem API
> -
>
> Key: HDFS-10538
> URL: https://issues.apache.org/jira/browse/HDFS-10538
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10538-HDFS-9924.000.patch, 
> HDFS-10538-HDFS-9924.001.patch, HDFS-10538-HDFS-9924.002.patch
>
>
> Per discussion in HDFS-9924, this JIRA is to remove 
> AsyncDistributedFileSystem and make it invisible by moving it to tests; 
> however, the underlying implementation can be reused by any proposed async 
> API. It makes sense to keep it, making it easy to try out various async API 
> implementations down the road.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10544) Balancer doesn't work with IPFailoverProxyProvider

2016-06-17 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10544:
-
Attachment: HDFS-10544.02.patch

Uploading v2 patch. So we have 2 alternatives:
* v1 patch: add a {{useLogicalUri}} overload that takes a String parameter
* v2 patch: throw {{IllegalArgumentException}} before trying to determine 
whether to use a logical URI.

I'm hesitating between the 2 options because the documentation is not 
completely clear about {{dfs.nameservices}}. Could it be an arbitrary string? 
If that arbitrary string fails to form a legal URI (when prefixed with 
{{hdfs://}}), shall we treat it just as a runtime exception, or should we look 
for that arbitrary string prefixed with {{dfs.namenode.servicerpc-address}} in 
the config?

Any suggestions are very welcome.

> Balancer doesn't work with IPFailoverProxyProvider
> --
>
> Key: HDFS-10544
> URL: https://issues.apache.org/jira/browse/HDFS-10544
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-10544.00.patch, HDFS-10544.01.patch, 
> HDFS-10544.02.patch
>
>
> Right now {{Balancer}} gets the NN URIs through 
> {{DFSUtil#getNameServiceUris}}, which returns logical URIs if HA is enabled. 
> If {{IPFailoverProxyProvider}} is used, {{Balancer}} will not be able to 
> start.
> I think the bug is at {{DFSUtil#getNameServiceUris}}:
> {code}
> for (String nsId : getNameServiceIds(conf)) {
>   if (HAUtil.isHAEnabled(conf, nsId)) {
> // Add the logical URI of the nameservice.
> try {
>   ret.add(new URI(HdfsConstants.HDFS_URI_SCHEME + "://" + nsId));
> {code}
> The {{if}} clause should also consider whether the {{FailoverProxyProvider}} 
> has {{useLogicalURI}} enabled. If not, {{getNameServiceUris}} should try to 
> resolve the physical URI for this nsId.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10220) A large number of expired leases can make namenode unresponsive and cause failover

2016-06-17 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15337125#comment-15337125
 ] 

Zhe Zhang commented on HDFS-10220:
--

[~ashangit], [~raviprak] I think this is a valid fix for 2.8~2.6. LMK if you 
have time to backport. I'm happy to do it too. Thanks.

> A large number of expired leases can make namenode unresponsive and cause 
> failover
> --
>
> Key: HDFS-10220
> URL: https://issues.apache.org/jira/browse/HDFS-10220
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Nicolas Fraison
>Assignee: Nicolas Fraison
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-10220.001.patch, HADOOP-10220.002.patch, 
> HADOOP-10220.003.patch, HADOOP-10220.004.patch, HADOOP-10220.005.patch, 
> HADOOP-10220.006.patch, HADOOP-10220.007.patch, threaddump_zkfc.txt
>
>
> I have faced a namenode failover due to an unresponsive namenode detected by 
> the zkfc, with lots of WARN messages (5 million) like this one:
> _org.apache.hadoop.hdfs.StateChange: BLOCK* internalReleaseLease: All 
> existing blocks are COMPLETE, lease removed, file closed._
> In the threaddump taken by the zkfc there are lots of threads blocked due to 
> a lock.
> Looking at the code, a lock is taken by the LeaseManager.Monitor when some 
> lease must be released. Due to the very large number of leases to be 
> released, the namenode took too long to release them, blocking all other 
> tasks and making the zkfc think that the namenode was not available/stuck.
> The idea of this patch is to limit the number of leases released each time we 
> check, so the lock won't be held for too long a time period.
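A minimal sketch of that batching idea, with assumed constants and helper
names rather than the actual patch:

{code}
// Release at most a bounded number of expired leases per Monitor pass, so
// the namesystem lock is never held for long. All names are assumptions.
private static final int MAX_LEASES_PER_CHECK = 1000;

void checkLeases() {
  int released = 0;
  while (released < MAX_LEASES_PER_CHECK && hasExpiredLeases()) {
    releaseOldestExpiredLease();  // hypothetical helper
    released++;
  }
  // Leftover expired leases are handled on the next pass, letting other
  // namenode operations acquire the lock in between.
}
{code}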



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10534) NameNode WebUI should display DataNode usage rate with a certain percentile

2016-06-17 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-10534:
--
Attachment: HDFS-10534.02.patch

> NameNode WebUI should display DataNode usage rate with a certain percentile
> ---
>
> Key: HDFS-10534
> URL: https://issues.apache.org/jira/browse/HDFS-10534
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Zhe Zhang
>Assignee: Kai Sasaki
> Attachments: HDFS-10534.01.patch, HDFS-10534.02.patch
>
>
> In addition to *Min/Median/Max*, another meaningful metric for cluster 
> balance is the DN usage rate at a certain percentile (e.g. 90 or 95). We 
> should add a config option, and another field on the NN WebUI, to display 
> this.
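A minimal sketch of the percentile computation such a field would surface
(nearest-rank method; the method name and inputs are illustrative):

{code}
// Nearest-rank percentile over per-DataNode usage rates; illustrative only.
static double usageAtPercentile(double[] usages, int percentile) {
  double[] sorted = usages.clone();
  java.util.Arrays.sort(sorted);
  int idx = (int) Math.ceil(percentile / 100.0 * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}
{code}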



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10546) hadoop-hdfs-native-client fails distro build when trying to copy libhdfs binaries.

2016-06-17 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-10546:


 Summary: hadoop-hdfs-native-client fails distro build when trying 
to copy libhdfs binaries.
 Key: HDFS-10546
 URL: https://issues.apache.org/jira/browse/HDFS-10546
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Blocker


During the distro build, hadoop-hdfs-native-client copies the built libhdfs 
binary artifacts for inclusion in the distro.  It references an incorrect path 
though.  The copy fails and the build aborts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10546) hadoop-hdfs-native-client fails distro build when trying to copy libhdfs binaries.

2016-06-17 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-10546.
--
Resolution: Duplicate

I just realized this duplicates HDFS-10353, which fixed the problem in trunk.  
We just need to cherry-pick that patch down to branch-2 and branch-2.8.  I'll 
cover it over there.

> hadoop-hdfs-native-client fails distro build when trying to copy libhdfs 
> binaries.
> --
>
> Key: HDFS-10546
> URL: https://issues.apache.org/jira/browse/HDFS-10546
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Blocker
>
> During the distro build, hadoop-hdfs-native-client copies the built libhdfs 
> binary artifacts for inclusion in the distro.  It references an incorrect 
> path though.  The copy fails and the build aborts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10547) DiskBalancer: fix whitespace issue in doc files

2016-06-17 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-10547:
---

 Summary: DiskBalancer: fix whitespace issue in doc files
 Key: HDFS-10547
 URL: https://issues.apache.org/jira/browse/HDFS-10547
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: balancer & mover
Affects Versions: HDFS-1312
Reporter: Anu Engineer
Assignee: Anu Engineer
Priority: Minor
 Fix For: HDFS-1312


Fix the "tabs are used" warning in the build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10544) Balancer doesn't work with IPFailoverProxyProvider

2016-06-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15337193#comment-15337193
 ] 

Hadoop QA commented on HDFS-10544:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 
new + 78 unchanged - 0 fixed = 80 total (was 78) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 54s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 32s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 6s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Dead store to provider in 
org.apache.hadoop.hdfs.HAUtil.useLogicalUri(Configuration, String)  At 
HAUtil.java:org.apache.hadoop.hdfs.HAUtil.useLogicalUri(Configuration, String)  
At HAUtil.java:[line 282] |
| Failed junit tests | hadoop.hdfs.TestAsyncHDFSWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811472/HDFS-10544.01.patch |
| JIRA Issue | HDFS-10544 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2152bb126f09 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2800695 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15816/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15816/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15816/artifact/patchprocess/patch

[jira] [Updated] (HDFS-10353) Fix hadoop-hdfs-native-client compilation on Windows

2016-06-17 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-10353:
-
Affects Version/s: (was: 3.0.0-alpha1)
   2.8.0
Fix Version/s: (was: 3.0.0-alpha1)
   2.8.0

[~ajisakaa], you are correct.  I cherry-picked it to branch-2 and branch-2.8.  
I confirmed that both branches build the distro successfully on Windows.

> Fix hadoop-hdfs-native-client compilation on Windows
> 
>
> Key: HDFS-10353
> URL: https://issues.apache.org/jira/browse/HDFS-10353
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.8.0
> Environment: windows
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: HDFS-10353.patch
>
>
> After HADOOP-12892, hadoop-hdfs-native-client compilation fails by throwing 
> the following:
> {noformat}
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> [ERROR] -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project 
> hadoop-hdfs-native-client: An Ant BuildException has occured: 
> F:\GitCode\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\target\bin\RelWithDebInfo
>  does not exist.
> around Ant part ... todir="F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target/bin">...
>  @ 14:98 in 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
> at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
> at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
> at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:152)
> at org.apache.maven.cli.MavenCli.execute(MavenCli.java:555)
> at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
> at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
> Caused by: org.apache.maven.plugin.MojoExecutionException: An Ant 
> BuildException has occured: 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\native\target\bin\RelWithDebInfo
>  does not exist.
> around Ant part ... todir="F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target/bin">...
>  @ 14:98 in 
> F:\Trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> at 
> org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:355)
> at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:106)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> ... 19 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10540) The CLI error message for disk balancer is not enabled is not clear.

2016-06-17 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10540:

Status: Patch Available  (was: Open)

> The CLI error message for disk balancer is not enabled is not clear.
> 
>
> Key: HDFS-10540
> URL: https://issues.apache.org/jira/browse/HDFS-10540
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
> Attachments: HDFS-10540-HDFS-1312.001.patch
>
>
> When running the {{hdfs diskbalancer}} against a DN whose disk balancer 
> feature is not enabled, it reports:
> {code}
> $ hdfs diskbalancer -plan 127.0.0.1 -uri hdfs://localhost
> 16/06/16 18:03:29 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Internal error, Unable to create JSON string.
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.getVolumeNames(DiskBalancer.java:260)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getDiskBalancerSetting(DataNode.java:3105)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getDiskBalancerSetting(ClientDatanodeProtocolServerSideTranslatorPB.java:359)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17515)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> Caused by: org.apache.hadoop.hdfs.server.diskbalancer.DiskBalancerException: 
> Disk Balancer is not enabled.
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.checkDiskBalancerEnabled(DiskBalancer.java:293)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.getVolumeNames(DiskBalancer.java:251)
>   ... 11 more
> {code}
> We should not throw the IOE directly to the user, and the message should 
> explicitly explain why the operation failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10540) The CLI error message for disk balancer is not enabled is not clear.

2016-06-17 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10540:

Attachment: HDFS-10540-HDFS-1312.001.patch

[~eddyxu] Thanks for the bug report. The attached patch fixes the issue. 
The new output looks like this.
{noformat}
[hdfs@y124 ~]$ hdfs diskbalancer -plan `hostname` -uri hdfs://namenode.com:9820 
-v
16/06/17 22:36:09 ERROR tools.DiskBalancer: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.diskbalancer.DiskBalancerException):
 Disk Balancer is not enabled.
at 
org.apache.hadoop.hdfs.server.datanode.DiskBalancer.checkDiskBalancerEnabled(DiskBalancer.java:295)
at 
org.apache.hadoop.hdfs.server.datanode.DiskBalancer.getVolumeNames(DiskBalancer.java:251)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getDiskBalancerSetting(DataNode.java:3382)
at 
org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getDiskBalancerSetting(ClientDatanodeProtocolServerSideTranslatorPB.java:316)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:13209)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:664)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2423)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2419)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
{noformat}
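For readers following along, the improvement boils down to unwrapping the
{{RemoteException}} so the underlying {{DiskBalancerException}} message ("Disk
Balancer is not enabled.") is what the CLI reports. A minimal sketch under
assumed names (the entry point below is hypothetical, though
{{unwrapRemoteException}} is a real {{RemoteException}} method):

{code}
// Surface the real cause instead of a generic internal error.
// runDiskBalancerCommand is a hypothetical entry point for illustration.
try {
  runDiskBalancerCommand(args);
} catch (RemoteException re) {
  IOException cause = re.unwrapRemoteException(DiskBalancerException.class);
  LOG.error(cause.getMessage(), cause);
}
{code}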

> The CLI error message for disk balancer is not enabled is not clear.
> 
>
> Key: HDFS-10540
> URL: https://issues.apache.org/jira/browse/HDFS-10540
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
> Attachments: HDFS-10540-HDFS-1312.001.patch
>
>
> When running the {{hdfs diskbalancer}} against a DN whose disk balancer 
> feature is not enabled, it reports:
> {code}
> $ hdfs diskbalancer -plan 127.0.0.1 -uri hdfs://localhost
> 16/06/16 18:03:29 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Internal error, Unable to create JSON string.
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.getVolumeNames(DiskBalancer.java:260)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getDiskBalancerSetting(DataNode.java:3105)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getDiskBalancerSetting(ClientDatanodeProtocolServerSideTranslatorPB.java:359)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17515)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> Caused by: org.apache.hadoop.hdfs.server.diskbalancer.DiskBalancerException: 
> Disk Balancer is not enabled.
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.checkDiskBalancerEnabled(DiskBalancer.java:293)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.getVolumeNames(DiskBalancer.java:251)
>   ... 11 more
> {code}
> We should not throw the IOE directly to the user, and the message should 
> explicitly explain why the operation failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10540) Diskbalancer: The CLI error message for disk balancer is not enabled is not clear.

2016-06-17 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10540:

Summary: Diskbalancer: The CLI error message for disk balancer is not 
enabled is not clear.  (was: The CLI error message for disk balancer is not 
enabled is not clear.)

> Diskbalancer: The CLI error message for disk balancer is not enabled is not 
> clear.
> --
>
> Key: HDFS-10540
> URL: https://issues.apache.org/jira/browse/HDFS-10540
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
> Attachments: HDFS-10540-HDFS-1312.001.patch
>
>
> When running the {{hdfs diskbalancer}} against a DN whose disk balancer 
> feature is not enabled, it reports:
> {code}
> $ hdfs diskbalancer -plan 127.0.0.1 -uri hdfs://localhost
> 16/06/16 18:03:29 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Internal error, Unable to create JSON string.
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.getVolumeNames(DiskBalancer.java:260)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getDiskBalancerSetting(DataNode.java:3105)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getDiskBalancerSetting(ClientDatanodeProtocolServerSideTranslatorPB.java:359)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17515)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> Caused by: org.apache.hadoop.hdfs.server.diskbalancer.DiskBalancerException: 
> Disk Balancer is not enabled.
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.checkDiskBalancerEnabled(DiskBalancer.java:293)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.getVolumeNames(DiskBalancer.java:251)
>   ... 11 more
> {code}
> We should not throw the IOE directly to the user, and the message should 
> explicitly explain why the operation failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10540) Diskbalancer: The CLI error message for disk balancer is not enabled is not clear.

2016-06-17 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15337210#comment-15337210
 ] 

Anu Engineer edited comment on HDFS-10540 at 6/17/16 11:30 PM:
---

[~eddyxu] Thanks for the bug report. The attached patch fixes the issue. 
The new output looks like this.
{noformat}
[hdfs@y124 ~]$ hdfs diskbalancer -plan `hostname` -uri hdfs://namenode.com:9820 
-v
16/06/17 22:36:09 ERROR tools.DiskBalancer: 
org.apache.hadoop.ipc.RemoteException(
org.apache.hadoop.hdfs.server.diskbalancer.DiskBalancerException): 
Disk Balancer is not enabled.
at 
org.apache.hadoop.hdfs.server.datanode.DiskBalancer.checkDiskBalancerEnabled(DiskBalancer.java:295)
at 
org.apache.hadoop.hdfs.server.datanode.DiskBalancer.getVolumeNames(DiskBalancer.java:251)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getDiskBalancerSetting(DataNode.java:3382)
at 
org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getDiskBalancerSetting(ClientDatanodeProtocolServerSideTranslatorPB.java:316)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:13209)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:664)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2423)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2419)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
{noformat}


was (Author: anu):
[~eddyxu] Thanks for the bug report. The attached patch fixes the issue. 
The new output looks like this.
{noformat}
[hdfs@y124 ~]$ hdfs diskbalancer -plan `hostname` -uri hdfs://namenode.com:9820 
-v
16/06/17 22:36:09 ERROR tools.DiskBalancer: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.diskbalancer.DiskBalancerException):
 Disk Balancer is not enabled.
at 
org.apache.hadoop.hdfs.server.datanode.DiskBalancer.checkDiskBalancerEnabled(DiskBalancer.java:295)
at 
org.apache.hadoop.hdfs.server.datanode.DiskBalancer.getVolumeNames(DiskBalancer.java:251)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getDiskBalancerSetting(DataNode.java:3382)
at 
org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getDiskBalancerSetting(ClientDatanodeProtocolServerSideTranslatorPB.java:316)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:13209)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:664)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2423)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2419)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
{noformat}

> Diskbalancer: The CLI error message for disk balancer is not enabled is not 
> clear.
> --
>
> Key: HDFS-10540
> URL: https://issues.apache.org/jira/browse/HDFS-10540
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
> Attachments: HDFS-10540-HDFS-1312.001.patch
>
>
> When running the {{hdfs diskbalancer}} against a DN whose disk balancer 
> feature is not enabled, it reports:
> {code}
> $ hdfs diskbalancer -plan 127.0.0.1 -uri hdfs://localhost
> 16/06/16 18:03:29 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Internal error, Unable to create JSON string.
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.getVolumeNames(DiskBalancer.java:260)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getDiskBalancerSetting(DataNode.java:3105)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getDiskBalancerSetting(ClientDatanodeProtocolServerSideTranslatorPB.java:359)
>   at 
> org.apache.hadoop.

[jira] [Commented] (HDFS-10544) Balancer doesn't work with IPFailoverProxyProvider

2016-06-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15337227#comment-15337227
 ] 

Hadoop QA commented on HDFS-10544:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 55s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 17s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSUtil |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12811475/HDFS-10544.02.patch |
| JIRA Issue | HDFS-10544 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8dffcec55b50 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2800695 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15817/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15817/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15817/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15817/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Balancer doesn't work with IPFailoverProxyProvider
> --
>
> Key: HDFS-10544
> 

[jira] [Commented] (HDFS-9924) [umbrella] Nonblocking HDFS Access

2016-06-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15337237#comment-15337237
 ] 

Andrew Wang commented on HDFS-9924:
---

We can set up a Jenkins job against the branch if the concern is running the 
unit tests. We have precommit against branches too.

The proposal of moving the code into a test package and then back when it's 
ready is, to put it one way, quite a novel way of doing feature development. 
It's still unused code and a maintenance burden until there's a user API, and 
we don't have a user API yet.

We've gone over feature branch overhead multiple times before. With the number 
of committers involved already, I have high confidence we can produce 3 +1's 
for a branch merge. Given that the code is mostly separate, integrating trunk 
changes won't be a big overhead. I don't follow Xiaobing's concern about 
tripartite merging, since comparing different API proposals doesn't necessitate 
a branch per proposal.

Do we need a phone call, say on Monday? We're going in circles with some of 
these arguments, and I'd like to settle this as much as everyone else. I can do 
it before noon, or at 4 PM or later.

> [umbrella] Nonblocking HDFS Access
> --
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: Async-HDFS-Performance-Report.pdf, AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Nonblocking HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support nonblocking calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.
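
To make the proposed call pattern concrete, here is a self-contained sketch 
using a plain {{ExecutorService}} as a stand-in for the not-yet-defined HDFS 
async API. Only the shape of the calls matters here: each submission returns a 
{{Future}} immediately, and the results are collected later via 
{{Future.get()}}, exactly as the description above proposes.

{code}
// Illustration of the nonblocking pattern described above. The ExecutorService
// is only a stand-in for the future HDFS API, not the proposed implementation.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FuturePatternSketch {
  public static void main(String[] args) throws Exception {
    ExecutorService stub = Executors.newFixedThreadPool(4);
    List<Future<Boolean>> pending = new ArrayList<>();
    // Issue many independent calls from a single thread; none of them blocks.
    for (int i = 0; i < 100; i++) {
      final int callId = i;
      pending.add(stub.submit(() -> {
        Thread.sleep(10);         // stands in for a round trip to the NameNode
        return callId % 2 == 0;   // stands in for the call's result
      }));
    }
    // Collect the results later with the usual Future.get().
    for (Future<Boolean> f : pending) {
      f.get();
    }
    stub.shutdown();
  }
}
{code}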



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10541) When no actions in plan, error message says "Plan was generated more than 24 hours ago"

2016-06-17 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10541:

Attachment: HDFS-10541-HDFS-1312.001.patch

[~eddyxu] Thanks for filing this issue. It is a confusing message. However, the 
message is _technically_ correct: the timestamp in the plan file is 0, which is 
not a valid timestamp. I think the issue is not in the execute phase but in the 
plan phase. We should not be generating these empty plan files with invalid 
timestamps in the first place. The attached patch skips generating a plan if we 
don't have valid steps.

Please let me know if this addresses your concern.
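
For illustration, the guard could look like the following minimal sketch. The 
names ({{maybeGeneratePlan}}, {{steps}}) and the JSON shape are assumptions for 
this example only, not the actual patch.

{code}
// Hypothetical sketch of "skip generating a plan if we don't have valid steps":
// instead of writing an empty plan with timeStamp=0 (which later fails
// verifyTimeStamp() on the datanode), return null and write nothing.
import java.util.List;

class PlanGenerationSketch {
  static String maybeGeneratePlan(List<String> steps, String nodeName) {
    if (steps == null || steps.isEmpty()) {
      System.out.println("No volume moves needed for " + nodeName
          + "; skipping plan generation.");
      return null;  // nothing to execute, so no plan file is written
    }
    long timeStamp = System.currentTimeMillis();  // valid, non-zero timestamp
    // Illustrative output only; the real plan is serialized elsewhere.
    return "{\"volumeSetPlans\":" + steps + ",\"nodeName\":\"" + nodeName
        + "\",\"timeStamp\":" + timeStamp + "}";
  }
}
{code}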

> When no actions in plan, error message says "Plan was generated more than 24 
> hours ago"
> ---
>
> Key: HDFS-10541
> URL: https://issues.apache.org/jira/browse/HDFS-10541
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Minor
> Attachments: HDFS-10541-HDFS-1312.001.patch
>
>
> The message is misleading. Instead, it should make it clear that there are no 
> steps (or actions) to take in this plan, and it should probably not error out.
> {code}
> 16/06/16 14:56:53 INFO command.Command: Executing "execute plan" command
> Plan was generated more than 24 hours ago.
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.verifyTimeStamp(DiskBalancer.java:387)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.verifyPlan(DiskBalancer.java:315)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.submitPlan(DiskBalancer.java:173)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.submitDiskBalancerPlan(DataNode.java:3059)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.submitDiskBalancerPlan(ClientDatanodeProtocolServerSideTranslatorPB.java:299)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17509)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> {code}
> This happens when a plan looks like the following one:
> {code}
> {"volumeSetPlans":[],"nodeName":"a.b.c","nodeUUID":null,"port":20001,"timeStamp":0}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10541) When no actions in plan, error message says "Plan was generated more than 24 hours ago"

2016-06-17 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10541:

Status: Patch Available  (was: Open)

> When no actions in plan, error message says "Plan was generated more than 24 
> hours ago"
> ---
>
> Key: HDFS-10541
> URL: https://issues.apache.org/jira/browse/HDFS-10541
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Minor
> Attachments: HDFS-10541-HDFS-1312.001.patch
>
>
> The message is misleading. Instead, it should make it clear that there are no 
> steps (or actions) to take in this plan, and it should probably not error out.
> {code}
> 16/06/16 14:56:53 INFO command.Command: Executing "execute plan" command
> Plan was generated more than 24 hours ago.
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.verifyTimeStamp(DiskBalancer.java:387)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.verifyPlan(DiskBalancer.java:315)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DiskBalancer.submitPlan(DiskBalancer.java:173)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.submitDiskBalancerPlan(DataNode.java:3059)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.submitDiskBalancerPlan(ClientDatanodeProtocolServerSideTranslatorPB.java:299)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17509)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> {code}
> This happens when a plan looks like the following one:
> {code}
> {"volumeSetPlans":[],"nodeName":"a.b.c","nodeUUID":null,"port":20001,"timeStamp":0}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


