[jira] [Updated] (HDFS-10568) Reuse ObjectMapper instance in CombinedHostsFileReader and CombinedHostsFileWriter

2016-06-27 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10568:
-
Attachment: HDFS-10568.002.patch

Thanks [~ajisakaa] for the review. Posted a new patch addressing the comments; 
pending Jenkins.

> Reuse ObjectMapper instance in CombinedHostsFileReader and 
> CombinedHostsFileWriter
> --
>
> Key: HDFS-10568
> URL: https://issues.apache.org/jira/browse/HDFS-10568
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10568.001.patch, HDFS-10568.002.patch
>
>
> The {{ObjectMapper}} instances are not reused in the classes 
> {{CombinedHostsFileReader}} and {{CombinedHostsFileWriter}}. We can reuse 
> them to improve performance.
> Here are related issues: HDFS-9724, HDFS-9768.
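For illustration, a minimal, self-contained sketch of the reuse pattern 
(Jackson 1.x package names, matching the snippets later in this thread; the 
POJO is a stand-in for the real type, not the actual patch):
{code}
import java.io.IOException;
import org.codehaus.jackson.map.ObjectMapper;

public final class HostsJsonExample {
  // One shared mapper per JVM: ObjectMapper is thread-safe once
  // configured, so constructing it per call is wasted work.
  private static final ObjectMapper MAPPER = new ObjectMapper();

  public static Host parse(String json) throws IOException {
    return MAPPER.readValue(json, Host.class);
  }

  // Minimal stand-in for the real DatanodeAdminProperties type.
  public static class Host {
    public String hostName;
    public int port;
  }

  private HostsJsonExample() {}
}
{code}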






[jira] [Assigned] (HDFS-10583) Add Utilities/conf links to HDFS UI

2016-06-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned HDFS-10583:
--

Assignee: Weiwei Yang

> Add Utilities/conf links to HDFS UI
> ---
>
> Key: HDFS-10583
> URL: https://issues.apache.org/jira/browse/HDFS-10583
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>
> When an admin wants to explore configuration properties, such as those of 
> the namenode and datanode, it would be helpful to provide a UI page to read 
> them. This is especially useful when nodes have different configurations.
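For reference, both the NameNode and DataNode HTTP servers already expose 
their effective configuration through the standard Hadoop {{/conf}} servlet, 
so the proposed link can point at an existing endpoint (default 2.x ports 
shown; adjust per deployment):
{code}
# Live configuration from the NameNode's embedded HTTP server.
curl "http://namenode-host:50070/conf"
# The same servlet on a DataNode's web port.
curl "http://datanode-host:50075/conf"
{code}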






[jira] [Comment Edited] (HDFS-10440) Improve DataNode web UI

2016-06-27 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352374#comment-15352374
 ] 

Weiwei Yang edited comment on HDFS-10440 at 6/28/16 5:29 AM:
-

Uploaded v8 patch to address the following updates:
* Keep {{DataNode#getNamenodeAddresses()}} unmodified for compatibility
* Make {{BPServiceActor#getNameNodeAddress()}} private
* Remove the Utilities/conf link; a separate JIRA, HDFS-10583, has been 
created to address this


was (Author: cheersyang):
Uploaded v8 patch to address the following updates:
* Keep {{DataNode#getNamenodeAddresses()}} unmodified for compatibility
* Make {{BPServiceActor#getNameNodeAddress()}} private
* Remove the Utilities/conf link; I will create a separate JIRA to address 
that for the HDFS web pages

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, HDFS-10440.005.patch, 
> HDFS-10440.006.patch, HDFS-10440.007.patch, HDFS-10440.008.patch, 
> datanode_2nns.html.002.jpg, datanode_html.001.jpg, 
> datanode_loading_err.002.jpg, datanode_utilities.001.jpg, 
> datanode_utilities.002.jpg, dn_web_ui.003.jpg, dn_web_ui_mockup.jpg, 
> nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't have much information except for the 
> node name and port. Propose to add more information, similar to the namenode 
> UI, including:
> * Static info (version, block pool and cluster ID)
> * Block pools info (BP IDs, namenode address, actor states)
> * Storage info (Volumes, capacity used, reserved, left)
> * Utilities (logs)






[jira] [Created] (HDFS-10583) Add Utilities/conf links to HDFS UI

2016-06-27 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-10583:
--

 Summary: Add Utilities/conf links to HDFS UI
 Key: HDFS-10583
 URL: https://issues.apache.org/jira/browse/HDFS-10583
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs, ui
Reporter: Weiwei Yang


When an admin wants to explore configuration properties, such as those of the 
namenode and datanode, it would be helpful to provide a UI page to read them. 
This is especially useful when nodes have different configurations.






[jira] [Updated] (HDFS-10440) Improve DataNode web UI

2016-06-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10440:
---
Attachment: HDFS-10440.008.patch

Uploaded v8 patch to address the following updates:
* Keep {{DataNode#getNamenodeAddresses()}} unmodified for compatibility
* Make {{BPServiceActor#getNameNodeAddress()}} private
* Remove the Utilities/conf link; I will create a separate JIRA to address 
that for the HDFS web pages

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, HDFS-10440.005.patch, 
> HDFS-10440.006.patch, HDFS-10440.007.patch, HDFS-10440.008.patch, 
> datanode_2nns.html.002.jpg, datanode_html.001.jpg, 
> datanode_loading_err.002.jpg, datanode_utilities.001.jpg, 
> datanode_utilities.002.jpg, dn_web_ui.003.jpg, dn_web_ui_mockup.jpg, 
> nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't have much information except for the 
> node name and port. Propose to add more information, similar to the namenode 
> UI, including:
> * Static info (version, block pool and cluster ID)
> * Block pools info (BP IDs, namenode address, actor states)
> * Storage info (Volumes, capacity used, reserved, left)
> * Utilities (logs)






[jira] [Commented] (HDFS-10440) Improve DataNode web UI

2016-06-27 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352362#comment-15352362
 ] 

Weiwei Yang commented on HDFS-10440:


Thanks a lot, let me upload a new patch to address these. Also, thanks for the 
tip about Jenkins; it's smarter than I thought :).

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, HDFS-10440.005.patch, 
> HDFS-10440.006.patch, HDFS-10440.007.patch, datanode_2nns.html.002.jpg, 
> datanode_html.001.jpg, datanode_loading_err.002.jpg, 
> datanode_utilities.001.jpg, datanode_utilities.002.jpg, dn_web_ui.003.jpg, 
> dn_web_ui_mockup.jpg, nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't have much information except for the 
> node name and port. Propose to add more information, similar to the namenode 
> UI, including:
> * Static info (version, block pool and cluster ID)
> * Block pools info (BP IDs, namenode address, actor states)
> * Storage info (Volumes, capacity used, reserved, left)
> * Utilities (logs)






[jira] [Commented] (HDFS-10440) Improve DataNode web UI

2016-06-27 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352354#comment-15352354
 ] 

Vinayakumar B commented on HDFS-10440:
--

1. I didn't mean to change the existing {{DataNode#getNamenodeAddresses()}} 
output; that would break compatibility. So let it be just the hostname. After 
this, {{BPSA#getNameNodeAddress()}} could be private.
{code}
 for (BPServiceActor actor : bpos.getBPServiceActors()) {
-  info.put(actor.getNNSocketAddress().getHostName(),
+  info.put(actor.getNameNodeAddress(),
    bpos.getBlockPoolId());
{code}
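For context, a hedged sketch of the getter under discussion as it looks when 
left unmodified (shape assumed from Hadoop trunk of that era; not part of the 
patch):
{code}
// Hedged sketch: the MXBean getter keeps returning a hostname -> block
// pool id map, so existing JMX consumers see no change.
@Override // DataNodeMXBean
public String getNamenodeAddresses() {
  final Map<String, String> info = new HashMap<String, String>();
  for (BPOfferService bpos : blockPoolManager.getAllNamenodeThreads()) {
    if (bpos != null) {
      for (BPServiceActor actor : bpos.getBPServiceActors()) {
        info.put(actor.getNNSocketAddress().getHostName(),
            bpos.getBlockPoolId());
      }
    }
  }
  return JSON.toString(info); // JSON helper already used by the DataNode
}
{code}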

2. I am not very sure that giving a link to {{/conf}} is a good idea here, 
since other pages don't have this yet. Maybe this could be taken up as a 
separate improvement, together for all the HTML pages.
{code}Configuration{code}

The other changes look good. Thanks for the updates.
+1, once these are addressed.

FYI, you don't need to change the JIRA status to "In Progress" and back to 
"Patch Available" every time; Jenkins will pick it up directly once the latest 
patch file is attached.



> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, HDFS-10440.005.patch, 
> HDFS-10440.006.patch, HDFS-10440.007.patch, datanode_2nns.html.002.jpg, 
> datanode_html.001.jpg, datanode_loading_err.002.jpg, 
> datanode_utilities.001.jpg, datanode_utilities.002.jpg, dn_web_ui.003.jpg, 
> dn_web_ui_mockup.jpg, nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't have much information except for the 
> node name and port. Propose to add more information, similar to the namenode 
> UI, including:
> * Static info (version, block pool and cluster ID)
> * Block pools info (BP IDs, namenode address, actor states)
> * Storage info (Volumes, capacity used, reserved, left)
> * Utilities (logs)






[jira] [Updated] (HDFS-10440) Improve DataNode web UI

2016-06-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10440:
---
Target Version/s:   (was: 2.8.0, 2.9.0, 3.0.0-alpha1)
  Status: Patch Available  (was: In Progress)

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, HDFS-10440.005.patch, 
> HDFS-10440.006.patch, HDFS-10440.007.patch, datanode_2nns.html.002.jpg, 
> datanode_html.001.jpg, datanode_loading_err.002.jpg, 
> datanode_utilities.001.jpg, datanode_utilities.002.jpg, dn_web_ui.003.jpg, 
> dn_web_ui_mockup.jpg, nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't have much information except for the 
> node name and port. Propose to add more information, similar to the namenode 
> UI, including:
> * Static info (version, block pool and cluster ID)
> * Block pools info (BP IDs, namenode address, actor states)
> * Storage info (Volumes, capacity used, reserved, left)
> * Utilities (logs)






[jira] [Updated] (HDFS-10440) Improve DataNode web UI

2016-06-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10440:
---
Status: In Progress  (was: Patch Available)

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, HDFS-10440.005.patch, 
> HDFS-10440.006.patch, HDFS-10440.007.patch, datanode_2nns.html.002.jpg, 
> datanode_html.001.jpg, datanode_loading_err.002.jpg, 
> datanode_utilities.001.jpg, datanode_utilities.002.jpg, dn_web_ui.003.jpg, 
> dn_web_ui_mockup.jpg, nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't have much information except for the 
> node name and port. Propose to add more information, similar to the namenode 
> UI, including:
> * Static info (version, block pool and cluster ID)
> * Block pools info (BP IDs, namenode address, actor states)
> * Storage info (Volumes, capacity used, reserved, left)
> * Utilities (logs)






[jira] [Commented] (HDFS-10440) Improve DataNode web UI

2016-06-27 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352297#comment-15352297
 ] 

Weiwei Yang commented on HDFS-10440:


Hello [~vinayrpet]

Can you please help review the v7 patch? I'd appreciate your help.

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Affects Versions: 2.7.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-10440.001.patch, HDFS-10440.002.patch, 
> HDFS-10440.003.patch, HDFS-10440.004.patch, HDFS-10440.005.patch, 
> HDFS-10440.006.patch, HDFS-10440.007.patch, datanode_2nns.html.002.jpg, 
> datanode_html.001.jpg, datanode_loading_err.002.jpg, 
> datanode_utilities.001.jpg, datanode_utilities.002.jpg, dn_web_ui.003.jpg, 
> dn_web_ui_mockup.jpg, nn_dfs_storage_types.jpg
>
>
> At present, the datanode web UI doesn't have much information except for the 
> node name and port. Propose to add more information, similar to the namenode 
> UI, including:
> * Static info (version, block pool and cluster ID)
> * Block pools info (BP IDs, namenode address, actor states)
> * Storage info (Volumes, capacity used, reserved, left)
> * Utilities (logs)






[jira] [Commented] (HDFS-10581) Redundant table on Datanodes page when no nodes under decomissioning

2016-06-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352295#comment-15352295
 ] 

Hadoop QA commented on HDFS-10581:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  1m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12814012/HDFS-10581.002.patch |
| JIRA Issue | HDFS-10581 |
| Optional Tests |  asflicense  |
| uname | Linux aa0405f07ddb 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7d20704 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15926/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Redundant table on Datanodes page when no nodes under decomissioning
> 
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: ui, web-ui
> Attachments: HDFS-10581.001.patch, HDFS-10581.002.patch, after.jpg, 
> before.jpg
>
>
> A minor user-experience improvement on the namenode UI. Propose to improve 
> it from [^before.jpg] to [^after.jpg].






[jira] [Commented] (HDFS-10581) Redundant table on Datanodes page when no nodes under decomissioning

2016-06-27 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352290#comment-15352290
 ] 

Weiwei Yang commented on HDFS-10581:


Uploaded v2 patch according to [~rushabh.shah]'s comment.

> Redundant table on Datanodes page when no nodes under decomissioning
> 
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: ui, web-ui
> Attachments: HDFS-10581.001.patch, HDFS-10581.002.patch, after.jpg, 
> before.jpg
>
>
> A minor user-experience improvement on the namenode UI. Propose to improve 
> it from [^before.jpg] to [^after.jpg].
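For context, the NameNode web UI is rendered client-side from dust.js 
templates, so a fix of this shape typically wraps the table in an 
exists-check; a hedged sketch (section and field names are assumptions, not 
the actual patch):
{code}
{! Render the decommissioning table only when there are entries;
   an empty DecomNodes array takes the :else branch. !}
{?DecomNodes}
<table class="table">
  <tr><th>Node</th><th>Under-replicated blocks</th></tr>
  {#DecomNodes}
  <tr><td>{name}</td><td>{underReplicatedBlocks}</td></tr>
  {/DecomNodes}
</table>
{:else}
No nodes are currently decommissioning.
{/DecomNodes}
{code}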






[jira] [Updated] (HDFS-10581) Redundant table on Datanodes page when no nodes under decomissioning

2016-06-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10581:
---
Status: Patch Available  (was: In Progress)

> Redundant table on Datanodes page when no nodes under decomissioning
> 
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: ui, web-ui
> Attachments: HDFS-10581.001.patch, HDFS-10581.002.patch, after.jpg, 
> before.jpg
>
>
> A minor user-experience improvement on the namenode UI. Propose to improve 
> it from [^before.jpg] to [^after.jpg].






[jira] [Updated] (HDFS-10581) Redundant table on Datanodes page when no nodes under decomissioning

2016-06-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10581:
---
Attachment: HDFS-10581.002.patch

> Redundant table on Datanodes page when no nodes under decomissioning
> 
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: ui, web-ui
> Attachments: HDFS-10581.001.patch, HDFS-10581.002.patch, after.jpg, 
> before.jpg
>
>
> A minor user-experience improvement on the namenode UI. Propose to improve 
> it from [^before.jpg] to [^after.jpg].






[jira] [Commented] (HDFS-10575) webhdfs fails with filenames including semicolons

2016-06-27 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352226#comment-15352226
 ] 

Yuanbo Liu commented on HDFS-10575:
---

This is a duplicate of HDFS-10574.

> webhdfs fails with filenames including semicolons
> -
>
> Key: HDFS-10575
> URL: https://issues.apache.org/jira/browse/HDFS-10575
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.7.0
>Reporter: Bob Hansen
> Attachments: curl_request.txt, dfs_copyfrom_local_traffic.txt
>
>
> Via webhdfs or native HDFS, we can create files with semicolons in their 
> names:
> {code}
> bhansen@::1 /tmp$ hdfs dfs -copyFromLocal /tmp/data 
> "webhdfs://localhost:50070/foo;bar"
> bhansen@::1 /tmp$ hadoop fs -ls /
> Found 1 items
> -rw-r--r--   2 bhansen supergroup  9 2016-06-24 12:20 /foo;bar
> {code}
> Attempting to fetch the file via webhdfs fails:
> {code}
> bhansen@::1 /tmp$ curl -L 
> "http://localhost:50070/webhdfs/v1/foo%3Bbar?user.name=bhansen&op=OPEN"
> {"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File
>  does not exist: /foo\n\tat 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1891)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1832)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1812)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1784)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:542)\n\tat
>  
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)\n\tat
>  
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat
>  
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\n\tat
>  org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\n\tat 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\n\tat 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\n\tat 
> java.security.AccessController.doPrivileged(Native Method)\n\tat 
> javax.security.auth.Subject.doAs(Subject.java:422)\n\tat 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\n\tat
>  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\n"}}
> {code}
> It appears (from the attached TCP dump in curl_request.txt) that the 
> namenode's redirect unescapes the semicolon, and the DataNode's HTTP server 
> is splitting the request at the semicolon, and failing to find the file "foo".
> Interesting side notes:
> * In the attached dfs_copyfrom_local_traffic.txt, you can see the 
> copyFromLocal command writing the data to "foo;bar_COPYING_", which is then 
> redirected and just writes to "foo".  The subsequent rename attempts to 
> rename "foo;bar_COPYING_" to "foo;bar", but has the same parsing bug so 
> effectively renames "foo" to "foo;bar".
> Here is the full range of special characters that we initially started with 
> that led to the minimal reproducer above:
> {code}
> hdfs dfs -copyFromLocal /tmp/data webhdfs://localhost:50070/'~`!@#$%^& 
> ()-_=+|<.>]}",\\\[\{\*\?\;'\''data'
> curl -L 
> "http://localhost:50070/webhdfs/v1/%7E%60%21%40%23%24%25%5E%26+%28%29-_%3D%2B%7C%3C.%3E%5D%7D%22%2C%5C%5B%7B*%3F%3B%27data?user.name=bhansen&op=OPEN&offset=0"
> {code}
> Thanks to [~anatoli.shein] for making a concise reproducer.
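One reason this class of bug is easy to hit: ';' is legal inside a URI path, 
so generic URI builders will not escape it for you, and the client has to 
percent-encode it explicitly. A small self-contained check:
{code}
import java.net.URI;
import java.net.URISyntaxException;

public class SemicolonPath {
  public static void main(String[] args) throws URISyntaxException {
    // The multi-arg URI constructor only quotes characters that are
    // illegal in a path, and ';' is legal, so it passes through raw.
    URI u = new URI("http", null, "localhost", 50070,
        "/webhdfs/v1/foo;bar", "op=OPEN", null);
    System.out.println(u.toASCIIString());
    // -> http://localhost:50070/webhdfs/v1/foo;bar?op=OPEN
  }
}
{code}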






[jira] [Updated] (HDFS-10582) Change deprecated configuration fs.checkpoint.dir to dfs.namenode.checkpoint.dir in HDFS Commands Doc

2016-06-27 Thread Pan Yuxuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pan Yuxuan updated HDFS-10582:
--
Description: HDFS Commands Documentation -importCheckpoint uses the 
deprecated configuration string {code}fs.checkpoint.dir{code}; we can use 
{noformat}dfs.namenode.checkpoint.dir{noformat} instead.  (was: HDFS Commands 
Documentation -importCheckpoint uses the deprecated configuration string 
{noformat}fs.checkpoint.dir{noformat}; we can use 
{noformat}dfs.namenode.checkpoint.dir{noformat} instead.)

> Change deprecated configuration fs.checkpoint.dir to 
> dfs.namenode.checkpoint.dir in HDFS Commands Doc
> -
>
> Key: HDFS-10582
> URL: https://issues.apache.org/jira/browse/HDFS-10582
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Pan Yuxuan
>Priority: Minor
> Attachments: HDFS-10582.patch
>
>
> The HDFS Commands documentation for -importCheckpoint uses the deprecated 
> configuration string {code}fs.checkpoint.dir{code}; we can use 
> {noformat}dfs.namenode.checkpoint.dir{noformat} instead.






[jira] [Updated] (HDFS-10582) Change deprecated configuration fs.checkpoint.dir to dfs.namenode.checkpoint.dir in HDFS Commands Doc

2016-06-27 Thread Pan Yuxuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pan Yuxuan updated HDFS-10582:
--
Attachment: HDFS-10582.patch

> Change deprecated configuration fs.checkpoint.dir to 
> dfs.namenode.checkpoint.dir in HDFS Commands Doc
> -
>
> Key: HDFS-10582
> URL: https://issues.apache.org/jira/browse/HDFS-10582
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Pan Yuxuan
>Priority: Minor
> Attachments: HDFS-10582.patch
>
>
> The HDFS Commands documentation for -importCheckpoint uses the deprecated 
> configuration string {noformat}fs.checkpoint.dir{noformat}; we can use 
> {noformat}dfs.namenode.checkpoint.dir{noformat} instead.






[jira] [Created] (HDFS-10582) Change deprecated configuration fs.checkpoint.dir to dfs.namenode.checkpoint.dir in HDFS Commands Doc

2016-06-27 Thread Pan Yuxuan (JIRA)
Pan Yuxuan created HDFS-10582:
-

 Summary: Change deprecated configuration fs.checkpoint.dir to 
dfs.namenode.checkpoint.dir in HDFS Commands Doc
 Key: HDFS-10582
 URL: https://issues.apache.org/jira/browse/HDFS-10582
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.7.2
Reporter: Pan Yuxuan
Priority: Minor


The HDFS Commands documentation for -importCheckpoint uses the deprecated 
configuration string {noformat}fs.checkpoint.dir{noformat}; we can use 
{noformat}dfs.namenode.checkpoint.dir{noformat} instead.
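For reference, the non-deprecated key as it would appear in hdfs-site.xml (the 
value path is illustrative only):
{code}
<property>
  <name>dfs.namenode.checkpoint.dir</name>
  <!-- Illustrative location; where the checkpoint (secondary)
       namenode stores its checkpoint images. -->
  <value>file:///var/hadoop/dfs/namesecondary</value>
</property>
{code}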






[jira] [Commented] (HDFS-10396) Using -diff option with DistCp may get "Comparison method violates its general contract" exception

2016-06-27 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352202#comment-15352202
 ] 

Jing Zhao commented on HDFS-10396:
--

+1. Thanks for the fix, Yongjun!

> Using -diff option with DistCp may get "Comparison method violates its 
> general contract" exception
> --
>
> Key: HDFS-10396
> URL: https://issues.apache.org/jira/browse/HDFS-10396
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-10396.001.patch
>
>
> Using the -diff option may get the following exception, due to a bug in the 
> comparison method:
> {code}
> 16/04/21 14:34:18 WARN tools.DistCp: Failed to use snapshot diff for distcp
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>   at java.util.TimSort.mergeHi(TimSort.java:868)
>   at java.util.TimSort.mergeAt(TimSort.java:485)
>   at java.util.TimSort.mergeForceCollapse(TimSort.java:426)
>   at java.util.TimSort.sort(TimSort.java:223)
>   at java.util.TimSort.sort(TimSort.java:173)
>   at java.util.Arrays.sort(Arrays.java:659)
>   at org.apache.hadoop.tools.DistCpSync.moveToTarget(DistCpSync.java:293)
>   at org.apache.hadoop.tools.DistCpSync.syncDiff(DistCpSync.java:261)
>   at org.apache.hadoop.tools.DistCpSync.sync(DistCpSync.java:131)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:163)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:122)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> 16/04/21 14:34:18 ERROR tools.DistCp: Exception encountered 
> {code}
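As background, {{TimSort}} throws this error when a comparator breaks the 
total-order contract; a self-contained illustration of that failure class 
(deliberately not the DistCpSync code):
{code}
import java.util.Arrays;
import java.util.Comparator;
import java.util.Random;

public class BrokenComparatorDemo {
  public static void main(String[] args) {
    Integer[] a = new Integer[100000];
    Random r = new Random(42);
    for (int i = 0; i < a.length; i++) {
      a[i] = r.nextInt();  // extreme values make x - y overflow
    }
    // Broken: subtraction can overflow, violating antisymmetry and
    // transitivity; TimSort's merge-time checks can then fail with
    // "Comparison method violates its general contract!".
    Arrays.sort(a, new Comparator<Integer>() {
      @Override
      public int compare(Integer x, Integer y) {
        return x - y;  // safe form: Integer.compare(x, y)
      }
    });
  }
}
{code}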






[jira] [Commented] (HDFS-10568) Reuse ObjectMapper instance in CombinedHostsFileReader and CombinedHostsFileWriter

2016-06-27 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352161#comment-15352161
 ] 

Akira Ajisaka commented on HDFS-10568:
--

Thanks [~linyiqun] for reporting this and creating the patch.
{code}
  READER.readValues(new JsonFactory().createJsonParser(input));
{code}
Can we reuse JsonFactory as well?
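A one-line sketch of that (Jackson 1.x method names, matching the snippet 
above; {{MAPPER}} assumed to be the shared instance from the patch):
{code}
// Reuse the factory the shared mapper already owns instead of
// allocating a new JsonFactory on every read.
READER.readValues(MAPPER.getJsonFactory().createJsonParser(input));
{code}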

> Reuse ObjectMapper instance in CombinedHostsFileReader and 
> CombinedHostsFileWriter
> --
>
> Key: HDFS-10568
> URL: https://issues.apache.org/jira/browse/HDFS-10568
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10568.001.patch
>
>
> The {{ObjectMapper}} instances are not reused in the classes 
> {{CombinedHostsFileReader}} and {{CombinedHostsFileWriter}}. We can reuse 
> them to improve performance.
> Here are related issues: HDFS-9724, HDFS-9768.






[jira] [Commented] (HDFS-10396) Using -diff option with DistCp may get "Comparison method violates its general contract" exception

2016-06-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352134#comment-15352134
 ] 

Hadoop QA commented on HDFS-10396:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
15s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12813989/HDFS-10396.001.patch |
| JIRA Issue | HDFS-10396 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 672c03d96146 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9683eab |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15925/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15925/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Using -diff option with DistCp may get "Comparison method violates its 
> general contract" exception
> --
>
> Key: HDFS-10396
> URL: https://issues.apache.org/jira/browse/HDFS-10396
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-10396.001.patch
>
>
> Using -diff option get 

[jira] [Commented] (HDFS-10544) Balancer doesn't work with IPFailoverProxyProvider

2016-06-27 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352117#comment-15352117
 ] 

Konstantin Shvachko commented on HDFS-10544:


Looking at the test case you added to {{TestDFSUtil.testGetNNUris()}}. Here is 
the essential part of the configuration:
{code}
fs.defaultFS=hdfs://nn2.example.com:9820, 
dfs.namenode.rpc-address=hdfs://nn.example.com:9820,
dfs.nameservices=ns1,ns2,
dfs.namenode.servicerpc-address.ns2=ns2-nn.example.com:9820,
dfs.ha.namenodes.ns1=nn1,nn2,
dfs.namenode.rpc-address.ns1.nn1=ns1-nn1.example.com:9820, 
dfs.namenode.rpc-address.ns1.nn2=ns1-nn2.example.com:9820, 
dfs.client.failover.proxy.provider.ns1=org.apache.hadoop.hdfs.server.namenode.ha.IPFailoverProxyProvider,
{code}
And {{DFSUtil.getInternalNsRpcUris()}} returns {{uris = 
\[hdfs://ns2-nn.example.com:9820, hdfs://nn.example.com:9820\]}}. Not sure if 
this is what you expected.

Also, in the test we know exactly what the URIs returned by 
{{getInternalNsRpcUris()}} should be, so it would make sense to add asserts 
for the values in addition to checking the number of URIs.
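A hedged sketch of what those value asserts could look like ({{uris}} as in 
the test above; JUnit asserts assumed in scope):
{code}
// JUnit static imports (assertEquals/assertTrue) and java.net.URI assumed.
assertEquals(2, uris.size());
assertTrue(uris.contains(new URI("hdfs://ns2-nn.example.com:9820")));
assertTrue(uris.contains(new URI("hdfs://nn.example.com:9820")));
{code}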

> Balancer doesn't work with IPFailoverProxyProvider
> --
>
> Key: HDFS-10544
> URL: https://issues.apache.org/jira/browse/HDFS-10544
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-10544.00.patch, HDFS-10544.01.patch, 
> HDFS-10544.02.patch, HDFS-10544.03.patch
>
>
> Right now {{Balancer}} gets the NN URIs through 
> {{DFSUtil#getNameServiceUris}}, which returns logical URIs when HA is 
> enabled. If {{IPFailoverProxyProvider}} is used, {{Balancer}} will not be 
> able to start.
> I think the bug is at {{DFSUtil#getNameServiceUris}}:
> {code}
> for (String nsId : getNameServiceIds(conf)) {
>   if (HAUtil.isHAEnabled(conf, nsId)) {
> // Add the logical URI of the nameservice.
> try {
>   ret.add(new URI(HdfsConstants.HDFS_URI_SCHEME + "://" + nsId));
> {code}
> The {{if}} clause should also consider whether the {{FailoverProxyProvider}} 
> has {{useLogicalURI}} enabled. If not, {{getNameServiceUris}} should try to 
> resolve the physical URI for this nsId.






[jira] [Commented] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-06-27 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352115#comment-15352115
 ] 

John Zhuge commented on HDFS-6962:
--

To support webhdfs, the REST API for {{CREATE}} and {{MKDIRS}} must be 
extended with a new parameter for the unmasked permission. Let us add the 
support in a separate JIRA if deemed necessary. For now, the 2 new unit tests 
are disabled in {{TestWebHDFSAcl}}.
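If that separate JIRA is filed, the extension might look roughly like this 
({{unmaskedpermission}} is a purely hypothetical parameter name here; 
{{op=MKDIRS}} and {{permission}} are existing WebHDFS parameters):
{code}
# Hypothetical request shape: carry the pre-umask mode alongside the
# masked one so ACL inheritance can use it.
curl -i -X PUT "http://namenode-host:50070/webhdfs/v1/tmp/ACLS/sub?op=MKDIRS&permission=750&unmaskedpermission=777&user.name=hdfs"
{code}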

> ACLs inheritance conflict with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.1.patch
>
>
> In hdfs-site.xml:
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir  /tmp/ACLS
> 2/ set default ACLs on this directory rwx access for group readwrite and user 
> toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ check ACLs /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything OK!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx the inheritance mask is rwx, so no mask
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ check ACLs /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has the rwx ACL but only r-x is 
> effective, because the mask is r-x (mask::r-x) even though the default mask 
> for inheritance is set to default:mask::rwx on /tmp/ACLS/
> 6/ Modify hdfs-site.xml and restart the namenode
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group and other, except the POSIX 
> owner) with the group mask of the dfs.umaskmode property when creating a 
> directory with inherited ACLs.






[jira] [Commented] (HDFS-10396) Using -diff option with DistCp may get "Comparison method violates its general contract" exception

2016-06-27 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352097#comment-15352097
 ] 

Yongjun Zhang commented on HDFS-10396:
--

Hi [~jingzhao],

Would you please help take a look at this quick/small fix? I intended to 
write a test, but we may save the time since the fix is quite obvious.

Thanks a lot!


> Using -diff option with DistCp may get "Comparison method violates its 
> general contract" exception
> --
>
> Key: HDFS-10396
> URL: https://issues.apache.org/jira/browse/HDFS-10396
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-10396.001.patch
>
>
> Using the -diff option may get the following exception, due to a bug in the 
> comparison method:
> {code}
> 16/04/21 14:34:18 WARN tools.DistCp: Failed to use snapshot diff for distcp
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>   at java.util.TimSort.mergeHi(TimSort.java:868)
>   at java.util.TimSort.mergeAt(TimSort.java:485)
>   at java.util.TimSort.mergeForceCollapse(TimSort.java:426)
>   at java.util.TimSort.sort(TimSort.java:223)
>   at java.util.TimSort.sort(TimSort.java:173)
>   at java.util.Arrays.sort(Arrays.java:659)
>   at org.apache.hadoop.tools.DistCpSync.moveToTarget(DistCpSync.java:293)
>   at org.apache.hadoop.tools.DistCpSync.syncDiff(DistCpSync.java:261)
>   at org.apache.hadoop.tools.DistCpSync.sync(DistCpSync.java:131)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:163)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:122)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> 16/04/21 14:34:18 ERROR tools.DistCp: Exception encountered 
> {code}






[jira] [Updated] (HDFS-10396) Using -diff option with DistCp may get "Comparison method violates its general contract" exception

2016-06-27 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-10396:
-
Status: Patch Available  (was: Open)

> Using -diff option with DistCp may get "Comparison method violates its 
> general contract" exception
> --
>
> Key: HDFS-10396
> URL: https://issues.apache.org/jira/browse/HDFS-10396
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-10396.001.patch
>
>
> Using the -diff option may get the following exception, due to a bug in the 
> comparison method:
> {code}
> 16/04/21 14:34:18 WARN tools.DistCp: Failed to use snapshot diff for distcp
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>   at java.util.TimSort.mergeHi(TimSort.java:868)
>   at java.util.TimSort.mergeAt(TimSort.java:485)
>   at java.util.TimSort.mergeForceCollapse(TimSort.java:426)
>   at java.util.TimSort.sort(TimSort.java:223)
>   at java.util.TimSort.sort(TimSort.java:173)
>   at java.util.Arrays.sort(Arrays.java:659)
>   at org.apache.hadoop.tools.DistCpSync.moveToTarget(DistCpSync.java:293)
>   at org.apache.hadoop.tools.DistCpSync.syncDiff(DistCpSync.java:261)
>   at org.apache.hadoop.tools.DistCpSync.sync(DistCpSync.java:131)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:163)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:122)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> 16/04/21 14:34:18 ERROR tools.DistCp: Exception encountered 
> {code}






[jira] [Updated] (HDFS-10396) Using -diff option with DistCp may get "Comparison method violates its general contract" exception

2016-06-27 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-10396:
-
Attachment: HDFS-10396.001.patch

> Using -diff option with DistCp may get "Comparison method violates its 
> general contract" exception
> --
>
> Key: HDFS-10396
> URL: https://issues.apache.org/jira/browse/HDFS-10396
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-10396.001.patch
>
>
> Using the -diff option may get the following exception, due to a bug in the 
> comparison method:
> {code}
> 16/04/21 14:34:18 WARN tools.DistCp: Failed to use snapshot diff for distcp
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>   at java.util.TimSort.mergeHi(TimSort.java:868)
>   at java.util.TimSort.mergeAt(TimSort.java:485)
>   at java.util.TimSort.mergeForceCollapse(TimSort.java:426)
>   at java.util.TimSort.sort(TimSort.java:223)
>   at java.util.TimSort.sort(TimSort.java:173)
>   at java.util.Arrays.sort(Arrays.java:659)
>   at org.apache.hadoop.tools.DistCpSync.moveToTarget(DistCpSync.java:293)
>   at org.apache.hadoop.tools.DistCpSync.syncDiff(DistCpSync.java:261)
>   at org.apache.hadoop.tools.DistCpSync.sync(DistCpSync.java:131)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:163)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:122)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> 16/04/21 14:34:18 ERROR tools.DistCp: Exception encountered 
> {code}






[jira] [Commented] (HDFS-10534) NameNode WebUI should display DataNode usage rate with a certain percentile

2016-06-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352001#comment-15352001
 ] 

Andrew Wang commented on HDFS-10534:


bq. This is also a metric that could be calculated in client-side JS from 
existing information.

To clarify this a bit more, the NN webUI is mostly built in JS right now. Is it 
possible to calculate the histogram in JS as well? Then we don't need to add 
this new metric.

I imagine that other monitoring tools consuming JMX are already doing similar 
things for their alerting thresholds, since the raw data is already available. 
Unlike our RPC latency histograms, the metric added in this patch doesn't 
expose any new information, which limits its utility. Monitoring tools 
probably already have business logic for defining these alerting thresholds, 
and doing it there is more expressive too.
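For what it's worth, the percentile computation itself is tiny; a 
self-contained sketch of the nearest-rank math (shown in Java here; a JS 
version in the webUI would be analogous):
{code}
import java.util.Arrays;

public class UsagePercentile {
  /** Nearest-rank percentile of DN usage ratios, p in (0, 100]. */
  static double percentile(double[] usage, double p) {
    double[] s = usage.clone();
    Arrays.sort(s);
    int rank = (int) Math.ceil(p / 100.0 * s.length); // 1-based rank
    return s[Math.max(0, rank - 1)];
  }

  public static void main(String[] args) {
    double[] usage = { 0.10, 0.35, 0.40, 0.42, 0.90 };
    System.out.println(percentile(usage, 95)); // 0.9
  }
}
{code}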

> NameNode WebUI should display DataNode usage rate with a certain percentile
> ---
>
> Key: HDFS-10534
> URL: https://issues.apache.org/jira/browse/HDFS-10534
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Zhe Zhang
>Assignee: Kai Sasaki
> Attachments: HDFS-10534.01.patch, HDFS-10534.02.patch, 
> HDFS-10534.03.patch, HDFS-10534.04.patch, HDFS-10534.05.patch, Screen Shot 
> 2016-06-23 at 6.25.50 AM.png
>
>
> In addition to *Min/Median/Max*, another meaningful metric for cluster 
> balance is the DN usage rate at a certain percentile (e.g. 90 or 95). We 
> should add a config option, and another field on the NN WebUI, to display 
> this.






[jira] [Commented] (HDFS-10580) DiskBalancer : Make use of unused methods in GreedyPlanner to print debug info

2016-06-27 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351968#comment-15351968
 ] 

Anu Engineer commented on HDFS-10580:
-

[~linyiqun] Thanks for the patch. The original reason we added printQueue was 
to debug how the plan is going, but I personally found it very noisy, so after 
most of the development was over I removed it from all code paths. It is not 
that I am against it; I personally did benefit from using printQueue. Can you 
run some tests with a larger number of disks (stuff under TestPlanner.java) 
and see for yourself whether it is too noisy?

Otherwise the patch looks good, and as I said it is really useful for 
debugging purposes, but I would like you to be the judge of the 
signal-to-noise ratio for future developers. I have too much context to decide 
whether this has any useful info for other developers.

So run some tests and look at the logs of those tests, and then let me know 
what you think.

From a technical / code point of view, the patch looks good to me.


> DiskBalancer : Make use of unused methods in GreedyPlanner to print debug info
> --
>
> Key: HDFS-10580
> URL: https://issues.apache.org/jira/browse/HDFS-10580
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10580.001.patch
>
>
> There are two unused methods, {{skipVolume}} and {{printQueue}}, in class 
> {{GreedyPlanner}}. They were added in HDFS-9469 but are not used. These 
> methods print detailed debug info, so we can make use of them.
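A hedged sketch of how they might be wired in (the call site and argument are 
assumptions, not the actual patch):
{code}
// Guard the dump so the planner stays quiet unless debug logging is on.
if (LOG.isDebugEnabled()) {
  printQueue(currentQueue);  // dumps per-volume queue state to the log
}
{code}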






[jira] [Commented] (HDFS-9805) TCP_NODELAY not set before SASL handshake in data transfer pipeline

2016-06-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351934#comment-15351934
 ] 

Hadoop QA commented on HDFS-9805:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 
679 unchanged - 2 fixed = 682 total (was 681) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12813826/HDFS-9805.005.patch |
| JIRA Issue | HDFS-9805 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux e8bb3f61e9df 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9683eab |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Updated] (HDFS-9805) TCP_NODELAY not set before SASL handshake in data transfer pipeline

2016-06-27 Thread Gary Helmling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling updated HDFS-9805:

Attachment: HDFS-9805.005.patch

Another update addressing test and checkstyle issues:

* adds dfs.data.transfer.server.tcpnodelay to hdfs-default.xml to fix 
TestHdfsConfigFields
* fixes the checkstyle line-length issues

Of the other reported test failures:
* TestOpenFilesWithSnapshot and TestRollingFileSystemSinkWithHdfs both pass for 
me locally
* TestOfflineEditsViewer seems to already be failing on trunk
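
For reference, the new hdfs-default.xml entry looks roughly like this (the 
description wording below is a paraphrase, not necessarily the committed text):

{code}
<property>
  <name>dfs.data.transfer.server.tcpnodelay</name>
  <value>true</value>
  <description>
    If true, set TCP_NODELAY on server-side sockets in the DN -> DN
    data transfer path, including before the SASL handshake.
  </description>
</property>
{code}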

> TCP_NODELAY not set before SASL handshake in data transfer pipeline
> ---
>
> Key: HDFS-9805
> URL: https://issues.apache.org/jira/browse/HDFS-9805
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Attachments: HDFS-9805.002.patch, HDFS-9805.003.patch, 
> HDFS-9805.004.patch, HDFS-9805.005.patch
>
>
> There are a few places in the DN -> DN block transfer pipeline where 
> TCP_NODELAY is not set before doing a SASL handshake:
> * in {{DataNode.DataTransfer::run()}}
> * in {{DataXceiver::replaceBlock()}}
> * in {{DataXceiver::writeBlock()}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8897) Loadbalancer always exits with : java.io.IOException: Another Balancer is running.. Exiting ...

2016-06-27 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351649#comment-15351649
 ] 

Biju Nair commented on HDFS-8897:
-

Is there a way to submit a patch? We had the same issue on our clusters.

> Loadbalancer always exits with : java.io.IOException: Another Balancer is 
> running..  Exiting ...
> 
>
> Key: HDFS-8897
> URL: https://issues.apache.org/jira/browse/HDFS-8897
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.7.1
> Environment: Centos 6.6
>Reporter: LINTE
>
> When the balancer is launched, it should test whether there is already a 
> /system/balancer.id file in HDFS.
> Even when the file doesn't exist, the balancer refuses to run: 
> 15/08/14 16:35:12 INFO balancer.Balancer: namenodes  = [hdfs://sandbox/, 
> hdfs://sandbox]
> 15/08/14 16:35:12 INFO balancer.Balancer: parameters = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=10.0, max idle iteration 
> = 5, number of nodes to be excluded = 0, number of nodes to be included = 0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> 15/08/14 16:35:14 INFO balancer.KeyManager: Block token params received from 
> NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
> 15/08/14 16:35:14 INFO block.BlockTokenSecretManager: Setting block keys
> 15/08/14 16:35:14 INFO balancer.KeyManager: Update block keys every 2hrs, 
> 30mins, 0sec
> 15/08/14 16:35:14 INFO block.BlockTokenSecretManager: Setting block keys
> 15/08/14 16:35:14 INFO balancer.KeyManager: Block token params received from 
> NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
> 15/08/14 16:35:14 INFO block.BlockTokenSecretManager: Setting block keys
> 15/08/14 16:35:14 INFO balancer.KeyManager: Update block keys every 2hrs, 
> 30mins, 0sec
> java.io.IOException: Another Balancer is running..  Exiting ...
> Aug 14, 2015 4:35:14 PM  Balancing took 2.408 seconds
> Looking at the audit log file when trying to run the balancer, the balancer 
> creates /system/balancer.id and then deletes it on exiting ... 
> 2015-08-14 16:37:45,844 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=null   perm=null   proto=rpc
> 2015-08-14 16:37:45,900 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=create  
> src=/system/balancer.id dst=null   perm=hdfs:hadoop:rw-r-  
> proto=rpc
> 2015-08-14 16:37:45,919 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=null   perm=null   proto=rpc
> 2015-08-14 16:37:46,090 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=null   perm=null   proto=rpc
> 2015-08-14 16:37:46,112 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=null   perm=null   proto=rpc
> 2015-08-14 16:37:46,117 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=delete  
> src=/system/balancer.id dst=null   perm=null   proto=rpc
> The error seems to be located in 
> org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java. 
> The function checkAndMarkRunning returns null even if /system/balancer.id 
> doesn't exist before entering this function; if it exists, then it is deleted 
> and the balancer exits with the same error.
> 
>   private OutputStream checkAndMarkRunning() throws IOException {
> try {
>   if (fs.exists(idPath)) {
> // try appending to it so that it will fail fast if another balancer 
> is
> // running.
> IOUtils.closeStream(fs.append(idPath));
> fs.delete(idPath, true);
>   }
>   final FSDataOutputStream fsout = fs.create(idPath, false);
>   // mark balancer idPath to be deleted during filesystem closure
>   fs.deleteOnExit(idPath);
>   if (write2IdFile) {
> fsout.writeBytes(InetAddress.getLocalHost().getHostName());
> fsout.hflush();
>   }
>   return fsout;
> } catch(RemoteException e) {
>   
> if(AlreadyBeingCreatedException.class.getName().equals(e.getClassName())){
> return null;
>   } else {
> throw e;
>   }
> }
>   }
> 
> Regards
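
As a quick check before launching the balancer, you can look for a stale lock 
file at the path mentioned above (illustrative commands):

{code}
hdfs dfs -ls /system/balancer.id     # does a stale lock file exist?
hdfs dfs -cat /system/balancer.id    # which host wrote it (if write2IdFile was on)?
{code}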



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: 

[jira] [Commented] (HDFS-9805) TCP_NODELAY not set before SASL handshake in data transfer pipeline

2016-06-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351641#comment-15351641
 ] 

Hadoop QA commented on HDFS-9805:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project: The patch generated 27 new 
+ 679 unchanged - 2 fixed = 706 total (was 681) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
|   | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12813805/HDFS-9805.004.patch |
| JIRA Issue | HDFS-9805 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux df7ac815c41f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9683eab |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15922/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
| unit | 

[jira] [Commented] (HDFS-10396) Using -diff option with DistCp may get "Comparison method violates its general contract" exception

2016-06-27 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351624#comment-15351624
 ] 

Yongjun Zhang commented on HDFS-10396:
--

Sorry [~rezaf_2000], I was on vacation for some time. Will post a patch soon.


> Using -diff option with DistCp may get "Comparison method violates its 
> general contract" exception
> --
>
> Key: HDFS-10396
> URL: https://issues.apache.org/jira/browse/HDFS-10396
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>
> Using the -diff option can produce the following exception, due to a bug in 
> the comparison operator:
> {code}
> 16/04/21 14:34:18 WARN tools.DistCp: Failed to use snapshot diff for distcp
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>   at java.util.TimSort.mergeHi(TimSort.java:868)
>   at java.util.TimSort.mergeAt(TimSort.java:485)
>   at java.util.TimSort.mergeForceCollapse(TimSort.java:426)
>   at java.util.TimSort.sort(TimSort.java:223)
>   at java.util.TimSort.sort(TimSort.java:173)
>   at java.util.Arrays.sort(Arrays.java:659)
>   at org.apache.hadoop.tools.DistCpSync.moveToTarget(DistCpSync.java:293)
>   at org.apache.hadoop.tools.DistCpSync.syncDiff(DistCpSync.java:261)
>   at org.apache.hadoop.tools.DistCpSync.sync(DistCpSync.java:131)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:163)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:122)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
> 16/04/21 14:34:18 ERROR tools.DistCp: Exception encountered 
> {code}
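
As background, TimSort raises this exception when it detects a comparator that 
violates the {{Comparator}} contract (antisymmetry/transitivity). A minimal 
standalone illustration of one common way to break the contract, unrelated to 
the actual DistCpSync comparator:

{code}
import java.util.Comparator;

public class ContractDemo {
  public static void main(String[] args) {
    // Broken: subtraction can overflow, so the sign of the result can be wrong.
    Comparator<Integer> broken = (a, b) -> a - b;
    // Integer.MIN_VALUE - 1 overflows to a positive value, i.e. the broken
    // comparator claims MIN_VALUE > 1. TimSort detects such inconsistencies
    // and throws "Comparison method violates its general contract!".
    System.out.println(broken.compare(Integer.MIN_VALUE, 1));   // > 0 (wrong)
    // Correct: Integer.compare never overflows.
    System.out.println(Integer.compare(Integer.MIN_VALUE, 1));  // < 0 (right)
  }
}
{code}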



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10441) libhdfs++: HA namenode support

2016-06-27 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-10441:
---
Attachment: HDFS-10441.HDFS-8707.007.patch

Some minor improvements:
-The HA namenode tracker is now a class, with accessors added so there is no 
more direct access to bool flags
-More comments
-NamenodeOperations::Connect now takes the resolved endpoints vector by 
value (RpcEngine does as well).  It can get called in asynchronous contexts, so 
it's worth the small, and rare, copy overhead to avoid dangling ref/pointer 
issues.

> libhdfs++: HA namenode support
> --
>
> Key: HDFS-10441
> URL: https://issues.apache.org/jira/browse/HDFS-10441
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10441.HDFS-8707.000.patch, 
> HDFS-10441.HDFS-8707.002.patch, HDFS-10441.HDFS-8707.003.patch, 
> HDFS-10441.HDFS-8707.004.patch, HDFS-10441.HDFS-8707.005.patch, 
> HDFS-10441.HDFS-8707.006.patch, HDFS-10441.HDFS-8707.007.patch, 
> HDFS-8707.HDFS-10441.001.patch
>
>
> If a cluster is HA enabled then do proper failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10567) Improve plan command help message

2016-06-27 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351496#comment-15351496
 ] 

Xiaobing Zhou commented on HDFS-10567:
--

The test failures are not related. They pass locally (latest trunk) except 
TestOfflineEditsViewer#testGenerated, which is reported in HDFS-10572.


> Improve plan command help message
> -
>
> Key: HDFS-10567
> URL: https://issues.apache.org/jira/browse/HDFS-10567
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Lei (Eddy) Xu
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10567-HDFS-1312.000.patch
>
>
> {code}
> --bandwidth  Maximum disk bandwidth to be consumed by
>   diskBalancer. e.g. 10
> --maxerror   Describes how many errors can be
>   tolerated while copying between a pair
>   of disks.
> --outFile to write output to, if not
>   specified defaults will be used.
> --plan   creates a plan for datanode.
> --thresholdPercentagePercentage skew that wetolerate before
>   diskbalancer starts working e.g. 10
> --v   Print out the summary of the plan on
>   console
> {code}
> We should 
> * Put the unit into {{--bandwidth}}, or its help message. Is it an integer or 
> a float / double number? That is not clear from the CLI message.
> * Give more details about {{--plan}}. It is not clear what the {{}} is 
> for.
> * {{--thresholdPercentage}} has the typo {{wetolerate}} in its help message. 
> Also, it needs to indicate that it is the difference in space utilization 
> between two disks / volumes.  Is it an integer or a float / double number?
> Thanks.
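
For illustration, the revised help might read something like this (wording and 
units below are suggestions only, not committed text):

{code}
--bandwidth <arg>            Maximum disk bandwidth in MB/s consumed by
                             diskBalancer, as an integer. E.g. 10
--thresholdPercentage <arg>  Percentage difference in space utilization
                             between two disks that we tolerate before
                             diskBalancer starts working, as an integer.
                             E.g. 10
{code}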



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9805) TCP_NODELAY not set before SASL handshake in data transfer pipeline

2016-06-27 Thread Gary Helmling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling updated HDFS-9805:

Attachment: HDFS-9805.004.patch

Updated patch adding:

* a new configuration property (dfs.data.transfer.server.tcpnodelay), 
defaulting to true, controlling the TCP_NODELAY setting in the DN -> DN 
transfer path
* a test case checking that TCP_NODELAY was enabled on all sockets used when 
the relevant config settings are enabled

Note that I had to modify {{DataNode#newSocket()}} in this patch in order to 
support the test case.  Prior to this, {{newSocket()}} was not using the 
configured socket factory, instead creating sockets directly.  This seems like 
a change we would want anyway; just calling it out.
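
The pattern applied at those call sites is roughly the following sketch 
({{socketFactory}}, {{conf}} and {{saslConnect()}} stand in for the 
surrounding code, so this is not the literal patch):

{code}
// Enable TCP_NODELAY before the SASL handshake so its small round-trip
// messages are not held back by Nagle's algorithm.
Socket sock = socketFactory.createSocket();
sock.setTcpNoDelay(conf.getBoolean(
    "dfs.data.transfer.server.tcpnodelay", true));
saslConnect(sock);  // placeholder for the SASL negotiation step
{code}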

> TCP_NODELAY not set before SASL handshake in data transfer pipeline
> ---
>
> Key: HDFS-9805
> URL: https://issues.apache.org/jira/browse/HDFS-9805
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Attachments: HDFS-9805.002.patch, HDFS-9805.003.patch, 
> HDFS-9805.004.patch
>
>
> There are a few places in the DN -> DN block transfer pipeline where 
> TCP_NODELAY is not set before doing a SASL handshake:
> * in {{DataNode.DataTransfer::run()}}
> * in {{DataXceiver::replaceBlock()}}
> * in {{DataXceiver::writeBlock()}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10581) Redundant table on Datanodes page when no nodes under decomissioning

2016-06-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10581:
---
Status: In Progress  (was: Patch Available)

> Redundant table on Datanodes page when no nodes under decomissioning
> 
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: ui, web-ui
> Attachments: HDFS-10581.001.patch, after.jpg, before.jpg
>
>
> A minor user-experience improvement on the namenode UI. I propose to improve 
> it from [^before.jpg] to [^after.jpg].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10581) Redundant table on Datanodes page when no nodes under decomissioning

2016-06-27 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351275#comment-15351275
 ] 

Weiwei Yang commented on HDFS-10581:


Hello [~rushabh.shah]

Yeah, we can remove the header in this case as well; that will give a nicer, 
cleaner page. I'll revise the patch to do so. Thanks for your comment.

> Redundant table on Datanodes page when no nodes under decomissioning
> 
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: ui, web-ui
> Attachments: HDFS-10581.001.patch, after.jpg, before.jpg
>
>
> A minor user-experience improvement on the namenode UI. I propose to improve 
> it from [^before.jpg] to [^after.jpg].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10581) Redundant table on Datanodes page when no nodes under decomissioning

2016-06-27 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351255#comment-15351255
 ] 

Rushabh S Shah commented on HDFS-10581:
---

In the after case, do we even need the Decommissioning header at all?

> Redundant table on Datanodes page when no nodes under decomissioning
> 
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: ui, web-ui
> Attachments: HDFS-10581.001.patch, after.jpg, before.jpg
>
>
> A minor user-experience improvement on the namenode UI. I propose to improve 
> it from [^before.jpg] to [^after.jpg].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10576) DiskBalancer followup work items

2016-06-27 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10576:
-
Description: This is a master JIRA for followup work items for disk 
balancer.  (was: This is a master JIRA for future work items for disk balancer.)

> DiskBalancer followup work items
> 
>
> Key: HDFS-10576
> URL: https://issues.apache.org/jira/browse/HDFS-10576
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.9.0
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: 2.9.0
>
>
> This is a master JIRA for followup work items for disk balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10576) DiskBalancer followup work items

2016-06-27 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10576:
-
Summary: DiskBalancer followup work items  (was: DiskBalancer future work 
items)

> DiskBalancer followup work items
> 
>
> Key: HDFS-10576
> URL: https://issues.apache.org/jira/browse/HDFS-10576
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.9.0
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: 2.9.0
>
>
> This is a master JIRA for future work items for disk balancer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8738) Limit Exceptions thrown by DataNode when a client makes socket connection and sends an empty message

2016-06-27 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351235#comment-15351235
 ] 

Arpit Agarwal commented on HDFS-8738:
-

Looks like a duplicate of HDFS-9572, which fixed this problem.

> Limit Exceptions thrown by DataNode when a client makes socket connection and 
> sends an empty message
> 
>
> Key: HDFS-8738
> URL: https://issues.apache.org/jira/browse/HDFS-8738
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Rajesh Kartha
>Assignee: Rajesh Kartha
>Priority: Minor
> Attachments: HDFS-8738.001.patch
>
>
> When a client creates a socket connection to the Datanode and sends an empty 
> message, the datanode logs have exceptions like these:
> 2015-07-08 20:00:55,427 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> bidev17.rtp.ibm.com:50010:DataXceiver error processing unknown operation  
> src: /127.0.0.1:41508 dst: /127.0.0.1:50010
> java.io.EOFException
> at java.io.DataInputStream.readShort(DataInputStream.java:315)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
> at java.lang.Thread.run(Thread.java:745)
> 2015-07-08 20:00:56,671 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> bidev17.rtp.ibm.com:50010:DataXceiver error processing unknown operation  
> src: /127.0.0.1:41509 dst: /127.0.0.1:50010
> java.io.EOFException
> at java.io.DataInputStream.readShort(DataInputStream.java:315)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
> at java.lang.Thread.run(Thread.java:745)
> These can fill up the logs; this was recently noticed with an Ambari 2.1-based 
> install, which tries to check whether the datanode is up.
> It can be easily reproduced with a simple Java client creating a Socket 
> connection:
> import java.io.DataOutputStream;
> import java.io.IOException;
> import java.net.Socket;
> import java.net.UnknownHostException;
> public class EmptyMessageClient {
>     public static void main(String[] args) {
>         // Connect to the DataNode transfer port, send nothing, and close.
>         try (Socket dnClient = new Socket("127.0.0.1", 50010);
>              DataOutputStream os =
>                      new DataOutputStream(dnClient.getOutputStream())) {
>             os.writeBytes("");
>         } catch (UnknownHostException e) {
>             e.printStackTrace();
>         } catch (IOException e) {
>             e.printStackTrace();
>         }
>     }
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10574) webhdfs fails with filenames including semicolons

2016-06-27 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen resolved HDFS-10574.
---
Resolution: Invalid

Ah, it appears that my test cluster was running an old version of HDFS.  My 
reproducer also succeeds on trunk.

Thanks, [~yuanbo], for looking into it and setting me straight.  I apologize 
for adding to the noise floor.

http://izquotes.com/quotes-pictures/quote-the-boy-cried-wolf-wolf-and-the-villagers-came-out-to-help-him-aesop-205890.jpg

> webhdfs fails with filenames including semicolons
> -
>
> Key: HDFS-10574
> URL: https://issues.apache.org/jira/browse/HDFS-10574
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.7.0
>Reporter: Bob Hansen
>
> Via webhdfs or native HDFS, we can create files with semicolons in their 
> names:
> {code}
> bhansen@::1 /tmp$ hdfs dfs -copyFromLocal /tmp/data 
> "webhdfs://localhost:50070/foo;bar"
> bhansen@::1 /tmp$ hadoop fs -ls /
> Found 1 items
> -rw-r--r--   2 bhansen supergroup  9 2016-06-24 12:20 /foo;bar
> {code}
> Attempting to fetch the file via webhdfs fails:
> {code}
> bhansen@::1 /tmp$ curl -L 
> "http://localhost:50070/webhdfs/v1/foo%3Bbar?user.name=bhansen=OPEN;
> {"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File
>  does not exist: /foo\n\tat 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1891)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1832)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1812)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1784)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:542)\n\tat
>  
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)\n\tat
>  
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat
>  
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\n\tat
>  org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\n\tat 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\n\tat 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\n\tat 
> java.security.AccessController.doPrivileged(Native Method)\n\tat 
> javax.security.auth.Subject.doAs(Subject.java:422)\n\tat 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\n\tat
>  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\n"}}
> {code}
> It appears (from the attached TCP dump in curl_request.txt) that the 
> namenode's redirect unescapes the semicolon, and the DataNode's HTTP server 
> is splitting the request at the semicolon, and failing to find the file "foo".
> Interesting side notes:
> * In the attached dfs_copyfrom_local_traffic.txt, you can see the 
> copyFromLocal command writing the data to "foo;bar_COPYING_", which is then 
> redirected and just writes to "foo".  The subsequent rename attempts to 
> rename "foo;bar_COPYING_" to "foo;bar", but has the same parsing bug so 
> effectively renames "foo" to "foo;bar".
> Here is the full range of special characters that we initially started with 
> that led to the minimal reproducer above:
> {code}
> hdfs dfs -copyFromLocal /tmp/data webhdfs://localhost:50070/'~`!@#$%^& 
> ()-_=+|<.>]}",\\\[\{\*\?\;'\''data'
> curl -L 
> "http://localhost:50070/webhdfs/v1/%7E%60%21%40%23%24%25%5E%26+%28%29-_%3D%2B%7C%3C.%3E%5D%7D%22%2C%5C%5B%7B*%3F%3B%27data?user.name=bhansen=OPEN=0;
> {code}
> Thanks to [~anatoli.shein] for making a concise reproducer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10572) Fix TestOfflineEditsViewer#testGenerated

2016-06-27 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15350932#comment-15350932
 ] 

Yiqun Lin edited comment on HDFS-10572 at 6/27/16 12:46 PM:


It seems the content differs between the files {{edits}} and 
{{editsReparsed}}. I parsed the binary file {{editsReparsed}} back to XML 
and found some differences. Here they are, from txid 84 to 86.
Content in {{edits}}:
{code}
<RECORD>
  <OPCODE>OP_REASSIGN_LEASE</OPCODE>
  <DATA>
    <TXID>84</TXID>
    <LEASEHOLDER>DFSClient_NONMAPREDUCE_428966708_1</LEASEHOLDER>
    <PATH>/hard-lease-recovery-test</PATH>
    <NEWHOLDER>HDFS_NameNode</NEWHOLDER>
  </DATA>
</RECORD>
<RECORD>
  <OPCODE>OP_CLOSE</OPCODE>
  <DATA>
    <TXID>85</TXID>
    <LENGTH>0</LENGTH>
    <INODEID>0</INODEID>
    <PATH>/hard-lease-recovery-test</PATH>
    <REPLICATION>1</REPLICATION>
    <MTIME>1467028963024</MTIME>
    <ATIME>1467028960848</ATIME>
    <BLOCKSIZE>512</BLOCKSIZE>
    <CLIENT_NAME></CLIENT_NAME>
    <CLIENT_MACHINE></CLIENT_MACHINE>
    <OVERWRITE>false</OVERWRITE>
    <BLOCK>
      <BLOCK_ID>1073741837</BLOCK_ID>
      <NUM_BYTES>11</NUM_BYTES>
      <GENSTAMP>1014</GENSTAMP>
    </BLOCK>
    <PERMISSION_STATUS>
      <USERNAME>zhexuan</USERNAME>
      <GROUPNAME>supergroup</GROUPNAME>
      <MODE>420</MODE>
    </PERMISSION_STATUS>
  </DATA>
</RECORD>
<RECORD>
  <OPCODE>OP_ADD_CACHE_POOL</OPCODE>
  <DATA>
    <TXID>86</TXID>
    <POOLNAME>pool1</POOLNAME>
    <OWNERNAME>zhexuan</OWNERNAME>
    <GROUPNAME>staff</GROUPNAME>
    <MODE>493</MODE>
    <LIMIT>9223372036854775807</LIMIT>
    <MAXRELATIVEEXPIRY>2305843009213693951</MAXRELATIVEEXPIRY>
    <RPC_CLIENTID>03f2daa2-e04f-4b8f-aa09-5d21e14024bd</RPC_CLIENTID>
    <RPC_CALLID>81</RPC_CALLID>
  </DATA>
</RECORD>
{code}

Content parsed from file {{editsReparsed}}:
{code}
<RECORD>
  <OPCODE>OP_REASSIGN_LEASE</OPCODE>
  <DATA>
    <TXID>84</TXID>
    <LEASEHOLDER>DFSClient_NONMAPREDUCE_929984910_1</LEASEHOLDER>
    <PATH>/hard-lease-recovery-test</PATH>
    <NEWHOLDER>HDFS_NameNode</NEWHOLDER>
  </DATA>
</RECORD>
<RECORD>
  <OPCODE>OP_SET_GENSTAMP_V2</OPCODE>
  <DATA>
    <TXID>85</TXID>
    <GENSTAMPV2>1015</GENSTAMPV2>
  </DATA>
</RECORD>
<RECORD>
  <OPCODE>OP_REASSIGN_LEASE</OPCODE>
  <DATA>
    <TXID>86</TXID>
    <LEASEHOLDER>HDFS_NameNode</LEASEHOLDER>
    <PATH>/hard-lease-recovery-test</PATH>
    <NEWHOLDER>HDFS_NameNode</NEWHOLDER>
  </DATA>
</RECORD>
{code}
We can make use of this information; it should help us fix the issue.
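
For anyone reproducing the round trip, the offline edits viewer can drive it 
(file names here are illustrative):

{code}
hdfs oev -p xml -i edits -o edits.xml
hdfs oev -p binary -i edits.xml -o editsReparsed
hdfs oev -p xml -i editsReparsed -o editsReparsed.xml
{code}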


was (Author: linyiqun):
It seems the content differs between the files {{edits}} and 
{{editsReparsed}}. I parsed the binary file {{editsReparsed}} back to XML 
and found some differences. Here they are, from txid 84 to 86.
Content in {{edits}}:
{no-format}
<RECORD>
  <OPCODE>OP_REASSIGN_LEASE</OPCODE>
  <DATA>
    <TXID>84</TXID>
    <LEASEHOLDER>DFSClient_NONMAPREDUCE_428966708_1</LEASEHOLDER>
    <PATH>/hard-lease-recovery-test</PATH>
    <NEWHOLDER>HDFS_NameNode</NEWHOLDER>
  </DATA>
</RECORD>
<RECORD>
  <OPCODE>OP_CLOSE</OPCODE>
  <DATA>
    <TXID>85</TXID>
    <LENGTH>0</LENGTH>
    <INODEID>0</INODEID>
    <PATH>/hard-lease-recovery-test</PATH>
    <REPLICATION>1</REPLICATION>
    <MTIME>1467028963024</MTIME>
    <ATIME>1467028960848</ATIME>
    <BLOCKSIZE>512</BLOCKSIZE>
    <CLIENT_NAME></CLIENT_NAME>
    <CLIENT_MACHINE></CLIENT_MACHINE>
    <OVERWRITE>false</OVERWRITE>
    <BLOCK>
      <BLOCK_ID>1073741837</BLOCK_ID>
      <NUM_BYTES>11</NUM_BYTES>
      <GENSTAMP>1014</GENSTAMP>
    </BLOCK>
    <PERMISSION_STATUS>
      <USERNAME>zhexuan</USERNAME>
      <GROUPNAME>supergroup</GROUPNAME>
      <MODE>420</MODE>
    </PERMISSION_STATUS>
  </DATA>
</RECORD>
<RECORD>
  <OPCODE>OP_ADD_CACHE_POOL</OPCODE>
  <DATA>
    <TXID>86</TXID>
    <POOLNAME>pool1</POOLNAME>
    <OWNERNAME>zhexuan</OWNERNAME>
    <GROUPNAME>staff</GROUPNAME>
    <MODE>493</MODE>
    <LIMIT>9223372036854775807</LIMIT>
    <MAXRELATIVEEXPIRY>2305843009213693951</MAXRELATIVEEXPIRY>
    <RPC_CLIENTID>03f2daa2-e04f-4b8f-aa09-5d21e14024bd</RPC_CLIENTID>
    <RPC_CALLID>81</RPC_CALLID>
  </DATA>
</RECORD>
{no-format}

Content parsed from file {{editsReparsed}}:
{no-format}
<RECORD>
  <OPCODE>OP_REASSIGN_LEASE</OPCODE>
  <DATA>
    <TXID>84</TXID>
    <LEASEHOLDER>DFSClient_NONMAPREDUCE_929984910_1</LEASEHOLDER>
    <PATH>/hard-lease-recovery-test</PATH>
    <NEWHOLDER>HDFS_NameNode</NEWHOLDER>
  </DATA>
</RECORD>
<RECORD>
  <OPCODE>OP_SET_GENSTAMP_V2</OPCODE>
  <DATA>
    <TXID>85</TXID>
    <GENSTAMPV2>1015</GENSTAMPV2>
  </DATA>
</RECORD>
<RECORD>
  <OPCODE>OP_REASSIGN_LEASE</OPCODE>
  <DATA>
    <TXID>86</TXID>
    <LEASEHOLDER>HDFS_NameNode</LEASEHOLDER>
    <PATH>/hard-lease-recovery-test</PATH>
    <NEWHOLDER>HDFS_NameNode</NEWHOLDER>
  </DATA>
</RECORD>
{no-format}
We can make use of this information; it should help us fix the issue.

> Fix TestOfflineEditsViewer#testGenerated
> 
>
> Key: HDFS-10572
> URL: https://issues.apache.org/jira/browse/HDFS-10572
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: newbie, test
>Reporter: Xiaoyu Yao
>Assignee: Hanisha Koneru
>
> The test has been failing consistently on trunk recently. This ticket is open 
> to fix the test and avoid false alarms on Jenkins. Figuring out which recent 
> commit caused this failure would be a good start. 
>  
> {code}
> ---
>  T E S T S
> ---
> Running org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
> Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 15.646 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
> testGenerated(org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer)
>   Time elapsed: 3.623 sec  <<< FAILURE!
> java.lang.AssertionError: Generated edits and reparsed (bin to XML to bin) 
> should be same
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer.testGenerated(TestOfflineEditsViewer.java:125)
> Results :
> Failed tests: 
>   TestOfflineEditsViewer.testGenerated:125 Generated edits and reparsed (bin 
> to XML to bin) should be same
> Tests run: 5, Failures: 1, Errors: 0, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10572) Fix TestOfflineEditsViewer#testGenerated

2016-06-27 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15350932#comment-15350932
 ] 

Yiqun Lin commented on HDFS-10572:
--

It seems the content differs between the files {{edits}} and 
{{editsReparsed}}. I parsed the binary file {{editsReparsed}} back to XML 
and found some differences. Here they are, from txid 84 to 86.
Content in {{edits}}:
{no-format}
<RECORD>
  <OPCODE>OP_REASSIGN_LEASE</OPCODE>
  <DATA>
    <TXID>84</TXID>
    <LEASEHOLDER>DFSClient_NONMAPREDUCE_428966708_1</LEASEHOLDER>
    <PATH>/hard-lease-recovery-test</PATH>
    <NEWHOLDER>HDFS_NameNode</NEWHOLDER>
  </DATA>
</RECORD>
<RECORD>
  <OPCODE>OP_CLOSE</OPCODE>
  <DATA>
    <TXID>85</TXID>
    <LENGTH>0</LENGTH>
    <INODEID>0</INODEID>
    <PATH>/hard-lease-recovery-test</PATH>
    <REPLICATION>1</REPLICATION>
    <MTIME>1467028963024</MTIME>
    <ATIME>1467028960848</ATIME>
    <BLOCKSIZE>512</BLOCKSIZE>
    <CLIENT_NAME></CLIENT_NAME>
    <CLIENT_MACHINE></CLIENT_MACHINE>
    <OVERWRITE>false</OVERWRITE>
    <BLOCK>
      <BLOCK_ID>1073741837</BLOCK_ID>
      <NUM_BYTES>11</NUM_BYTES>
      <GENSTAMP>1014</GENSTAMP>
    </BLOCK>
    <PERMISSION_STATUS>
      <USERNAME>zhexuan</USERNAME>
      <GROUPNAME>supergroup</GROUPNAME>
      <MODE>420</MODE>
    </PERMISSION_STATUS>
  </DATA>
</RECORD>
<RECORD>
  <OPCODE>OP_ADD_CACHE_POOL</OPCODE>
  <DATA>
    <TXID>86</TXID>
    <POOLNAME>pool1</POOLNAME>
    <OWNERNAME>zhexuan</OWNERNAME>
    <GROUPNAME>staff</GROUPNAME>
    <MODE>493</MODE>
    <LIMIT>9223372036854775807</LIMIT>
    <MAXRELATIVEEXPIRY>2305843009213693951</MAXRELATIVEEXPIRY>
    <RPC_CLIENTID>03f2daa2-e04f-4b8f-aa09-5d21e14024bd</RPC_CLIENTID>
    <RPC_CALLID>81</RPC_CALLID>
  </DATA>
</RECORD>
{no-format}

Content parsed from file {{editsReparsed}}:
{no-format}
<RECORD>
  <OPCODE>OP_REASSIGN_LEASE</OPCODE>
  <DATA>
    <TXID>84</TXID>
    <LEASEHOLDER>DFSClient_NONMAPREDUCE_929984910_1</LEASEHOLDER>
    <PATH>/hard-lease-recovery-test</PATH>
    <NEWHOLDER>HDFS_NameNode</NEWHOLDER>
  </DATA>
</RECORD>
<RECORD>
  <OPCODE>OP_SET_GENSTAMP_V2</OPCODE>
  <DATA>
    <TXID>85</TXID>
    <GENSTAMPV2>1015</GENSTAMPV2>
  </DATA>
</RECORD>
<RECORD>
  <OPCODE>OP_REASSIGN_LEASE</OPCODE>
  <DATA>
    <TXID>86</TXID>
    <LEASEHOLDER>HDFS_NameNode</LEASEHOLDER>
    <PATH>/hard-lease-recovery-test</PATH>
    <NEWHOLDER>HDFS_NameNode</NEWHOLDER>
  </DATA>
</RECORD>
{no-format}
We can make use of this information; it should help us fix the issue.

> Fix TestOfflineEditsViewer#testGenerated
> 
>
> Key: HDFS-10572
> URL: https://issues.apache.org/jira/browse/HDFS-10572
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: newbie, test
>Reporter: Xiaoyu Yao
>Assignee: Hanisha Koneru
>
> The test has been failing consistently on trunk recently. This ticket is open 
> to fix the test and avoid false alarms on Jenkins. Figuring out which recent 
> commit caused this failure would be a good start. 
>  
> {code}
> ---
>  T E S T S
> ---
> Running org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
> Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 15.646 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
> testGenerated(org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer)
>   Time elapsed: 3.623 sec  <<< FAILURE!
> java.lang.AssertionError: Generated edits and reparsed (bin to XML to bin) 
> should be same
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer.testGenerated(TestOfflineEditsViewer.java:125)
> Results :
> Failed tests: 
>   TestOfflineEditsViewer.testGenerated:125 Generated edits and reparsed (bin 
> to XML to bin) should be same
> Tests run: 5, Failures: 1, Errors: 0, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10343) BlockManager#createLocatedBlocks may return blocks on failed storages

2016-06-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15350925#comment-15350925
 ] 

Hadoop QA commented on HDFS-10343:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 12 new + 184 unchanged - 0 fixed = 196 total (was 184) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12813139/HDFS-10343.001.patch |
| JIRA Issue | HDFS-10343 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6901a4b547d6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 73615a7 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15921/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15921/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15921/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15921/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> BlockManager#createLocatedBlocks may return blocks on failed storages
> -
>
> 

[jira] [Commented] (HDFS-10530) BlockManager reconstruction work scheduling should correctly adhere to EC block placement policy

2016-06-27 Thread GAO Rui (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15350895#comment-15350895
 ] 

GAO Rui commented on HDFS-10530:


I've investigated the failure of 
{{hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped}}. It turns 
out that reconstruction work caused by the block placement policy interfered 
with {{Balancer}}'s attempts to balance the utilization of DatanodeStorages. 
We can indeed have a conflict between the block placement policy and the 
balancer policy. In the scenario of 
{{hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped}}, the last 
added datanode gets filled up with an internal block or parity block of every 
block group, per {{BlockPlacementPolicyRackFaultTolerant}}, so that datanode 
is always recognized as {{over-utilized}} by {{Balancer}}. As a result, 
{{Balancer}} may never be able to finish its work successfully.

I suggest we make {{Balancer}} tolerate a certain percentage (say 10%) of 
datanodes being {{over-utilized}}: after {{Balancer#runOneIteration()}} has 
run 5 times, if less than 10% of the datanodes are {{over-utilized}}, we let 
{{Balancer}} finish its work successfully. [~zhz], could you share your 
opinions? Thank you.
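
Roughly what I have in mind, as a sketch (the names and constants below are 
made up for illustration, not an actual patch):

{code}
// After enough iterations, tolerate a small residual set of over-utilized
// datanodes instead of iterating forever.
static final int MAX_STRICT_ITERATIONS = 5;
static final double OVER_UTILIZED_TOLERANCE = 0.10;

if (iteration >= MAX_STRICT_ITERATIONS
    && overUtilized.size() < OVER_UTILIZED_TOLERANCE * allDatanodes.size()) {
  return ExitStatus.SUCCESS;  // treat the remaining skew as acceptable
}
{code}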


> BlockManager reconstruction work scheduling should correctly adhere to EC 
> block placement policy
> 
>
> Key: HDFS-10530
> URL: https://issues.apache.org/jira/browse/HDFS-10530
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: GAO Rui
>Assignee: GAO Rui
> Attachments: HDFS-10530.1.patch
>
>
> This issue was found by [~tfukudom].
> Under RS-DEFAULT-6-3-64k EC policy, 
> 1. Create an EC file; the file was written to all 5 racks (2 DNs each) of 
> the cluster.
> 2. Reconstruction work would be scheduled if a 6th rack is added. 
> 3. Adding a 7th or more racks, however, will not trigger reconstruction 
> work. 
> Based on the default EC block placement policy defined in 
> “BlockPlacementPolicyRackFaultTolerant.java”, an EC file should be able to 
> be scheduled to distribute across 9 racks if possible.
> In *BlockManager#isPlacementPolicySatisfied(BlockInfo storedBlock)*, 
> *numReplicas* of striped blocks should probably be *getRealTotalBlockNum()*, 
> instead of *getRealDataBlockNum()*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10343) BlockManager#createLocatedBlocks may return blocks on failed storages

2016-06-27 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated HDFS-10343:
---
Status: Patch Available  (was: Open)

> BlockManager#createLocatedBlocks may return blocks on failed storages
> -
>
> Key: HDFS-10343
> URL: https://issues.apache.org/jira/browse/HDFS-10343
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Kuhu Shukla
> Attachments: HDFS-10343.001.patch
>
>
> Storage state is ignored when building the machines list.  Failed storage 
> removal is not immediate so clients may be directed to bad locations.  The 
> client recovers but it's less than ideal.
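
A sketch of the kind of check this implies in 
{{BlockManager#createLocatedBlocks}} (illustrative only, not the attached 
patch):

{code}
// Skip replicas whose storage has already been marked FAILED when building
// the machines list, so clients are not handed locations that are about to
// be pruned.
final List<DatanodeStorageInfo> machines = new ArrayList<>();
for (DatanodeStorageInfo storage : blocksMap.getStorages(block)) {
  if (storage.getState() != DatanodeStorage.State.FAILED) {
    machines.add(storage);
  }
}
{code}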



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2016-06-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15350734#comment-15350734
 ] 

Hadoop QA commented on HDFS-7859:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project: The patch generated 17 new 
+ 1081 unchanged - 1 fixed = 1098 total (was 1082) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
50s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
7s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Class 
org.apache.hadoop.hdfs.protocol.datatransfer.ReplaceDatanodeOnFailure$Policy 
defines non-transient non-serializable instance field condition  In 
ReplaceDatanodeOnFailure.java:instance field condition  In 
ReplaceDatanodeOnFailure.java |
| Failed junit tests | hadoop.hdfs.TestBlockStoragePolicy |
|   | hadoop.hdfs.TestDFSInotifyEventInputStream |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.server.namenode.TestStartup |
|   | hadoop.hdfs.TestDFSStorageStateRecovery |
|   | hadoop.hdfs.server.namenode.TestNameNodeRecovery |
|   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyConsiderLoad |
|   | 

[jira] [Commented] (HDFS-8738) Limit Exceptions thrown by DataNode when a client makes socket connection and sends an empty message

2016-06-27 Thread Pan Yuxuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15350667#comment-15350667
 ] 

Pan Yuxuan commented on HDFS-8738:
--

+1; we get this error when using the Ambari Metrics monitor to check the DN 
process. Can we downgrade the log level to DEBUG when the exception is an 
instance of EOFException?
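
Something along these lines in {{DataXceiver}} would do it (a sketch; 
{{logXceiverError}} and {{localAddress}} are placeholder names, not the 
actual code):

{code}
private void logXceiverError(String opName, Throwable t) {
  String msg = localAddress + ":DataXceiver error processing "
      + opName + " operation";
  if (t instanceof EOFException) {
    // A connection closed before sending an op (e.g. a port-probing
    // health check) is expected noise, so keep it at DEBUG.
    LOG.debug(msg, t);
  } else {
    LOG.error(msg, t);
  }
}
{code}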

> Limit Exceptions thrown by DataNode when a client makes socket connection and 
> sends an empty message
> 
>
> Key: HDFS-8738
> URL: https://issues.apache.org/jira/browse/HDFS-8738
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Rajesh Kartha
>Assignee: Rajesh Kartha
>Priority: Minor
> Attachments: HDFS-8738.001.patch
>
>
> When a client creates a socket connection to the Datanode and sends an empty 
> message, the datanode logs have exceptions like these:
> 2015-07-08 20:00:55,427 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> bidev17.rtp.ibm.com:50010:DataXceiver error processing unknown operation  
> src: /127.0.0.1:41508 dst: /127.0.0.1:50010
> java.io.EOFException
> at java.io.DataInputStream.readShort(DataInputStream.java:315)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
> at java.lang.Thread.run(Thread.java:745)
> 2015-07-08 20:00:56,671 ERROR datanode.DataNode (DataXceiver.java:run(278)) - 
> bidev17.rtp.ibm.com:50010:DataXceiver error processing unknown operation  
> src: /127.0.0.1:41509 dst: /127.0.0.1:50010
> java.io.EOFException
> at java.io.DataInputStream.readShort(DataInputStream.java:315)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:227)
> at java.lang.Thread.run(Thread.java:745)
> These can fill up the logs; this was recently noticed with an Ambari 2.1-based 
> install, which tries to check whether the datanode is up.
> It can be easily reproduced with a simple Java client creating a Socket 
> connection:
> import java.io.DataOutputStream;
> import java.io.IOException;
> import java.net.Socket;
> import java.net.UnknownHostException;
> public class EmptyMessageClient {
>     public static void main(String[] args) {
>         // Connect to the DataNode transfer port, send nothing, and close.
>         try (Socket dnClient = new Socket("127.0.0.1", 50010);
>              DataOutputStream os =
>                      new DataOutputStream(dnClient.getOutputStream())) {
>             os.writeBytes("");
>         } catch (UnknownHostException e) {
>             e.printStackTrace();
>         } catch (IOException e) {
>             e.printStackTrace();
>         }
>     }
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10581) Redundant table on Datanodes page when no nodes under decomissioning

2016-06-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10581:
---
Status: Patch Available  (was: Open)

A trivial patch uploaded.

> Redundant table on Datanodes page when no nodes under decomissioning
> 
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: ui, web-ui
> Attachments: HDFS-10581.001.patch, after.jpg, before.jpg
>
>
> A minor user-experience improvement on the namenode UI. I propose to improve 
> it from [^before.jpg] to [^after.jpg].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10581) Redundant table on Datanodes page when no nodes under decommissioning

2016-06-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10581:
---
Labels: ui web-ui  (was: )

> Redundant table on Datanodes page when no nodes under decommissioning
> 
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: ui, web-ui
> Attachments: HDFS-10581.001.patch, after.jpg, before.jpg
>
>
> A minor user experience improvement on the namenode UI. Propose to improve it 
> from [^before.jpg] to [^after.jpg].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10581) Redundant table on Datanodes page when no nodes under decommissioning

2016-06-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10581:
---
Attachment: HDFS-10581.001.patch

> Redundant table on Datanodes page when no nodes under decommissioning
> 
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
> Attachments: HDFS-10581.001.patch, after.jpg, before.jpg
>
>
> A minor user experience improvement on the namenode UI. Propose to improve it 
> from [^before.jpg] to [^after.jpg].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10581) Redundant table on Datanodes page when no nodes under decommissioning

2016-06-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10581:
---
Summary: Redundant table on Datanodes page when no nodes under 
decommissioning  (was: Redundant table on Datanodes page when there are no 
nodes under decommissioning)

> Redundant table on Datanodes page when no nodes under decommissioning
> 
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
> Attachments: after.jpg, before.jpg
>
>
> A minor user experience improvement on the namenode UI. Propose to improve it 
> from [^before.jpg] to [^after.jpg].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10581) Redundant table on Datanodes page when there are no nodes under decommissioning

2016-06-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10581:
---
Description: A minor user experience improvement on the namenode UI. Propose to 
improve it from [before.jpg] to [after.jpg].  (was: A minor user experience 
improvement on the namenode UI.)

> Redundant table on Datanodes page when there are no nodes under decommissioning
> -
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
> Attachments: after.jpg, before.jpg
>
>
> A minor user experience improvement on the namenode UI. Propose to improve it 
> from [before.jpg] to [after.jpg].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10581) Redundant table on Datanodes page when there are no nodes under decommissioning

2016-06-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10581:
---
Description: A minor user experience improvement on the namenode UI. Propose to 
improve it from [^before.jpg] to [^after.jpg].  (was: A minor user experience 
improvement on the namenode UI. Propose to improve it from [before.jpg] to 
[after.jpg].)

> Redundant table on Datanodes page when there are no nodes under decommissioning
> -
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
> Attachments: after.jpg, before.jpg
>
>
> A minor user experience improvement on the namenode UI. Propose to improve it 
> from [^before.jpg] to [^after.jpg].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10581) Redundant table on Datanodes page when there are no nodes under decommissioning

2016-06-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10581:
---
Attachment: after.jpg

> Redundant table on Datanodes page when there are no nodes under decommissioning
> -
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
> Attachments: after.jpg, before.jpg
>
>
> A minor user experience improvement on the namenode UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10581) Redundant table on Datanodes page when there are no nodes under decommissioning

2016-06-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10581:
---
Attachment: before.jpg

> Redundant table on Datanodes page when there are no nodes under decommissioning
> -
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
> Attachments: after.jpg, before.jpg
>
>
> A minor user experience improvement on the namenode UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10581) Redundant table on Datanodes page when there are no nodes under decommissioning

2016-06-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned HDFS-10581:
--

Assignee: Weiwei Yang

> Redundant table on Datanodes page when there are no nodes under decommissioning
> -
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
> Attachments: after.jpg, before.jpg
>
>
> A minor user experience improvement on the namenode UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10581) Redundant table on Datanodes page when there are no nodes under decommissioning

2016-06-27 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10581:
---
Description: A minor user experience improvement on the namenode UI.  (was: A 
minor user experience improvement on the namenode UI)

> Redundant table on Datanodes page when there are no nodes under decommissioning
> -
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, ui
>Reporter: Weiwei Yang
>Priority: Trivial
>
> A minor user experience improvement on the namenode UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10581) Redundant table on Datanodes page when there are no nodes under decommissioning

2016-06-27 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-10581:
--

 Summary: Redundant table on Datanodes page when there are no nodes 
under decommissioning
 Key: HDFS-10581
 URL: https://issues.apache.org/jira/browse/HDFS-10581
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs, ui
Reporter: Weiwei Yang
Priority: Trivial


A minor user experience improvement on the namenode UI



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10574) webhdfs fails with filenames including semicolons

2016-06-27 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15350568#comment-15350568
 ] 

Yuanbo Liu commented on HDFS-10574:
---

[~bobhansen] I wrote a test case and ran it on Hadoop trunk and Hadoop 2.7.0. 
Unfortunately, I could not reproduce this issue. Have you installed Knox in 
your cluster?
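
For context, a minimal sketch of that kind of round trip, assuming a cluster 
with webhdfs enabled on localhost:50070 (the class name and test path are 
illustrative):

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SemicolonRoundTrip {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(
        URI.create("webhdfs://localhost:50070"), new Configuration());
    Path p = new Path("/foo;bar");

    // Write through webhdfs, then read the same path back.
    try (FSDataOutputStream out = fs.create(p, true)) {
      out.writeBytes("some data");
    }
    // On an affected cluster, the open() below fails with a
    // FileNotFoundException for /foo instead of returning the file's bytes.
    try (FSDataInputStream in = fs.open(p)) {
      System.out.println(in.read());
    }
  }
}
{code}

On my clusters this round trip succeeds, which is why I wonder whether 
something in front of HDFS (such as Knox) is rewriting the URL.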

> webhdfs fails with filenames including semicolons
> -
>
> Key: HDFS-10574
> URL: https://issues.apache.org/jira/browse/HDFS-10574
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.7.0
>Reporter: Bob Hansen
>
> Via webhdfs or native HDFS, we can create files with semicolons in their 
> names:
> {code}
> bhansen@::1 /tmp$ hdfs dfs -copyFromLocal /tmp/data 
> "webhdfs://localhost:50070/foo;bar"
> bhansen@::1 /tmp$ hadoop fs -ls /
> Found 1 items
> -rw-r--r--   2 bhansen supergroup  9 2016-06-24 12:20 /foo;bar
> {code}
> Attempting to fetch the file via webhdfs fails:
> {code}
> bhansen@::1 /tmp$ curl -L 
> "http://localhost:50070/webhdfs/v1/foo%3Bbar?user.name=bhansen=OPEN;
> {"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFoundException","message":"File
>  does not exist: /foo\n\tat 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1891)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1832)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1812)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1784)\n\tat
>  
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:542)\n\tat
>  
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)\n\tat
>  
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat
>  
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\n\tat
>  org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\n\tat 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\n\tat 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\n\tat 
> java.security.AccessController.doPrivileged(Native Method)\n\tat 
> javax.security.auth.Subject.doAs(Subject.java:422)\n\tat 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\n\tat
>  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\n"}}
> {code}
> It appears (from the attached TCP dump in curl_request.txt) that the 
> namenode's redirect unescapes the semicolon, and that the DataNode's HTTP 
> server then splits the request at the semicolon and fails to find the file 
> "foo".
> Interesting side notes:
> * In the attached dfs_copyfrom_local_traffic.txt, you can see the 
> copyFromLocal command writing the data to "foo;bar_COPYING_", which is then 
> redirected and just writes to "foo".  The subsequent rename attempts to 
> rename "foo;bar_COPYING_" to "foo;bar", but has the same parsing bug so 
> effectively renames "foo" to "foo;bar".
> Here is the full range of special characters that we initially started with 
> that led to the minimal reproducer above:
> {code}
> hdfs dfs -copyFromLocal /tmp/data webhdfs://localhost:50070/'~`!@#$%^& 
> ()-_=+|<.>]}",\\\[\{\*\?\;'\''data'
> curl -L 
> "http://localhost:50070/webhdfs/v1/%7E%60%21%40%23%24%25%5E%26+%28%29-_%3D%2B%7C%3C.%3E%5D%7D%22%2C%5C%5B%7B*%3F%3B%27data?user.name=bhansen=OPEN=0;
> {code}
> Thanks to [~anatoli.shein] for making a concise reproducer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org