[jira] [Updated] (HDFS-8769) Erasure Coding: unit test for SequentialBlockGroupIdGenerator

2015-07-24 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8769:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7285
   Status: Resolved  (was: Patch Available)

+1. Committed to the feature branch. Thanks [~rakeshr] for the contribution!

> Erasure Coding: unit test for SequentialBlockGroupIdGenerator
> -
>
> Key: HDFS-8769
> URL: https://issues.apache.org/jira/browse/HDFS-8769
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Rakesh R
> Fix For: HDFS-7285
>
> Attachments: HDFS-8769-HDFS-7285-00.patch, 
> HDFS-8769-HDFS-7285-01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641341#comment-14641341
 ] 

Hadoop QA commented on HDFS-8180:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 36s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 36s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 38s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 29s | The applied patch generated  2 
new checkstyle issues (total was 0, now 2). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 19s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 21s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 21s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 160m 36s | Tests failed in hadoop-hdfs. |
| | | 227m 55s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.ha.TestStandbyIsHot |
|   | hadoop.hdfs.TestDistributedFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12747089/HDFS-8180-3.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 83fe34a |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11835/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11835/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11835/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11835/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11835/console |


This message was automatically generated.

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Jakob Homan
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8816) Improve visualization for the Datanode tab in the NN UI

2015-07-24 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641293#comment-14641293
 ] 

Ravi Prakash commented on HDFS-8816:


Also, I'm not a fan of using moment.js to decrease the granularity of the "Last 
contact" information. If I had a timestamp there, it would make it a lot easier 
for me to grep through logs.

> Improve visualization for the Datanode tab in the NN UI
> ---
>
> Key: HDFS-8816
> URL: https://issues.apache.org/jira/browse/HDFS-8816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8816.000.patch, HDFS-8816.001.patch, 
> HDFS-8816.002.patch, HDFS-8816.png, Screen Shot 2015-07-23 at 10.24.24 AM.png
>
>
> The information of the datanode tab in the NN UI is clogged. This jira 
> proposes to improve the visualization of the datanode tab in the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8816) Improve visualization for the Datanode tab in the NN UI

2015-07-24 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641280#comment-14641280
 ] 

Ravi Prakash commented on HDFS-8816:


- With DIFP "Failed Volumes" would be very useful to have. Is there some other 
place in the UI we can get that information?
- Hopefully there'll be some way to sort based on "Admin state" after HDFS-6407?
- Although I don't particularly care for the "Non DFS Used" myself, could you 
please confirm you meant to remove it?

> Improve visualization for the Datanode tab in the NN UI
> ---
>
> Key: HDFS-8816
> URL: https://issues.apache.org/jira/browse/HDFS-8816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8816.000.patch, HDFS-8816.001.patch, 
> HDFS-8816.002.patch, HDFS-8816.png, Screen Shot 2015-07-23 at 10.24.24 AM.png
>
>
> The information of the datanode tab in the NN UI is clogged. This jira 
> proposes to improve the visualization of the datanode tab in the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8816) Improve visualization for the Datanode tab in the NN UI

2015-07-24 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-8816:
---
Attachment: HDFS-8816.png

Hi Haohui!

At a lower zoom, the capacity becomes indented awkwardly. Also, there's no 
indication of the percentage in the capacity display.

> Improve visualization for the Datanode tab in the NN UI
> ---
>
> Key: HDFS-8816
> URL: https://issues.apache.org/jira/browse/HDFS-8816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8816.000.patch, HDFS-8816.001.patch, 
> HDFS-8816.002.patch, HDFS-8816.png, Screen Shot 2015-07-23 at 10.24.24 AM.png
>
>
> The information of the datanode tab in the NN UI is clogged. This jira 
> proposes to improve the visualization of the datanode tab in the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8816) Improve visualization for the Datanode tab in the NN UI

2015-07-24 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641248#comment-14641248
 ] 

Ravi Prakash commented on HDFS-8816:


Thanks for the change, Haohui! I'll review and get back today.

> Improve visualization for the Datanode tab in the NN UI
> ---
>
> Key: HDFS-8816
> URL: https://issues.apache.org/jira/browse/HDFS-8816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8816.000.patch, HDFS-8816.001.patch, 
> HDFS-8816.002.patch, Screen Shot 2015-07-23 at 10.24.24 AM.png
>
>
> The information of the datanode tab in the NN UI is clogged. This jira 
> proposes to improve the visualization of the datanode tab in the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8695) OzoneHandler : Add Bucket REST Interface

2015-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641241#comment-14641241
 ] 

Hadoop QA commented on HDFS-8695:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  15m  7s | Findbugs (version ) appears to 
be broken on HDFS-7240. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 31s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 38s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 17s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 32s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 29s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m  3s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 160m 57s | Tests failed in hadoop-hdfs. |
| | | 201m 44s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyIsHot |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12747098/hdfs-8695-HDFS-7240.002.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7240 / ef128ee |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11834/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11834/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11834/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11834/console |


This message was automatically generated.

> OzoneHandler : Add Bucket REST Interface
> 
>
> Key: HDFS-8695
> URL: https://issues.apache.org/jira/browse/HDFS-8695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: hdfs-8695-HDFS-7240.001.patch, 
> hdfs-8695-HDFS-7240.002.patch
>
>
> Add Bucket REST interface into Ozone server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8820) Enable RPC Congestion control by default

2015-07-24 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641240#comment-14641240
 ] 

Arpit Agarwal commented on HDFS-8820:
-

Thanks [~mingma].

bq. Are you going to enable both fair call queue and rpc backoff?
No, I don't intend to enable the fair call queue by default for now.

bq. Optionally we might want to adjust dfs.client.retry.max.attempts setting so 
that it can retry more in the case of NN congestion.
Interesting, thanks for the hint. I agree a maximum of 90 seconds may be a 
little short but not sure what is the right default for a range of deployments. 
We could file a separate Jira to change this default although I probably won't 
be able to test out alternatives myself. If anyone else is curious about the 
math: initial backoff  = 500ms, maximum backoff = 15s. sum(.5, 1, 2, 4, 8, 15 x 
5) ~ 90 seconds.
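
A minimal sketch of that arithmetic (hypothetical class name; it assumes only 
what the comment states: backoff doubles from 500ms, is capped at 15s, and runs 
for 10 attempts):

{code}
// Sketch only: sums the per-attempt backoff under the assumptions above.
public class BackoffSum {
  public static void main(String[] args) {
    double backoff = 0.5;     // initial backoff, seconds
    final double cap = 15.0;  // maximum backoff, seconds
    double total = 0;
    for (int attempt = 0; attempt < 10; attempt++) {
      total += Math.min(backoff, cap);
      backoff *= 2;           // exponential backoff
    }
    System.out.println(total + "s");  // prints 90.5s
  }
}
{code}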

Initially we'll change the default just for HDFS, so I moved this under HDFS.

> Enable RPC Congestion control by default
> 
>
> Key: HDFS-8820
> URL: https://issues.apache.org/jira/browse/HDFS-8820
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> We propose enabling RPC congestion control introduced by HADOOP-10597 by 
> default.
> We enabled it on a couple of large clusters a few weeks ago and it has helped 
> keep the namenodes responsive under load.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7858) Improve HA Namenode Failover detection on the client

2015-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641238#comment-14641238
 ] 

Hadoop QA commented on HDFS-7858:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  21m 46s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 35s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 43s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m  0s | Site still builds. |
| {color:green}+1{color} | checkstyle |   2m  2s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 31s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   4m 26s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 16s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 158m 42s | Tests failed in hadoop-hdfs. |
| | | 231m 57s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.TestDistributedFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12747095/HDFS-7858.10.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / d19d187 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11833/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11833/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11833/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11833/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11833/console |


This message was automatically generated.

> Improve HA Namenode Failover detection on the client
> 
>
> Key: HDFS-7858
> URL: https://issues.apache.org/jira/browse/HDFS-7858
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7858.1.patch, HDFS-7858.10.patch, 
> HDFS-7858.2.patch, HDFS-7858.2.patch, HDFS-7858.3.patch, HDFS-7858.4.patch, 
> HDFS-7858.5.patch, HDFS-7858.6.patch, HDFS-7858.7.patch, HDFS-7858.8.patch, 
> HDFS-7858.9.patch
>
>
> In an HA deployment, clients are configured with the hostnames of both the 
> Active and Standby Namenodes. Clients will first try one of the NNs 
> (non-deterministically), and if it's a standby NN, it will respond to the 
> client to retry the request on the other Namenode.
> If the client happens to talk to the Standby first, and the standby is 
> undergoing some GC / is busy, then those clients might not get a response 
> soon enough to try the other NN.
> Proposed approach to solve this:
> 1) Since ZooKeeper is already used as the failover controller, the clients 
> could talk to ZK and find out which is the active namenode before contacting 
> it.
> 2) Long-lived DFSClients would have a ZK watch configured which fires when 
> there is a failover, so they do not have to query ZK every time to find out 
> the active NN.
> 3) Clients can also cache the last active NN in the user's home directory 
> (~/.lastNN) so that short-lived clients can try that Namenode first before 
> querying ZK (see the sketch after this list).
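
A minimal sketch of the ~/.lastNN cache from 3) (hypothetical helper, not from 
any posted patch):

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical helper: cache the last known active NN in ~/.lastNN so
// short-lived clients can try it first before querying ZK.
class LastNNCache {
  private static final Path CACHE =
      Paths.get(System.getProperty("user.home"), ".lastNN");

  static String read() {
    try {
      return new String(Files.readAllBytes(CACHE), StandardCharsets.UTF_8).trim();
    } catch (IOException e) {
      return null;  // no cache yet; fall back to querying ZK
    }
  }

  static void write(String nnAddress) throws IOException {
    Files.write(CACHE, nnAddress.getBytes(StandardCharsets.UTF_8));
  }
}
{code}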



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HDFS-8820) Enable RPC Congestion control by default

2015-07-24 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal moved HADOOP-12250 to HDFS-8820:
--

Target Version/s: 2.8.0  (was: 2.8.0)
 Component/s: (was: ipc)
  namenode
 Key: HDFS-8820  (was: HADOOP-12250)
 Project: Hadoop HDFS  (was: Hadoop Common)

> Enable RPC Congestion control by default
> 
>
> Key: HDFS-8820
> URL: https://issues.apache.org/jira/browse/HDFS-8820
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> We propose enabling RPC congestion control introduced by HADOOP-10597 by 
> default.
> We enabled it on a couple of large clusters a few weeks ago and it has helped 
> keep the namenodes responsive under load.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8768) Erasure Coding: block group ID displayed in WebUI is not consistent with fsck

2015-07-24 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8768:

Parent Issue: HDFS-8031  (was: HDFS-7285)

> Erasure Coding: block group ID displayed in WebUI is not consistent with fsck
> -
>
> Key: HDFS-8768
> URL: https://issues.apache.org/jira/browse/HDFS-8768
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: GAO Rui
> Attachments: Screen Shot 2015-07-14 at 15.33.08.png, 
> screen-shot-with-HDFS-8779-patch.PNG
>
>
> This is duplicated by [HDFS-8779].
> For example, in the WebUI (usually namenode port 50070), an erasure-coded 
> file with one block group was displayed as in the attached screenshot [^Screen 
> Shot 2015-07-14 at 15.33.08.png]. But with the fsck command, the block group of 
> the same file was displayed as: {{0. 
> BP-1130999596-172.23.38.10-1433791629728:blk_-9223372036854740160_3384 
> len=6438256640}}
> After checking block file names on the datanodes, we believe the WebUI may have 
> a problem displaying erasure coding block groups.
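
A minimal sketch of how a striped block ID relates to its group ID (this 
assumes, per the HDFS-7285 design, that the low 4 bits of the negative block ID 
carry the block's index within its group; hypothetical class name):

{code}
// Sketch only: assumes the low 4 bits of a striped block ID encode the
// block's index within its group, per the HDFS-7285 design.
public class BlockGroupId {
  static final long GROUP_INDEX_MASK = 0xF;

  static long groupId(long blockId) {
    return blockId & ~GROUP_INDEX_MASK;
  }

  static int indexInGroup(long blockId) {
    return (int) (blockId & GROUP_INDEX_MASK);
  }

  public static void main(String[] args) {
    long id = -9223372036854740160L;  // from the fsck output above
    System.out.println(groupId(id) + " / index " + indexInGroup(id));
  }
}
{code}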



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8768) Erasure Coding: block group ID displayed in WebUI is not consistent with fsck

2015-07-24 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641204#comment-14641204
 ] 

Zhe Zhang commented on HDFS-8768:
-

I think we should close this JIRA and rely on the solution of HDFS-8779. We 
need a solution for randomly generated IDs anyway. [~sinall] [~walter.k.su] let 
me know if you agree.

> Erasure Coding: block group ID displayed in WebUI is not consistent with fsck
> -
>
> Key: HDFS-8768
> URL: https://issues.apache.org/jira/browse/HDFS-8768
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: GAO Rui
> Attachments: Screen Shot 2015-07-14 at 15.33.08.png, 
> screen-shot-with-HDFS-8779-patch.PNG
>
>
> This is duplicated by [HDFS-8779].
> For example, in the WebUI (usually namenode port 50070), an erasure-coded 
> file with one block group was displayed as in the attached screenshot [^Screen 
> Shot 2015-07-14 at 15.33.08.png]. But with the fsck command, the block group of 
> the same file was displayed as: {{0. 
> BP-1130999596-172.23.38.10-1433791629728:blk_-9223372036854740160_3384 
> len=6438256640}}
> After checking block file names on the datanodes, we believe the WebUI may have 
> a problem displaying erasure coding block groups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8199) Erasure Coding: System test of creating ECZone and EC files.

2015-07-24 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641197#comment-14641197
 ] 

Zhe Zhang commented on HDFS-8199:
-

Moving system test JIRAs as follow-ons.

> Erasure Coding: System test of creating ECZone and EC files.
> 
>
> Key: HDFS-8199
> URL: https://issues.apache.org/jira/browse/HDFS-8199
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>Assignee: Yong Zhang
> Attachments: HDFS-8199.000.patch
>
>
> System test of creating ECZone and EC files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8199) Erasure Coding: System test of creating ECZone and EC files.

2015-07-24 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8199:

Parent Issue: HDFS-8031  (was: HDFS-7285)

> Erasure Coding: System test of creating ECZone and EC files.
> 
>
> Key: HDFS-8199
> URL: https://issues.apache.org/jira/browse/HDFS-8199
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>Assignee: Yong Zhang
> Attachments: HDFS-8199.000.patch
>
>
> System test of creating ECZone and EC files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8198) Erasure Coding: system test of TeraSort

2015-07-24 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8198:

Parent Issue: HDFS-8031  (was: HDFS-7285)

> Erasure Coding: system test of TeraSort
> ---
>
> Key: HDFS-8198
> URL: https://issues.apache.org/jira/browse/HDFS-8198
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>
> Functional system test of TeraSort on EC files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8198) Erasure Coding: system test of TeraSort

2015-07-24 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641198#comment-14641198
 ] 

Zhe Zhang commented on HDFS-8198:
-

Moving system test JIRAs as follow-ons.

> Erasure Coding: system test of TeraSort
> ---
>
> Key: HDFS-8198
> URL: https://issues.apache.org/jira/browse/HDFS-8198
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>
> Functional system test of TeraSort on EC files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8197) [umbrella] System tests for EC feature

2015-07-24 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641193#comment-14641193
 ] 

Zhe Zhang commented on HDFS-8197:
-

Moving system test JIRAs as follow-ons.

> [umbrella] System tests for EC feature
> --
>
> Key: HDFS-8197
> URL: https://issues.apache.org/jira/browse/HDFS-8197
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>  Labels: system-tests, test
>
> This is the umbrella JIRA for system testing of the EC feature.
> All sub-tasks and test cases are listed under this ticket; all items that 
> are expected to be tested are listed here.
> * Create/Delete EC File
> * Create/Delete ECZone
> * teragen against EC files
> * terasort against EC files
> * teravalidate against EC files



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8425) [umbrella] Performance tuning, investigation and optimization for erasure coding

2015-07-24 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8425:

Parent Issue: HDFS-8031  (was: HDFS-7285)

> [umbrella] Performance tuning, investigation and optimization for erasure 
> coding
> 
>
> Key: HDFS-8425
> URL: https://issues.apache.org/jira/browse/HDFS-8425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
> Attachments: testClientWriteReadFile_v1.pdf
>
>
> This {{umbrella}} jira aims to track performance tuning, investigation and 
> optimization for erasure coding.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8425) [umbrella] Performance tuning, investigation and optimization for erasure coding

2015-07-24 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641195#comment-14641195
 ] 

Zhe Zhang commented on HDFS-8425:
-

Moving system test JIRAs as follow-ons. Let me know if you have other opinions. 
Thanks!

> [umbrella] Performance tuning, investigation and optimization for erasure 
> coding
> 
>
> Key: HDFS-8425
> URL: https://issues.apache.org/jira/browse/HDFS-8425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: GAO Rui
> Attachments: testClientWriteReadFile_v1.pdf
>
>
> This {{umbrella}} jira aims to track performance tuning, investigation and 
> optimization for erasure coding.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8197) [umbrella] System tests for EC feature

2015-07-24 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8197:

Parent Issue: HDFS-8031  (was: HDFS-7285)

> [umbrella] System tests for EC feature
> --
>
> Key: HDFS-8197
> URL: https://issues.apache.org/jira/browse/HDFS-8197
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>  Labels: system-tests, test
>
> This is the umbrella JIRA for system testing of the EC feature.
> All sub-tasks and test cases are listed under this ticket; all items that 
> are expected to be tested are listed here.
> * Create/Delete EC File
> * Create/Delete ECZone
> * teragen against EC files
> * terasort against EC files
> * teravalidate against EC files



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8762) Erasure Coding: the log of each streamer should show its index

2015-07-24 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8762:

Parent Issue: HDFS-8031  (was: HDFS-7285)

> Erasure Coding: the log of each streamer should show its index
> --
>
> Key: HDFS-8762
> URL: https://issues.apache.org/jira/browse/HDFS-8762
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8762-HDFS-7285-001.patch
>
>
> Logs from {{DataStreamer}} don't show which streamer they were generated 
> from. To make the log information more convenient for debugging, each log 
> message should include the index of the streamer that generated it.
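
A minimal sketch of the idea (hypothetical wrapper; the actual patch may tag 
messages differently):

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Hypothetical wrapper: prefix every message with the streamer's index so
// interleaved logs from parallel streamers can be told apart.
class StreamerLog {
  private static final Log LOG = LogFactory.getLog(StreamerLog.class);
  private final int streamerIndex;

  StreamerLog(int streamerIndex) {
    this.streamerIndex = streamerIndex;
  }

  void info(String msg) {
    LOG.info("[streamer #" + streamerIndex + "] " + msg);
  }
}
{code}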



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-24 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8180:
--
Status: Patch Available  (was: In Progress)

Submitting patch to be picked up by Jenkins.  For some reason the submit patch 
button wasn't showing up for me until I assigned the JIRA to myself.  Not sure 
why.

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Jakob Homan
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8769) Erasure Coding: unit test for SequentialBlockGroupIdGenerator

2015-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641117#comment-14641117
 ] 

Hadoop QA commented on HDFS-8769:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  16m 21s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m  3s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 34s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 14s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 54s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 42s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   4m 37s | The patch appears to introduce 8 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 21s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 174m 49s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 16s | Tests passed in 
hadoop-hdfs-client. |
| | | 221m 30s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| FindBugs | module:hadoop-hdfs-client |
| Failed unit tests | hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12747068/HDFS-8769-HDFS-7285-01.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / c2c26e6 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11830/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11830/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11830/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-client.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11830/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11830/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11830/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11830/console |


This message was automatically generated.

> Erasure Coding: unit test for SequentialBlockGroupIdGenerator
> -
>
> Key: HDFS-8769
> URL: https://issues.apache.org/jira/browse/HDFS-8769
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Rakesh R
> Attachments: HDFS-8769-HDFS-7285-00.patch, 
> HDFS-8769-HDFS-7285-01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8785) TestDistributedFileSystem is failing in trunk

2015-07-24 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641116#comment-14641116
 ] 

Colin Patrick McCabe commented on HDFS-8785:


I am +1 on the latest patch.  I think it might be even better to simply write 
in a loop until we get stuck (rather than assuming some fixed upper limit on 
socket buffering), but this is certainly an improvement.  The test succeeds for 
me with this patch.
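
A minimal sketch of the write-until-stuck idea (hypothetical helper; assumes an 
output stream with a write timeout, like Hadoop's SocketOutputStream, so a 
blocked write surfaces as a SocketTimeoutException):

{code}
import java.io.IOException;
import java.io.OutputStream;
import java.net.SocketTimeoutException;

// Hypothetical helper: write until the socket buffers fill up, instead of
// assuming a fixed upper limit on socket buffering.
final class WriteUntilStuck {
  static long fill(OutputStream out, int chunkSize) throws IOException {
    byte[] chunk = new byte[chunkSize];
    long written = 0;
    while (true) {
      try {
        out.write(chunk);  // eventually blocks and times out
        written += chunkSize;
      } catch (SocketTimeoutException e) {
        return written;    // stuck: buffers are full
      }
    }
  }
}
{code}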

> TestDistributedFileSystem is failing in trunk
> -
>
> Key: HDFS-8785
> URL: https://issues.apache.org/jira/browse/HDFS-8785
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
> Attachments: HDFS-8785.00.patch, HDFS-8785.01.patch, 
> HDFS-8785.02.patch
>
>
> A newly added test case 
> {{TestDistributedFileSystem#testDFSClientPeerWriteTimeout}} is failing in 
> trunk.
> e.g. run
> https://builds.apache.org/job/PreCommit-HDFS-Build/11716/testReport/org.apache.hadoop.hdfs/TestDistributedFileSystem/testDFSClientPeerWriteTimeout/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-24 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan reassigned HDFS-8180:
-

Assignee: Jakob Homan  (was: Santhosh G Nayak)

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Jakob Homan
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-24 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8180:
--
Flags: Patch

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-24 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8180:
--
Issue Type: Improvement  (was: New Feature)

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8748) ACL permission check does not union groups to determine effective permissions

2015-07-24 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-8748.
-
Resolution: Won't Fix

As I stated in my last comment, the high-level design goal of HDFS ACLs was to 
match POSIX semantics as closely as possible.  I'm going to resolve this as 
Won't Fix, because the currently implemented behavior matches the latest quote 
from the POSIX spec, even though it doesn't match the HDFS-4685 design doc.

[~scott_o], I really appreciate your diligence tracking down the relevant spec. 
 Thank you!

> ACL permission check does not union groups to determine effective permissions
> -
>
> Key: HDFS-8748
> URL: https://issues.apache.org/jira/browse/HDFS-8748
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Scott Opell
>  Labels: acl, permission
> Attachments: HDFS_8748.patch
>
>
> In the ACL permission checking routine, the implemented named group section 
> does not match the design document.
> In the design document, it's shown in the pseudo-code that if the requester is 
> not the owner or a named user, then the applicable groups are unioned 
> together to form the effective permissions for the requester.
> Instead, the current implementation will search for the first group that 
> grants access and will use that. It will not union the permissions together.
> Here is the design document's description of the desired behavior
> {quote}
> If the user is a member of the file's group or at least one group for which 
> there is a
> named group entry in the ACL, then effective permissions are calculated from 
> groups.
> This is the union of the file group permissions (if the user is a member of 
> the file group)
> and all named group entries matching the user's groups. For example, consider 
> a user
> that is a member of 2 groups: sales and execs. The user is not the file 
> owner, and the
> ACL contains no named user entries. The ACL contains named group entries for 
> both
> groups as follows: group:sales:r\-\-, group:execs:\-w\-. In this case, the 
> user's effective permissions are rw\-.
> {quote}
>  
> ??https://issues.apache.org/jira/secure/attachment/12627729/HDFS-ACLs-Design-3.pdf
>  page 10??
> The design document's algorithm matches that description:
> *Design Document Algorithm*
> {code:title=DesignDocument}
> if (user == fileOwner) {
>     effectivePermissions = aclEntries.getOwnerPermissions()
> } else if (user ∈ aclEntries.getNamedUsers()) {
>     effectivePermissions = aclEntries.getNamedUserPermissions(user)
> } else if (userGroupsInAcl != ∅) {
>     effectivePermissions = ∅
>     if (fileGroup ∈ userGroupsInAcl) {
>         effectivePermissions = effectivePermissions ∪ aclEntries.getGroupPermissions()
>     }
>     for ({group | group ∈ userGroupsInAcl}) {
>         effectivePermissions = effectivePermissions ∪ aclEntries.getNamedGroupPermissions(group)
>     }
> } else {
>     effectivePermissions = aclEntries.getOthersPermissions()
> }
> {code}
> ??https://issues.apache.org/jira/secure/attachment/12627729/HDFS-ACLs-Design-3.pdf
>  page 9??
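
For the worked example above (group:sales:r\-\-, group:execs:\-w\-), a minimal 
sketch of the union using Hadoop's FsAction, whose or() method is the union 
operation (hypothetical class name):

{code}
import org.apache.hadoop.fs.permission.FsAction;

// Sketch of the union from the example: r-- ∪ -w- = rw-.
public class EffectivePerms {
  public static void main(String[] args) {
    FsAction sales = FsAction.READ;        // group:sales:r--
    FsAction execs = FsAction.WRITE;       // group:execs:-w-
    FsAction effective = sales.or(execs);  // READ_WRITE
    System.out.println(effective.SYMBOL);  // prints "rw-"
  }
}
{code}
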
> The current implementation does NOT match the description.
> *Current Trunk*
> {code:title=FSPermissionChecker.java}
> // Use owner entry from permission bits if user is owner.
> if (getUser().equals(inode.getUserName())) {
>   if (mode.getUserAction().implies(access)) {
> return;
>   }
>   foundMatch = true;
> }
> // Check named user and group entries if user was not denied by owner 
> entry.
> if (!foundMatch) {
>   for (int pos = 0, entry; pos < aclFeature.getEntriesSize(); pos++) {
> entry = aclFeature.getEntryAt(pos);
> if (AclEntryStatusFormat.getScope(entry) == AclEntryScope.DEFAULT) {
>   break;
> }
> AclEntryType type = AclEntryStatusFormat.getType(entry);
> String name = AclEntryStatusFormat.getName(entry);
> if (type == AclEntryType.USER) {
>   // Use named user entry with mask from permission bits applied if 
> user
>   // matches name.
>   if (getUser().equals(name)) {
> FsAction masked = AclEntryStatusFormat.getPermission(entry).and(
> mode.getGroupAction());
> if (masked.implies(access)) {
>   return;
> }
> foundMatch = true;
> break;
>   }
> } else if (type == AclEntryType.GROUP) {
>   // Use group entry (unnamed or named) with mask from permission bits
>   // applied if user is a member and entry grants access.  If user is 
> a
>   // member of multiple groups that have entries that grant access, 
> then
>   // it doesn't mat

[jira] [Commented] (HDFS-4131) Add capability to namenode to get snapshot diff

2015-07-24 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641078#comment-14641078
 ] 

Yongjun Zhang commented on HDFS-4131:
-

Hi [~jingzhao],

Thanks for your earlier work on this. I have a question. The end of the 
following documentation page says that the order of the entries in the diff 
report is not guaranteed:

http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html

May I know if it's possible to keep the order? Or is that not possible with 
the current data structure/algorithm?

Thanks a lot.


> Add capability to namenode to get snapshot diff
> ---
>
> Key: HDFS-4131
> URL: https://issues.apache.org/jira/browse/HDFS-4131
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: Snapshot (HDFS-2802)
>Reporter: Suresh Srinivas
>Assignee: Jing Zhao
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4131.001.patch, HDFS-4131.002.patch, 
> HDFS-4131.003.patch, HDFS-4131.004.patch, HDFS-4131.005.patch, 
> HDFS-4131.006.patch
>
>
> This jira provides internal data structures and computation processes for 
> calculating and representing the diff between two snapshots, or the diff 
> between a snapshot and the current tree. 
> Specifically, a new method getSnapshotDiffReport(Path, String, String) is 
> added to FSNamesystem to compute the snapshot diff. The snapshot diff is 
> represented as a SnapshotDiffReport internally. In later jiras we will add 
> support to present the SnapshotDiffReport to end users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7858) Improve HA Namenode Failover detection on the client

2015-07-24 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641067#comment-14641067
 ] 

Jing Zhao commented on HDFS-7858:
-

bq. I need this to get the name of the proxy which was successful (so I can 
key into the targetProxies map). CallResult catches the exception and sets it 
as the result.

Yeah, I did notice the exception has been captured by CallResult. But maybe we 
can use a future-->proxy map here? In this way we do not need to have a wrapper 
class like CallResult so maybe the code can be further simplified.
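
A minimal sketch of the future->proxy map idea (hypothetical names; assumes one 
call is submitted per proxy and the first successful result wins):

{code}
import java.util.IdentityHashMap;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: key each Future by the proxy that produced it, so the
// winning proxy can be identified without a CallResult-style wrapper; failures
// surface through ExecutionException instead.
class HedgedCall {
  static String firstSuccessfulProxy(Map<String, Callable<Object>> callsByProxy)
      throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(callsByProxy.size());
    CompletionService<Object> cs = new ExecutorCompletionService<>(pool);
    Map<Future<Object>, String> proxyByFuture = new IdentityHashMap<>();
    try {
      for (Map.Entry<String, Callable<Object>> e : callsByProxy.entrySet()) {
        proxyByFuture.put(cs.submit(e.getValue()), e.getKey());
      }
      for (int i = 0; i < callsByProxy.size(); i++) {
        Future<Object> done = cs.take();     // first call to finish
        try {
          done.get();                        // throws ExecutionException on failure
          return proxyByFuture.get(done);    // name of the winning proxy
        } catch (ExecutionException e) {
          // this proxy failed; wait for the next one to finish
        }
      }
      return null;                           // every proxy failed
    } finally {
      pool.shutdownNow();
    }
  }
}
{code}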

> Improve HA Namenode Failover detection on the client
> 
>
> Key: HDFS-7858
> URL: https://issues.apache.org/jira/browse/HDFS-7858
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7858.1.patch, HDFS-7858.10.patch, 
> HDFS-7858.2.patch, HDFS-7858.2.patch, HDFS-7858.3.patch, HDFS-7858.4.patch, 
> HDFS-7858.5.patch, HDFS-7858.6.patch, HDFS-7858.7.patch, HDFS-7858.8.patch, 
> HDFS-7858.9.patch
>
>
> In an HA deployment, clients are configured with the hostnames of both the 
> Active and Standby Namenodes. Clients will first try one of the NNs 
> (non-deterministically), and if it's a standby NN, it will respond to the 
> client to retry the request on the other Namenode.
> If the client happens to talk to the Standby first, and the standby is 
> undergoing some GC / is busy, then those clients might not get a response 
> soon enough to try the other NN.
> Proposed approach to solve this:
> 1) Since ZooKeeper is already used as the failover controller, the clients 
> could talk to ZK and find out which is the active namenode before contacting 
> it.
> 2) Long-lived DFSClients would have a ZK watch configured which fires when 
> there is a failover, so they do not have to query ZK every time to find out 
> the active NN.
> 3) Clients can also cache the last active NN in the user's home directory 
> (~/.lastNN) so that short-lived clients can try that Namenode first before 
> querying ZK.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8798) Erasure Coding: fix DFSStripedInputStream/DFSStripedOutputStream re-fetch token when expired

2015-07-24 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-8798:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7285
   Status: Resolved  (was: Patch Available)

I've committed this. Thanks Walter for the contribution!

> Erasure Coding: fix DFSStripedInputStream/DFSStripedOutputStream re-fetch 
> token when expired
> 
>
> Key: HDFS-8798
> URL: https://issues.apache.org/jira/browse/HDFS-8798
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Fix For: HDFS-7285
>
> Attachments: HDFS-8798-HDFS-7285.02.patch, HDFS-8798.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8735) Inotify : All events classes should implement toString() API.

2015-07-24 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641044#comment-14641044
 ] 

Colin Patrick McCabe commented on HDFS-8735:


Thanks, [~surendrasingh] and [~ajisakaa].

> Inotify : All events classes should implement toString() API.
> -
>
> Key: HDFS-8735
> URL: https://issues.apache.org/jira/browse/HDFS-8735
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.7.0
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-8735-002.patch, HDFS-8735-003.patch, 
> HDFS-8735-004.patch, HDFS-8735.01.patch, HDFS-8735.patch
>
>
> Event classes are used by clients, so it would be good to implement the toString() API.
> {code}
> for (Event event : events) {
>   System.out.println(event.toString());
> }
> {code}
> This will give output like this
> {code}
> org.apache.hadoop.hdfs.inotify.Event$CreateEvent@6916d97d
> {code}
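
A minimal sketch of what such an override might look like (hypothetical class 
and fields; the actual patch defines toString() on the real event classes):

{code}
// Hypothetical example of a toString() override for an event class.
class CreateEventExample {
  private final String path;
  private final long ctime;

  CreateEventExample(String path, long ctime) {
    this.path = path;
    this.ctime = ctime;
  }

  @Override
  public String toString() {
    return "CreateEvent [path=" + path + ", ctime=" + ctime + "]";
  }
}
{code}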



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8695) OzoneHandler : Add Bucket REST Interface

2015-07-24 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-8695:
---
Attachment: hdfs-8695-HDFS-7240.002.patch

Updated based on code review comments from [~kanaka]:

* Throws an exception if remove ACLs are specified during bucket create
* Updated javadocs where appropriate


> OzoneHandler : Add Bucket REST Interface
> 
>
> Key: HDFS-8695
> URL: https://issues.apache.org/jira/browse/HDFS-8695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: hdfs-8695-HDFS-7240.001.patch, 
> hdfs-8695-HDFS-7240.002.patch
>
>
> Add Bucket REST interface into Ozone server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7858) Improve HA Namenode Failover detection on the client

2015-07-24 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HDFS-7858:
--
Attachment: HDFS-7858.10.patch

Thanks again for the review [~jingzhao],

Uploading a patch addressing your suggestions.

bq. do we need the latch in RequestHedgingInvocationHandler#invoke?
Not necessarily; I just wanted to ensure all requests start at almost the same 
time. But yeah, since the size of the thread pool is equal to the number of 
proxies, they should technically start simultaneously... I've removed it.

W.r.t. the requestTimeout: agreed, it's not really necessary. (But I think we 
have to document that if this is refactored into a general handler, where we 
are not sure of the underlying client/server protocol and its assumptions, a 
bounding timeout would be good/necessary.)

bq. We can use the ExecutionException thrown by callResultFuture.get() to get 
the exception thrown by the invocation.
So, if you notice, I have a {{CallResult}} object, which is what is actually 
returned by callResultFuture.get(). I need this to get the name of the proxy 
that was successful (so I can key into the targetProxies map). CallResult 
catches the exception and sets it as the result.


> Improve HA Namenode Failover detection on the client
> 
>
> Key: HDFS-7858
> URL: https://issues.apache.org/jira/browse/HDFS-7858
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7858.1.patch, HDFS-7858.10.patch, 
> HDFS-7858.2.patch, HDFS-7858.2.patch, HDFS-7858.3.patch, HDFS-7858.4.patch, 
> HDFS-7858.5.patch, HDFS-7858.6.patch, HDFS-7858.7.patch, HDFS-7858.8.patch, 
> HDFS-7858.9.patch
>
>
> In an HA deployment, clients are configured with the hostnames of both the 
> Active and Standby Namenodes. Clients will first try one of the NNs 
> (non-deterministically), and if it's a standby NN, it will respond to the 
> client to retry the request on the other Namenode.
> If the client happens to talk to the Standby first, and the standby is 
> undergoing some GC / is busy, then those clients might not get a response 
> soon enough to try the other NN.
> Proposed approach to solve this:
> 1) Since ZooKeeper is already used as the failover controller, the clients 
> could talk to ZK and find out which is the active namenode before contacting 
> it.
> 2) Long-lived DFSClients would have a ZK watch configured which fires when 
> there is a failover, so they do not have to query ZK every time to find out 
> the active NN.
> 3) Clients can also cache the last active NN in the user's home directory 
> (~/.lastNN) so that short-lived clients can try that Namenode first before 
> querying ZK.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8805) Archival Storage: getStoragePolicy should not need superuser privilege

2015-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640948#comment-14640948
 ] 

Hadoop QA commented on HDFS-8805:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m  5s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:red}-1{color} | javac |   1m 35s | The patch appears to cause the 
build to fail. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12747087/HDFS-8805-002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / d19d187 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11832/console |


This message was automatically generated.

> Archival Storage: getStoragePolicy should not need superuser privilege
> --
>
> Key: HDFS-8805
> URL: https://issues.apache.org/jira/browse/HDFS-8805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, namenode
>Reporter: Hui Zheng
>Assignee: Brahma Reddy Battula
> Fix For: 2.6.0
>
> Attachments: HDFS-8805-002.patch, HDFS-8805.patch
>
>
> The result of the getStoragePolicy command is always 'unspecified' even 
> after we have set a storage policy on a directory, but the real placement of 
> blocks is correct.
> The result of fsck is not correct either.
> {code}
> $ hdfs storagepolicies -setStoragePolicy -path /tmp/cold  -policy COLD
> Set storage policy COLD on /tmp/cold
> $ hdfs storagepolicies -getStoragePolicy -path /tmp/cold
> The storage policy of /tmp/cold is unspecified
> $ hdfs fsck -storagepolicies /tmp/cold
> Blocks NOT satisfying the specified storage policy:
> Storage Policy          Specified Storage Policy      # of blocks    % of blocks
> ARCHIVE:4(COLD)         HOT                           5              55.5556%
> ARCHIVE:3(COLD)         HOT                           4              44.4444%
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-24 Thread Santhosh G Nayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Santhosh G Nayak updated HDFS-8180:
---
Attachment: HDFS-8180-3.patch

Thanks [~jghoman] for reviewing.
Updated the patch as per the review comments. 

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8798) Erasure Coding: fix DFSStripedInputStream/DFSStripedOutputStream re-fetch token when expired

2015-07-24 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640919#comment-14640919
 ] 

Jing Zhao commented on HDFS-8798:
-

+1. I will commit the patch shortly.

> Erasure Coding: fix DFSStripedInputStream/DFSStripedOutputStream re-fetch 
> token when expired
> 
>
> Key: HDFS-8798
> URL: https://issues.apache.org/jira/browse/HDFS-8798
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-8798-HDFS-7285.02.patch, HDFS-8798.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8805) Archival Storage: getStoragePolicy should not need superuser privilege

2015-07-24 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640906#comment-14640906
 ] 

Brahma Reddy Battula commented on HDFS-8805:


Resubmitted the patch to run Jenkins again.

> Archival Storage: getStoragePolicy should not need superuser privilege
> --
>
> Key: HDFS-8805
> URL: https://issues.apache.org/jira/browse/HDFS-8805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, namenode
>Reporter: Hui Zheng
>Assignee: Brahma Reddy Battula
> Fix For: 2.6.0
>
> Attachments: HDFS-8805-002.patch, HDFS-8805.patch
>
>
> The result of the getStoragePolicy command is always 'unspecified' even 
> though we have set a StoragePolicy on a directory. But the real placement 
> of blocks is correct.
> The result of fsck is not correct either.
> {code}
> $ hdfs storagepolicies -setStoragePolicy -path /tmp/cold  -policy COLD
> Set storage policy COLD on /tmp/cold
> $ hdfs storagepolicies -getStoragePolicy -path /tmp/cold
> The storage policy of /tmp/cold is unspecified
> $ hdfs fsck -storagepolicies /tmp/cold
> Blocks NOT satisfying the specified storage policy:
> Storage Policy  Specified Storage Policy  # of blocks  % of blocks
> ARCHIVE:4(COLD) HOT   5   55.5556%
> ARCHIVE:3(COLD) HOT   4   44.4444%
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8805) Archival Storage: getStoragePolicy should not need superuser privilege

2015-07-24 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8805:
---
Attachment: (was: HDFS-8805-002.patch)

> Archival Storage: getStoragePolicy should not need superuser privilege
> --
>
> Key: HDFS-8805
> URL: https://issues.apache.org/jira/browse/HDFS-8805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, namenode
>Reporter: Hui Zheng
>Assignee: Brahma Reddy Battula
> Fix For: 2.6.0
>
> Attachments: HDFS-8805-002.patch, HDFS-8805.patch
>
>
> The result of the getStoragePolicy command is always 'unspecified' even 
> though we have set a StoragePolicy on a directory. But the real placement 
> of blocks is correct.
> The result of fsck is not correct either.
> {code}
> $ hdfs storagepolicies -setStoragePolicy -path /tmp/cold  -policy COLD
> Set storage policy COLD on /tmp/cold
> $ hdfs storagepolicies -getStoragePolicy -path /tmp/cold
> The storage policy of /tmp/cold is unspecified
> $ hdfs fsck -storagepolicies /tmp/cold
> Blocks NOT satisfying the specified storage policy:
> Storage Policy  Specified Storage Policy  # of blocks  % of blocks
> ARCHIVE:4(COLD) HOT   5   55.5556%
> ARCHIVE:3(COLD) HOT   4   44.4444%
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8805) Archival Storage: getStoragePolicy should not need superuser privilege

2015-07-24 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8805:
---
Attachment: HDFS-8805-002.patch

> Archival Storage: getStoragePolicy should not need superuser privilege
> --
>
> Key: HDFS-8805
> URL: https://issues.apache.org/jira/browse/HDFS-8805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, namenode
>Reporter: Hui Zheng
>Assignee: Brahma Reddy Battula
> Fix For: 2.6.0
>
> Attachments: HDFS-8805-002.patch, HDFS-8805.patch
>
>
> The result of the getStoragePolicy command is always 'unspecified' even 
> though we have set a StoragePolicy on a directory. But the real placement 
> of blocks is correct.
> The result of fsck is not correct either.
> {code}
> $ hdfs storagepolicies -setStoragePolicy -path /tmp/cold  -policy COLD
> Set storage policy COLD on /tmp/cold
> $ hdfs storagepolicies -getStoragePolicy -path /tmp/cold
> The storage policy of /tmp/cold is unspecified
> $ hdfs fsck -storagepolicies /tmp/cold
> Blocks NOT satisfying the specified storage policy:
> Storage Policy  Specified Storage Policy  # of blocks  % of blocks
> ARCHIVE:4(COLD) HOT   5   55.5556%
> ARCHIVE:3(COLD) HOT   4   44.4444%
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8695) OzoneHandler : Add Bucket REST Interface

2015-07-24 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640896#comment-14640896
 ] 

Anu Engineer commented on HDFS-8695:


[~kanaka] Thanks for your detailed review; I really appreciate it. I have made 
modifications to the code and will update the patch soon.


bq. 1) BucketHandler#createBucket(): parsing remove ACLs is not required; in 
fact it should result in an error for an unexpected parameter

Good catch. I have added a boolean param to {{getAcls}} so we can tell it 
whether or not to parse remove ACLs. If a remove ACL is found when we don't 
expect it, we will throw an exception.
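
To make that concrete, here is a minimal sketch of the flag; the header name 
and the "remove:" encoding below are hypothetical stand-ins for illustration, 
not the actual Ozone code:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class AclParsingSketch {
  // Hedged sketch only: "x-ozone-acl" and the "remove:" prefix are invented
  // placeholders, not the real Ozone wire format.
  static List<String> getAcls(Map<String, List<String>> headers,
      boolean parseRemove) {
    List<String> acls = new ArrayList<String>();
    List<String> values = headers.get("x-ozone-acl");
    if (values == null) {
      return acls;
    }
    for (String v : values) {
      if (v.startsWith("remove:") && !parseRemove) {
        // e.g. createBucket: a remove ACL is an unexpected parameter.
        throw new IllegalArgumentException("unexpected remove ACL: " + v);
      }
      acls.add(v);
    }
    return acls;
  }
}
{code}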

bq. 2) BucketHandler#updateBucket(): As per the design specification, there is 
currently no plan to support bucket quota. But BucketArgs extends VolumeArgs, 
so the OzoneQuota parameter in the request can be invalidated as 
UnsupportedOperation for now

All data that comes into these handlers comes from HTTP headers. Right now, we 
ignore all headers that we don't recognize (eventually we will throw for 
headers that we don't recognize; something that should be done in 
BucketProcessTemplate). Since we don't parse the HTTP header that represents 
quota on the bucket path, the handler code will never see the quota as an 
argument.


bq. Also, can you update the javadoc to describe the currently supported 
operations during updateBucket so that we can validate the interface contract 
in the implementation

Thanks, I have updated the javadoc.

bq. A new BUCKET_NOT_FOUND error needs to be added in ErrorTable

You are *absolutely* right, please wait until my next patch that adds local 
storage handling of buckets. This patch acts as a pass through to underlying 
layers that actually does the work. There were a bunch of review comments 
earlier about not bringing in constants until they are used. So just following 
the standard pattern here.

bq. BucketHandler#listBucket: as per the design document, I think 
OZONE_LIST_QUERY_BUCKET should list the buckets in a volume and is supposed to 
be handled in VolumeHandler#getVolumeInfo(...) (there is already a TODO to 
handle this)

I am actually open to suggestions on this. Let me explain what is happening 
here. You have two operations on volumes and buckets:

* You want to get info about the volume / bucket
* You want to get the list of child objects in the volume / bucket

If you send a query like ?info=bucket to a volume object, it is interpreted 
as a request for the list of all child objects.

If you send a query like ?info=bucket to a bucket object, it is interpreted 
as a request for the metadata about that bucket.

If you look at the code in the volumeHandler, you will see we call into 
{{getBucketsInVolume}}, and in the bucketHandler we call into 
{{getBucketInfoResponse}}.

[I have a TODO to document these protocols in greater depth]

bq. 6) BucketHandler#listBucket: Can you remove the javadoc on doProcess() to 
improve readability, as it's already documented in BucketProcessTemplate

Done.

bq. If so, BucketHandler#listBucket: OZONE_LIST_QUERY_BUCKET has to be 
replaced with OZONE_LIST_QUERY_SERVICE for getting getBucketInfoResponse()

As I said, I need to document the protocol. However, here is a brief summary 
of how the info key works against objects in Ozone.

|| ||?info=service||?info=volume||?info=bucket||?info=key||
|volume|list volumes|info volumes|list buckets| N/A|
|bucket|N/A|N/A|info bucket| list keys|
|Key|N/A|N/A|N/A|info key|

In other words:

With reference to volumes:
?info=service - list of the volumes owned by a user or, if you are an admin, 
by the requested user.

?info=volume - metadata about the volume, including things like the quota, 
who the owner is, etc.

?info=bucket - list of all buckets in a volume.

?info=key - invalid on volumes.


With reference to buckets:
?info=service - invalid.

?info=volume - invalid.

?info=bucket - metadata about the bucket.

?info=key - list of keys.


With reference to keys:
?info=service - invalid.

?info=volume - invalid.

?info=bucket - invalid.

?info=key - metadata about a specific key.

Please let me know if this makes sense or if you would like to see more 
clarifications.
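
To make the table concrete, here is a hedged sketch of the dispatch it 
implies; the method shapes are invented for illustration and are not the 
actual handler code:

{code}
// Illustration only of the ?info= dispatch described above.
class InfoQuerySketch {
  static String handleVolumeQuery(String info) {
    if ("service".equals(info)) {
      return "list of volumes owned by the user (or the requested user, for admins)";
    } else if ("volume".equals(info)) {
      return "metadata about this volume (quota, owner, ...)";
    } else if ("bucket".equals(info)) {
      return "list of buckets in this volume";      // getBucketsInVolume
    }
    throw new IllegalArgumentException("invalid on volumes: ?info=" + info);
  }

  static String handleBucketQuery(String info) {
    if ("bucket".equals(info)) {
      return "metadata about this bucket";          // getBucketInfoResponse
    } else if ("key".equals(info)) {
      return "list of keys in this bucket";
    }
    throw new IllegalArgumentException("invalid on buckets: ?info=" + info);
  }
}
{code}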


> OzoneHandler : Add Bucket REST Interface
> 
>
> Key: HDFS-8695
> URL: https://issues.apache.org/jira/browse/HDFS-8695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: hdfs-8695-HDFS-7240.001.patch
>
>
> Add Bucket REST interface into Ozone server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8805) Archival Storage: getStoragePolicy should not need superuser privilege

2015-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640883#comment-14640883
 ] 

Hadoop QA commented on HDFS-8805:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m  8s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:red}-1{color} | javac |   1m 36s | The patch appears to cause the 
build to fail. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12747074/HDFS-8805-002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f8f6091 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11831/console |


This message was automatically generated.

> Archival Storage: getStoragePolicy should not need superuser privilege
> --
>
> Key: HDFS-8805
> URL: https://issues.apache.org/jira/browse/HDFS-8805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, namenode
>Reporter: Hui Zheng
>Assignee: Brahma Reddy Battula
> Fix For: 2.6.0
>
> Attachments: HDFS-8805-002.patch, HDFS-8805.patch
>
>
> The result of the getStoragePolicy command is always 'unspecified' even 
> though we have set a StoragePolicy on a directory. But the real placement 
> of blocks is correct.
> The result of fsck is not correct either.
> {code}
> $ hdfs storagepolicies -setStoragePolicy -path /tmp/cold  -policy COLD
> Set storage policy COLD on /tmp/cold
> $ hdfs storagepolicies -getStoragePolicy -path /tmp/cold
> The storage policy of /tmp/cold is unspecified
> $ hdfs fsck -storagepolicies /tmp/cold
> Blocks NOT satisfying the specified storage policy:
> Storage Policy  Specified Storage Policy  # of blocks  % of blocks
> ARCHIVE:4(COLD) HOT   5   55.5556%
> ARCHIVE:3(COLD) HOT   4   44.4444%
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8665) Fix replication check in DFSTestUtils#waitForReplication

2015-07-24 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reassigned HDFS-8665:
-

Assignee: Andrew Wang  (was: JerryShao)

> Fix replication check in DFSTestUtils#waitForReplication
> 
>
> Key: HDFS-8665
> URL: https://issues.apache.org/jira/browse/HDFS-8665
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: hdfs-8665.001.patch
>
>
> The check looks at the repl factor set on the file rather than the reported 
> # of replica locations. Let's do the latter.
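
A minimal sketch of "the latter", using only public FileSystem APIs (the 
actual DFSTestUtils change may differ):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class ReplicationWaitSketch {
  // Hedged sketch: wait on the *reported* replica locations rather than the
  // replication factor stored on the file.
  static void waitForReportedReplicas(FileSystem fs, Path path, int expected,
      long timeoutMs) throws IOException, InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (true) {
      FileStatus stat = fs.getFileStatus(path);
      BlockLocation[] locs = fs.getFileBlockLocations(stat, 0, stat.getLen());
      boolean satisfied = true;
      for (BlockLocation loc : locs) {
        if (loc.getHosts().length != expected) {
          satisfied = false;
          break;
        }
      }
      if (satisfied) {
        return;
      }
      if (System.currentTimeMillis() > deadline) {
        throw new IOException("timed out waiting for " + expected
            + " reported replicas of " + path);
      }
      Thread.sleep(100);
    }
  }
}
{code}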



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7858) Improve HA Namenode Failover detection on the client

2015-07-24 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640857#comment-14640857
 ] 

Jing Zhao commented on HDFS-7858:
-

Thanks again for updating the patch, [~asuresh]! Some minor comments on the 
latest patch:
# Do we need the latch in {{RequestHedgingInvocationHandler#invoke}}?
# I'm not sure we need requestTimeout. Clients/DNs already set their own 
socket timeouts for their connections to the NameNode, so it seems redundant 
to have an extra 2-minute timeout when polling the CompletionService.
# We can use the ExecutionException thrown by {{callResultFuture.get()}} to get 
the exception thrown by the invocation.
# Maybe we should use debug/trace here?
{code}
+LOG.info("Invocation successful on ["
++ callResultFuture.get().name + "]");
{code}
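
For reference, a minimal sketch of points 2 and 3 above -- polling the 
CompletionService without an extra timeout and unwrapping the invocation's 
exception from the {{ExecutionException}}; this is illustrative only, not the 
actual RequestHedgingInvocationHandler:

{code}
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class HedgingSketch {
  // Hedged sketch: invoke the call on every proxy, return the first success.
  static <T> T invokeFirstSuccess(Map<String, Callable<T>> proxies)
      throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(proxies.size());
    CompletionService<T> cs = new ExecutorCompletionService<T>(pool);
    try {
      for (Callable<T> call : proxies.values()) {
        cs.submit(call);
      }
      Throwable last = null;
      for (int i = 0; i < proxies.size(); i++) {
        Future<T> done = cs.take();  // blocks; RPC socket timeouts bound this
        try {
          return done.get();         // first proxy to succeed wins
        } catch (ExecutionException e) {
          last = e.getCause();       // the exception thrown by the invocation
        }
      }
      throw new ExecutionException("all proxies failed", last);
    } finally {
      pool.shutdownNow();            // cancel the losing invocations
    }
  }
}
{code}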


> Improve HA Namenode Failover detection on the client
> 
>
> Key: HDFS-7858
> URL: https://issues.apache.org/jira/browse/HDFS-7858
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7858.1.patch, HDFS-7858.2.patch, HDFS-7858.2.patch, 
> HDFS-7858.3.patch, HDFS-7858.4.patch, HDFS-7858.5.patch, HDFS-7858.6.patch, 
> HDFS-7858.7.patch, HDFS-7858.8.patch, HDFS-7858.9.patch
>
>
> In an HA deployment, clients are configured with the hostnames of both the 
> Active and Standby Namenodes. Clients will first try one of the NNs 
> (non-deterministically), and if it's a standby NN, it will respond to the 
> client to retry the request on the other Namenode.
> If the client happens to talk to the Standby first, and the standby is 
> undergoing some GC / is busy, then those clients might not get a response 
> soon enough to try the other NN.
> Proposed approach to solve this:
> 1) Since Zookeeper is already used as the failover controller, the clients 
> could talk to ZK and find out which is the active namenode before contacting 
> it.
> 2) Long-lived DFSClients would have a ZK watch configured which fires when 
> there is a failover, so they do not have to query ZK every time to find out 
> the active NN.
> 3) Clients can also cache the last active NN in the user's home directory 
> (~/.lastNN) so that short-lived clients can try that Namenode first before 
> querying ZK.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8499) Refactor BlockInfo class hierarchy with static helper class

2015-07-24 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640856#comment-14640856
 ] 

Zhe Zhang commented on HDFS-8499:
-

[~szetszwo]: As mentioned above, I'm open to reworking the {{BlockInfo}} 
structure to resolve the discrepancy and unblock the EC branch rebasing.

In the reworked structure, which of the following is your main requirement? I 
summarized these from the comments; apologies if I missed anything.
* Being able to cast (and {{instanceof}}) between {{BIContiguous}} / 
{{BIContiguousUC}} and {{BIStriped}} / {{BIStripedUC}}
* Sharing code between {{BIContiguous}} / {{BIContiguousUC}} and {{BIStriped}} 
/ {{BIStripedUC}}
* Flexibility to implement the two UC classes differently. (BTW, how about 
{{BIContiguous}} and {{BIStriped}}? Do you think we should maintain the 
flexibility for completely different implementations?).

I'm more concerned about other NN modules relying on type reflections and 
casting in this multi-inheritance scenario (first item on the above list). So 
I'm OK if you'd like to change to design [#2 | 
https://issues.apache.org/jira/browse/HDFS-8499?focusedCommentId=14632040&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14632040]
 as an intermediate solution. 

If we do that, I still prefer to refactor the NN code to replace 
{{BlockInfoStriped}} type casting with explicit {{isStriped()}} checks and to 
move some getters up to the {{BlockInfo}} level, as sketched below.
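
A hedged sketch of that preference, with simplified stand-in types rather 
than the real NN classes:

{code}
// Simplified stand-ins for illustration; not the actual BlockInfo hierarchy.
abstract class BlockInfo {
  abstract boolean isStriped();
  // A getter hoisted to the BlockInfo level, so callers need no casts.
  abstract short getTotalBlockNum();
}

class BlockInfoContiguous extends BlockInfo {
  private final short replication;
  BlockInfoContiguous(short replication) { this.replication = replication; }
  boolean isStriped() { return false; }
  short getTotalBlockNum() { return replication; }
}

class BlockInfoStriped extends BlockInfo {
  private final short dataNum, parityNum;
  BlockInfoStriped(short d, short p) { dataNum = d; parityNum = p; }
  boolean isStriped() { return true; }
  short getTotalBlockNum() { return (short) (dataNum + parityNum); }
}

class Caller {
  // An explicit isStriped() check instead of instanceof plus a cast.
  static String describe(BlockInfo b) {
    return b.isStriped()
        ? "striped group with " + b.getTotalBlockNum() + " internal blocks"
        : "contiguous block with replication " + b.getTotalBlockNum();
  }
}
{code}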

> Refactor BlockInfo class hierarchy with static helper class
> ---
>
> Key: HDFS-8499
> URL: https://issues.apache.org/jira/browse/HDFS-8499
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-8499.00.patch, HDFS-8499.01.patch, 
> HDFS-8499.02.patch, HDFS-8499.03.patch, HDFS-8499.04.patch, 
> HDFS-8499.05.patch, HDFS-8499.06.patch, HDFS-8499.07.patch, 
> HDFS-8499.UCFeature.patch, HDFS-bistriped.patch
>
>
> In HDFS-7285 branch, the {{BlockInfoUnderConstruction}} interface provides a 
> common abstraction for striped and contiguous UC blocks. This JIRA aims to 
> merge it to trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8805) Archival Storage: getStoragePolicy should not need superuser privilege

2015-07-24 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640839#comment-14640839
 ] 

Brahma Reddy Battula commented on HDFS-8805:


[~jingzhao] Thanks a lot for your review. Updated the patch based on your 
comments; kindly review.

> Archival Storage: getStoragePolicy should not need superuser privilege
> --
>
> Key: HDFS-8805
> URL: https://issues.apache.org/jira/browse/HDFS-8805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, namenode
>Reporter: Hui Zheng
>Assignee: Brahma Reddy Battula
> Fix For: 2.6.0
>
> Attachments: HDFS-8805-002.patch, HDFS-8805.patch
>
>
> The result of the getStoragePolicy command is always 'unspecified' even 
> though we have set a StoragePolicy on a directory. But the real placement 
> of blocks is correct.
> The result of fsck is not correct either.
> {code}
> $ hdfs storagepolicies -setStoragePolicy -path /tmp/cold  -policy COLD
> Set storage policy COLD on /tmp/cold
> $ hdfs storagepolicies -getStoragePolicy -path /tmp/cold
> The storage policy of /tmp/cold is unspecified
> $ hdfs fsck -storagepolicies /tmp/cold
> Blocks NOT satisfying the specified storage policy:
> Storage Policy  Specified Storage Policy  # of blocks  % of blocks
> ARCHIVE:4(COLD) HOT   5   55.5556%
> ARCHIVE:3(COLD) HOT   4   44.4444%
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8805) Archival Storage: getStoragePolicy should not need superuser privilege

2015-07-24 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8805:
---
Attachment: HDFS-8805-002.patch

> Archival Storage: getStoragePolicy should not need superuser privilege
> --
>
> Key: HDFS-8805
> URL: https://issues.apache.org/jira/browse/HDFS-8805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, namenode
>Reporter: Hui Zheng
>Assignee: Brahma Reddy Battula
> Fix For: 2.6.0
>
> Attachments: HDFS-8805-002.patch, HDFS-8805.patch
>
>
> The result of the getStoragePolicy command is always 'unspecified' even 
> though we have set a StoragePolicy on a directory. But the real placement 
> of blocks is correct.
> The result of fsck is not correct either.
> {code}
> $ hdfs storagepolicies -setStoragePolicy -path /tmp/cold  -policy COLD
> Set storage policy COLD on /tmp/cold
> $ hdfs storagepolicies -getStoragePolicy -path /tmp/cold
> The storage policy of /tmp/cold is unspecified
> $ hdfs fsck -storagepolicies /tmp/cold
> Blocks NOT satisfying the specified storage policy:
> Storage Policy  Specified Storage Policy  # of blocks  % of blocks
> ARCHIVE:4(COLD) HOT   5   55.5556%
> ARCHIVE:3(COLD) HOT   4   44.4444%
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8735) Inotify : All events classes should implement toString() API.

2015-07-24 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640829#comment-14640829
 ] 

Surendra Singh Lilhore commented on HDFS-8735:
--

Thanks [~ajisakaa] for the review and commit.

> Inotify : All events classes should implement toString() API.
> -
>
> Key: HDFS-8735
> URL: https://issues.apache.org/jira/browse/HDFS-8735
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.7.0
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-8735-002.patch, HDFS-8735-003.patch, 
> HDFS-8735-004.patch, HDFS-8735.01.patch, HDFS-8735.patch
>
>
> Event classes are used by clients; it's good to implement the toString() API.
> {code}
> for(Event event : events){
>   System.out.println(event.toString());
> }
> {code}
> This will give output like this
> {code}
> org.apache.hadoop.hdfs.inotify.Event$CreateEvent@6916d97d
> {code}
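
For illustration, a minimal sketch of the kind of override the patch adds; 
the fields here are simplified, so see Event.java in the patch for the real 
code:

{code}
// Hedged sketch with simplified fields; the committed CreateEvent differs.
class CreateEvent {
  private final String path;
  private final long ctime;
  private final int replication;

  CreateEvent(String path, long ctime, int replication) {
    this.path = path;
    this.ctime = ctime;
    this.replication = replication;
  }

  // Readable output instead of Event$CreateEvent@6916d97d.
  @Override
  public String toString() {
    return "CreateEvent [path=" + path + ", ctime=" + ctime
        + ", replication=" + replication + "]";
  }
}
{code}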



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8735) Inotify : All events classes should implement toString() API.

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640823#comment-14640823
 ] 

Hudson commented on HDFS-8735:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8216 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8216/])
HDFS-8735. Inotify: All events classes should implement toString() API. 
Contributed by Surendra Singh Lilhore. (aajisaka: rev 
f8f60918230dd466ae8dda1fbc28878e19273232)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java


> Inotify : All events classes should implement toString() API.
> -
>
> Key: HDFS-8735
> URL: https://issues.apache.org/jira/browse/HDFS-8735
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.7.0
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-8735-002.patch, HDFS-8735-003.patch, 
> HDFS-8735-004.patch, HDFS-8735.01.patch, HDFS-8735.patch
>
>
> Event classes are used by clients; it's good to implement the toString() API.
> {code}
> for(Event event : events){
>   System.out.println(event.toString());
> }
> {code}
> This will give output like this
> {code}
> org.apache.hadoop.hdfs.inotify.Event$CreateEvent@6916d97d
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8735) Inotify : All events classes should implement toString() API.

2015-07-24 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8735:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~surendrasingh] and [~cmccabe] 
for the contribution.

> Inotify : All events classes should implement toString() API.
> -
>
> Key: HDFS-8735
> URL: https://issues.apache.org/jira/browse/HDFS-8735
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.7.0
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-8735-002.patch, HDFS-8735-003.patch, 
> HDFS-8735-004.patch, HDFS-8735.01.patch, HDFS-8735.patch
>
>
> Event classes are used by clients; it's good to implement the toString() API.
> {code}
> for(Event event : events){
>   System.out.println(event.toString());
> }
> {code}
> This will give output like this
> {code}
> org.apache.hadoop.hdfs.inotify.Event$CreateEvent@6916d97d
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8769) Erasure Coding: unit test for SequentialBlockGroupIdGenerator

2015-07-24 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640811#comment-14640811
 ] 

Rakesh R commented on HDFS-8769:


Thanks [~walter.k.su] for the explanation. Agreed, and I'm including a unit 
test case to cover this. I have tried a different approach by replacing 
{{SequentialBlockIdGenerator}} with a spy. Please review the patch again when 
you get a chance. Thanks!
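
For readers unfamiliar with the technique, a generic Mockito sketch of the 
spy idea (not the actual test code, which spies on the generator inside the 
NameNode):

{code}
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.spy;
import org.junit.Assert;
import org.junit.Test;

// Generic illustration: wrap a real generator in a spy and override just
// nextValue(), so the test controls the next ID that is handed out.
public class SpyGeneratorSketch {
  static class IdGenerator {
    private long current;
    long nextValue() { return ++current; }
  }

  @Test
  public void pinNextId() {
    IdGenerator gen = spy(new IdGenerator());
    doReturn(42L).when(gen).nextValue();   // pin the next ID
    Assert.assertEquals(42L, gen.nextValue());
  }
}
{code}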

> Erasure Coding: unit test for SequentialBlockGroupIdGenerator
> -
>
> Key: HDFS-8769
> URL: https://issues.apache.org/jira/browse/HDFS-8769
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Rakesh R
> Attachments: HDFS-8769-HDFS-7285-00.patch, 
> HDFS-8769-HDFS-7285-01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8735) Inotify : All events classes should implement toString() API.

2015-07-24 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640810#comment-14640810
 ] 

Akira AJISAKA commented on HDFS-8735:
-

+1, the test failures look unrelated to the patch. The tests passed locally.

> Inotify : All events classes should implement toString() API.
> -
>
> Key: HDFS-8735
> URL: https://issues.apache.org/jira/browse/HDFS-8735
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.7.0
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8735-002.patch, HDFS-8735-003.patch, 
> HDFS-8735-004.patch, HDFS-8735.01.patch, HDFS-8735.patch
>
>
> Event classes are used by clients; it's good to implement the toString() API.
> {code}
> for(Event event : events){
>   System.out.println(event.toString());
> }
> {code}
> This will give output like this
> {code}
> org.apache.hadoop.hdfs.inotify.Event$CreateEvent@6916d97d
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8769) Erasure Coding: unit test for SequentialBlockGroupIdGenerator

2015-07-24 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8769:
---
Attachment: HDFS-8769-HDFS-7285-01.patch

> Erasure Coding: unit test for SequentialBlockGroupIdGenerator
> -
>
> Key: HDFS-8769
> URL: https://issues.apache.org/jira/browse/HDFS-8769
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Rakesh R
> Attachments: HDFS-8769-HDFS-7285-00.patch, 
> HDFS-8769-HDFS-7285-01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8805) Archival Storage: getStoragePolicy should not need superuser privilege

2015-07-24 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640787#comment-14640787
 ] 

Jing Zhao commented on HDFS-8805:
-

Thanks for working on this, [~brahmareddy]. The patch looks good to me overall. 
Some minor comments:
# I think we also need to update {{getFileInfo(FSDirectory, String, boolean)}} 
where we do not need to check {{isSuperUser}} anymore. 
# Also the parameter {{includeStoragePolicy}} can be removed from 
{{getFileInfo(FSDirectory, String, boolean, boolean, boolean)}}.
# Nit-pick: it may be more natural to have "i.isSymlink() ? 
HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED : i.getStoragePolicyID()" in 
the following code.
{code}
+  byte policyId =
+  !i.isSymlink() ? i.getStoragePolicyID()
+  : HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED;
{code}
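
That is, the suggested ordering would read:

{code}
byte policyId = i.isSymlink()
    ? HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED
    : i.getStoragePolicyID();
{code}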

> Archival Storage: getStoragePolicy should not need superuser privilege
> --
>
> Key: HDFS-8805
> URL: https://issues.apache.org/jira/browse/HDFS-8805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, namenode
>Reporter: Hui Zheng
>Assignee: Brahma Reddy Battula
> Fix For: 2.6.0
>
> Attachments: HDFS-8805.patch
>
>
> The result of the getStoragePolicy command is always 'unspecified' even 
> though we have set a StoragePolicy on a directory. But the real placement 
> of blocks is correct.
> The result of fsck is not correct either.
> {code}
> $ hdfs storagepolicies -setStoragePolicy -path /tmp/cold  -policy COLD
> Set storage policy COLD on /tmp/cold
> $ hdfs storagepolicies -getStoragePolicy -path /tmp/cold
> The storage policy of /tmp/cold is unspecified
> $ hdfs fsck -storagepolicies /tmp/cold
> Blocks NOT satisfying the specified storage policy:
> Storage Policy  Specified Storage Policy  # of blocks  % of blocks
> ARCHIVE:4(COLD) HOT   5   55.5556%
> ARCHIVE:3(COLD) HOT   4   44.4444%
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8805) Archival Storage: getStoragePolicy should not need superuser privilege

2015-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640782#comment-14640782
 ] 

Hadoop QA commented on HDFS-8805:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  16m  0s | Findbugs (version ) appears to 
be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 52s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 12s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 45s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 40s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 10s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 161m 44s | Tests failed in hadoop-hdfs. |
| | | 205m  0s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestAppendSnapshotTruncate |
|   | hadoop.hdfs.server.datanode.TestDatanodeProtocolRetryPolicy |
|   | hadoop.hdfs.TestDistributedFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12747027/HDFS-8805.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 206d493 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11829/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11829/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11829/console |


This message was automatically generated.

> Archival Storage: getStoragePolicy should not need superuser privilege
> --
>
> Key: HDFS-8805
> URL: https://issues.apache.org/jira/browse/HDFS-8805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, namenode
>Reporter: Hui Zheng
>Assignee: Brahma Reddy Battula
> Fix For: 2.6.0
>
> Attachments: HDFS-8805.patch
>
>
> The result of the getStoragePolicy command is always 'unspecified' even 
> though we have set a StoragePolicy on a directory. But the real placement 
> of blocks is correct.
> The result of fsck is not correct either.
> {code}
> $ hdfs storagepolicies -setStoragePolicy -path /tmp/cold  -policy COLD
> Set storage policy COLD on /tmp/cold
> $ hdfs storagepolicies -getStoragePolicy -path /tmp/cold
> The storage policy of /tmp/cold is unspecified
> $ hdfs fsck -storagepolicies /tmp/cold
> Blocks NOT satisfying the specified storage policy:
> Storage Policy  Specified Storage Policy  # of blocks  % of blocks
> ARCHIVE:4(COLD) HOT   5   55.5556%
> ARCHIVE:3(COLD) HOT   4   44.4444%
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8816) Improve visualization for the Datanode tab in the NN UI

2015-07-24 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640780#comment-14640780
 ] 

Haohui Mai commented on HDFS-8816:
--

[~raviprak], does the latest patch look good to you? I'd like to move this 
forward so that I can proceed with HDFS-6407 as well. Thanks!

> Improve visualization for the Datanode tab in the NN UI
> ---
>
> Key: HDFS-8816
> URL: https://issues.apache.org/jira/browse/HDFS-8816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8816.000.patch, HDFS-8816.001.patch, 
> HDFS-8816.002.patch, Screen Shot 2015-07-23 at 10.24.24 AM.png
>
>
> The information in the datanode tab of the NN UI is cluttered. This jira 
> proposes to improve the visualization of the datanode tab in the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8816) Improve visualization for the Datanode tab in the NN UI

2015-07-24 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640776#comment-14640776
 ] 

Jing Zhao commented on HDFS-8816:
-

The latest patch looks good to me. +1.

> Improve visualization for the Datanode tab in the NN UI
> ---
>
> Key: HDFS-8816
> URL: https://issues.apache.org/jira/browse/HDFS-8816
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8816.000.patch, HDFS-8816.001.patch, 
> HDFS-8816.002.patch, Screen Shot 2015-07-23 at 10.24.24 AM.png
>
>
> The information in the datanode tab of the NN UI is cluttered. This jira 
> proposes to improve the visualization of the datanode tab in the UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8804) Erasure Coding: use DirectBufferPool in DFSStripedInputStream for buffer allocation

2015-07-24 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640768#comment-14640768
 ] 

Jing Zhao commented on HDFS-8804:
-

Both {{duplicate}} and {{slice}} only adjust the limit/position, while the 
content of the buffer is shared. So calling {{duplicate}} ensures we do not 
need to change the limit/position of the original buffer directly.
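
A small stand-alone demonstration of that behavior, using only the JDK:

{code}
import java.nio.ByteBuffer;

public class DuplicateDemo {
  public static void main(String[] args) {
    // duplicate() shares the content but keeps an independent position/limit.
    ByteBuffer buf = ByteBuffer.allocateDirect(16);
    ByteBuffer dup = buf.duplicate();
    dup.position(8);
    dup.limit(12);              // does not disturb buf's cursors
    dup.put((byte) 1);          // ...but the write is visible through buf
    System.out.println(buf.position() + " " + buf.limit()); // prints: 0 16
    System.out.println(buf.get(8));                         // prints: 1
  }
}
{code}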

> Erasure Coding: use DirectBufferPool in DFSStripedInputStream for buffer 
> allocation
> ---
>
> Key: HDFS-8804
> URL: https://issues.apache.org/jira/browse/HDFS-8804
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-8804.000.patch
>
>
> Currently we directly allocate direct ByteBuffers in DFSStripedInputStream 
> for the stripe buffer and the buffers holding parity data. It's better to 
> get the ByteBuffers from DirectBufferPool.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8806) Inconsistent metrics: number of missing blocks with replication factor 1 not properly cleared

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640689#comment-14640689
 ] 

Hudson commented on HDFS-8806:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #263 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/263/])
HDFS-8806. Inconsistent metrics: number of missing blocks with replication 
factor 1 not properly cleared. Contributed by Zhe Zhang. (aajisaka: rev 
206d4933a567147b62f463c2daa3d063ad40822b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Inconsistent metrics: number of missing blocks with replication factor 1 not 
> properly cleared
> -
>
> Key: HDFS-8806
> URL: https://issues.apache.org/jira/browse/HDFS-8806
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>  Labels: metrics
> Fix For: 2.7.2
>
> Attachments: HDFS-8806.00.patch, HDFS-8806.01.patch, 
> HDFS-8806.02.patch
>
>
> HDFS-7165 introduced a new metric for _number of missing blocks with 
> replication factor 1_. It is maintained as 
> {{UnderReplicatedBlocks#corruptReplOneBlocks}}. However, that variable is not 
> reset when other {{UnderReplicatedBlocks}} are cleared.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6682) Add a metric to expose the timestamp of the oldest under-replicated block

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640691#comment-14640691
 ] 

Hudson commented on HDFS-6682:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #263 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/263/])
HDFS-6682. Add a metric to expose the timestamp of the oldest under-replicated 
block. (aajisaka) (aajisaka: rev 02c01815eca656814febcdaca6115e5f53b9c746)
* hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


> Add a metric to expose the timestamp of the oldest under-replicated block
> -
>
> Key: HDFS-6682
> URL: https://issues.apache.org/jira/browse/HDFS-6682
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: metrics
> Fix For: 2.8.0
>
> Attachments: HDFS-6682.002.patch, HDFS-6682.003.patch, 
> HDFS-6682.004.patch, HDFS-6682.005.patch, HDFS-6682.006.patch, HDFS-6682.patch
>
>
> In the following case, the data in HDFS is lost and a client needs to put 
> the same file again.
> # A client puts a file to HDFS
> # A DataNode crashes before replicating a block of the file to other DataNodes
> I propose a metric to expose the timestamp of the oldest 
> under-replicated/corrupt block. That way the client can know which file to 
> retain for the retry.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8730) Clean up the import statements in ClientProtocol

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640686#comment-14640686
 ] 

Hudson commented on HDFS-8730:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #263 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/263/])
HDFS-8730. Clean up the import statements in ClientProtocol. Contributed by 
Takanobu Asanuma. (wheat9: rev 813cf89bb56ad1a48b35fd44644d63540e8fa7d1)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java


> Clean up the import statements in ClientProtocol
> 
>
> Key: HDFS-8730
> URL: https://issues.apache.org/jira/browse/HDFS-8730
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8730.0.patch
>
>
> There are some checkstyle warnings generated by HDFS-8620 in ClientProtocol. 
> They were about the import statements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8815) DFS getStoragePolicy implementation using single RPC call

2015-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640655#comment-14640655
 ] 

Hadoop QA commented on HDFS-8815:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  20m  2s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m 15s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 18s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 36s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 40s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 12s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 160m 49s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 29s | Tests passed in 
hadoop-hdfs-client. |
| | | 212m 53s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.ha.TestStandbyIsHot |
|   | hadoop.hdfs.TestDistributedFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12747014/HDFS-8815-001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 206d493 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11828/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11828/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11828/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11828/console |


This message was automatically generated.

> DFS getStoragePolicy implementation using single RPC call
> -
>
> Key: HDFS-8815
> URL: https://issues.apache.org/jira/browse/HDFS-8815
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8815-001.patch
>
>
> HADOOP-12161 introduced a new {{FileSystem#getStoragePolicy}} call. The DFS 
> implementation of the call requires two RPC calls, the first to fetch the 
> storage policy ID and the second to fetch the policy suite to map the policy 
> ID to a {{BlockStoragePolicySpi}}.
> Fix the implementation to require a single RPC call.
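
A hedged sketch of the before/after RPC shape; the names below are 
placeholders, not the committed signatures:

{code}
// Placeholder types and names for illustration only.
final class StoragePolicySketch {
  final byte id;
  final String name;
  StoragePolicySketch(byte id, String name) { this.id = id; this.name = name; }
}

interface NameNodeRpcSketch {
  // Current client path: two round trips, joined on the client.
  byte getStoragePolicyId(String path);               // RPC #1: the policy ID
  StoragePolicySketch[] getStoragePolicySuite();      // RPC #2: the whole suite
  // Proposed: the NameNode resolves the ID to a policy and returns it.
  StoragePolicySketch getStoragePolicy(String path);  // single RPC
}
{code}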



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6682) Add a metric to expose the timestamp of the oldest under-replicated block

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640642#comment-14640642
 ] 

Hudson commented on HDFS-6682:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2212 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2212/])
HDFS-6682. Add a metric to expose the timestamp of the oldest under-replicated 
block. (aajisaka) (aajisaka: rev 02c01815eca656814febcdaca6115e5f53b9c746)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


> Add a metric to expose the timestamp of the oldest under-replicated block
> -
>
> Key: HDFS-6682
> URL: https://issues.apache.org/jira/browse/HDFS-6682
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: metrics
> Fix For: 2.8.0
>
> Attachments: HDFS-6682.002.patch, HDFS-6682.003.patch, 
> HDFS-6682.004.patch, HDFS-6682.005.patch, HDFS-6682.006.patch, HDFS-6682.patch
>
>
> In the following case, the data in HDFS is lost and a client needs to put 
> the same file again.
> # A client puts a file to HDFS
> # A DataNode crashes before replicating a block of the file to other DataNodes
> I propose a metric to expose the timestamp of the oldest 
> under-replicated/corrupt block. That way the client can know which file to 
> retain for the retry.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8806) Inconsistent metrics: number of missing blocks with replication factor 1 not properly cleared

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640640#comment-14640640
 ] 

Hudson commented on HDFS-8806:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2212 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2212/])
HDFS-8806. Inconsistent metrics: number of missing blocks with replication 
factor 1 not properly cleared. Contributed by Zhe Zhang. (aajisaka: rev 
206d4933a567147b62f463c2daa3d063ad40822b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Inconsistent metrics: number of missing blocks with replication factor 1 not 
> properly cleared
> -
>
> Key: HDFS-8806
> URL: https://issues.apache.org/jira/browse/HDFS-8806
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>  Labels: metrics
> Fix For: 2.7.2
>
> Attachments: HDFS-8806.00.patch, HDFS-8806.01.patch, 
> HDFS-8806.02.patch
>
>
> HDFS-7165 introduced a new metric for _number of missing blocks with 
> replication factor 1_. It is maintained as 
> {{UnderReplicatedBlocks#corruptReplOneBlocks}}. However, that variable is not 
> reset when other {{UnderReplicatedBlocks}} are cleared.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8730) Clean up the import statements in ClientProtocol

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640637#comment-14640637
 ] 

Hudson commented on HDFS-8730:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2212 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2212/])
HDFS-8730. Clean up the import statements in ClientProtocol. Contributed by 
Takanobu Asanuma. (wheat9: rev 813cf89bb56ad1a48b35fd44644d63540e8fa7d1)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java


> Clean up the import statements in ClientProtocol
> 
>
> Key: HDFS-8730
> URL: https://issues.apache.org/jira/browse/HDFS-8730
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8730.0.patch
>
>
> There are some checkstyle warnings generated by HDFS-8620 in ClientProtocol. 
> They were about the import statements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8622) Implement GETCONTENTSUMMARY operation for WebImageViewer

2015-07-24 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640619#comment-14640619
 ] 

Akira AJISAKA commented on HDFS-8622:
-

Thanks Jagadesh for the comment.

bq. do you think it will do some value add if I add code for content summary?
It makes sense to me because we cannot predict when the feature will be 
enabled again. It would be better to prepare the code to support symlinks.

> Implement GETCONTENTSUMMARY operation for WebImageViewer
> 
>
> Key: HDFS-8622
> URL: https://issues.apache.org/jira/browse/HDFS-8622
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-8622-00.patch, HDFS-8622-01.patch, 
> HDFS-8622-02.patch, HDFS-8622-03.patch, HDFS-8622-04.patch
>
>
>  It would be better for administrators if {code} GETCONTENTSUMMARY {code} is 
> supported.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8806) Inconsistent metrics: number of missing blocks with replication factor 1 not properly cleared

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640564#comment-14640564
 ] 

Hudson commented on HDFS-8806:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #255 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/255/])
HDFS-8806. Inconsistent metrics: number of missing blocks with replication 
factor 1 not properly cleared. Contributed by Zhe Zhang. (aajisaka: rev 
206d4933a567147b62f463c2daa3d063ad40822b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Inconsistent metrics: number of missing blocks with replication factor 1 not 
> properly cleared
> -
>
> Key: HDFS-8806
> URL: https://issues.apache.org/jira/browse/HDFS-8806
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>  Labels: metrics
> Fix For: 2.7.2
>
> Attachments: HDFS-8806.00.patch, HDFS-8806.01.patch, 
> HDFS-8806.02.patch
>
>
> HDFS-7165 introduced a new metric for _number of missing blocks with 
> replication factor 1_. It is maintained as 
> {{UnderReplicatedBlocks#corruptReplOneBlocks}}. However, that variable is not 
> reset when other {{UnderReplicatedBlocks}} are cleared.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8730) Clean up the import statements in ClientProtocol

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640561#comment-14640561
 ] 

Hudson commented on HDFS-8730:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #255 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/255/])
HDFS-8730. Clean up the import statements in ClientProtocol. Contributed by 
Takanobu Asanuma. (wheat9: rev 813cf89bb56ad1a48b35fd44644d63540e8fa7d1)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java


> Clean up the import statements in ClientProtocol
> 
>
> Key: HDFS-8730
> URL: https://issues.apache.org/jira/browse/HDFS-8730
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8730.0.patch
>
>
> There are some checkstyle warnings generated by HDFS-8620 in ClientProtocol. 
> They were about the import statements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6682) Add a metric to expose the timestamp of the oldest under-replicated block

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640566#comment-14640566
 ] 

Hudson commented on HDFS-6682:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #255 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/255/])
HDFS-6682. Add a metric to expose the timestamp of the oldest under-replicated 
block. (aajisaka) (aajisaka: rev 02c01815eca656814febcdaca6115e5f53b9c746)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> Add a metric to expose the timestamp of the oldest under-replicated block
> -
>
> Key: HDFS-6682
> URL: https://issues.apache.org/jira/browse/HDFS-6682
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: metrics
> Fix For: 2.8.0
>
> Attachments: HDFS-6682.002.patch, HDFS-6682.003.patch, 
> HDFS-6682.004.patch, HDFS-6682.005.patch, HDFS-6682.006.patch, HDFS-6682.patch
>
>
> In the following case, the data in HDFS is lost and a client needs to put 
> the same file again.
> # A client puts a file to HDFS
> # A DataNode crashes before replicating a block of the file to other DataNodes
> I propose a metric to expose the timestamp of the oldest 
> under-replicated/corrupt block. That way the client can know which file to 
> retain for the retry.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8806) Inconsistent metrics: number of missing blocks with replication factor 1 not properly cleared

2015-07-24 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640540#comment-14640540
 ] 

Zhe Zhang commented on HDFS-8806:
-

Thanks Akira for reviewing and committing the patch!

> Inconsistent metrics: number of missing blocks with replication factor 1 not 
> properly cleared
> -
>
> Key: HDFS-8806
> URL: https://issues.apache.org/jira/browse/HDFS-8806
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>  Labels: metrics
> Fix For: 2.7.2
>
> Attachments: HDFS-8806.00.patch, HDFS-8806.01.patch, 
> HDFS-8806.02.patch
>
>
> HDFS-7165 introduced a new metric for _number of missing blocks with 
> replication factor 1_. It is maintained as 
> {{UnderReplicatedBlocks#corruptReplOneBlocks}}. However, that variable is not 
> reset when other {{UnderReplicatedBlocks}} are cleared.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8806) Inconsistent metrics: number of missing blocks with replication factor 1 not properly cleared

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640534#comment-14640534
 ] 

Hudson commented on HDFS-8806:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2193 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2193/])
HDFS-8806. Inconsistent metrics: number of missing blocks with replication 
factor 1 not properly cleared. Contributed by Zhe Zhang. (aajisaka: rev 
206d4933a567147b62f463c2daa3d063ad40822b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Inconsistent metrics: number of missing blocks with replication factor 1 not 
> properly cleared
> -
>
> Key: HDFS-8806
> URL: https://issues.apache.org/jira/browse/HDFS-8806
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>  Labels: metrics
> Fix For: 2.7.2
>
> Attachments: HDFS-8806.00.patch, HDFS-8806.01.patch, 
> HDFS-8806.02.patch
>
>
> HDFS-7165 introduced a new metric for _number of missing blocks with 
> replication factor 1_. It is maintained as 
> {{UnderReplicatedBlocks#corruptReplOneBlocks}}. However, that variable is not 
> reset when the {{UnderReplicatedBlocks}} queues are cleared.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8730) Clean up the import statements in ClientProtocol

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640531#comment-14640531
 ] 

Hudson commented on HDFS-8730:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2193 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2193/])
HDFS-8730. Clean up the import statements in ClientProtocol. Contributed by 
Takanobu Asanuma. (wheat9: rev 813cf89bb56ad1a48b35fd44644d63540e8fa7d1)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java


> Clean up the import statements in ClientProtocol
> 
>
> Key: HDFS-8730
> URL: https://issues.apache.org/jira/browse/HDFS-8730
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8730.0.patch
>
>
> There are some checkstyle warnings generated by HDFS-8620 in ClientProtocol. 
> They were about the import statements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6682) Add a metric to expose the timestamp of the oldest under-replicated block

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640536#comment-14640536
 ] 

Hudson commented on HDFS-6682:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2193 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2193/])
HDFS-6682. Add a metric to expose the timestamp of the oldest under-replicated 
block. (aajisaka) (aajisaka: rev 02c01815eca656814febcdaca6115e5f53b9c746)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java


> Add a metric to expose the timestamp of the oldest under-replicated block
> -
>
> Key: HDFS-6682
> URL: https://issues.apache.org/jira/browse/HDFS-6682
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: metrics
> Fix For: 2.8.0
>
> Attachments: HDFS-6682.002.patch, HDFS-6682.003.patch, 
> HDFS-6682.004.patch, HDFS-6682.005.patch, HDFS-6682.006.patch, HDFS-6682.patch
>
>
> In the following case, the data in HDFS is lost and a client needs to put 
> the same file again.
> # A Client puts a file to HDFS
> # A DataNode crashes before replicating a block of the file to other DataNodes
> I propose a metric to expose the timestamp of the oldest 
> under-replicated/corrupt block. That way the client can know which file to 
> retain for the retry.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8798) Erasure Coding: fix DFSStripedInputStream/DFSStripedOutputStream re-fetch token when expired

2015-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640520#comment-14640520
 ] 

Hadoop QA commented on HDFS-8798:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  15m 17s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 4 new or modified test files. |
| {color:green}+1{color} | javac |   7m 38s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 47s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 38s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 26s | The patch appears to introduce 5 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 17s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 174m 12s | Tests failed in hadoop-hdfs. |
| | | 216m 46s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.TestAppendSnapshotTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12747002/HDFS-8798-HDFS-7285.02.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / c2c26e6 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11827/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11827/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11827/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11827/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11827/console |


This message was automatically generated.

> Erasure Coding: fix DFSStripedInputStream/DFSStripedOutputStream re-fetch 
> token when expired
> 
>
> Key: HDFS-8798
> URL: https://issues.apache.org/jira/browse/HDFS-8798
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-8798-HDFS-7285.02.patch, HDFS-8798.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8622) Implement GETCONTENTSUMMARY operation for WebImageViewer

2015-07-24 Thread Jagadesh Kiran N (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640508#comment-14640508
 ] 

Jagadesh Kiran N commented on HDFS-8622:


HADOOP-10052 is also related to symlinks.

> Implement GETCONTENTSUMMARY operation for WebImageViewer
> 
>
> Key: HDFS-8622
> URL: https://issues.apache.org/jira/browse/HDFS-8622
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-8622-00.patch, HDFS-8622-01.patch, 
> HDFS-8622-02.patch, HDFS-8622-03.patch, HDFS-8622-04.patch
>
>
> It would be better for administrators if {code}GETCONTENTSUMMARY{code} were 
> supported.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8622) Implement GETCONTENTSUMMARY operation for WebImageViewer

2015-07-24 Thread Jagadesh Kiran N (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640487#comment-14640487
 ] 

Jagadesh Kiran N commented on HDFS-8622:


Hi [~ajisakaa], regarding symlinks: as per my understanding they are currently 
disabled (HADOOP-10020). Do you think it would add value if I add code for them 
in the content summary? Please let me know your views so that I can go ahead.

> Implement GETCONTENTSUMMARY operation for WebImageViewer
> 
>
> Key: HDFS-8622
> URL: https://issues.apache.org/jira/browse/HDFS-8622
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-8622-00.patch, HDFS-8622-01.patch, 
> HDFS-8622-02.patch, HDFS-8622-03.patch, HDFS-8622-04.patch
>
>
> It would be better for administrators if {code}GETCONTENTSUMMARY{code} were 
> supported.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8805) Archival Storage: getStoragePolicy should not need superuser privilege

2015-07-24 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640483#comment-14640483
 ] 

Brahma Reddy Battula commented on HDFS-8805:


@jing zhao and [~huizane], thanks a lot for your inputs. We just need to remove 
the superuser check in {{FSDirStatAndListingOp#getFileInfo}}; I have attached 
the patch for the same. I verified it manually, and the results are below. I 
did not see a better way to write a test case.

{noformat}
$ ./hdfs storagepolicies -setStoragePolicy -path /BBP -policy COLD
Set storage policy COLD on /BBP
{noformat}
 *without Patch* 
{noformat}
host1 bin# ./hdfs storagepolicies -getStoragePolicy -path /BBP
The storage policy of /BBP is unspecified
{noformat}
 *with Patch* 
{noformat}
host1 bin# ./hdfs storagepolicies -getStoragePolicy -path /BBP
The storage policy of /BBP:
BlockStoragePolicy{COLD:2, storageTypes=[ARCHIVE], creationFallbacks=[], 
replicationFallbacks=[]}
{noformat}
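
For readers following along, a minimal sketch of the shape of the change, with hypothetical names; the real fix is the superuser check removed from {{FSDirStatAndListingOp#getFileInfo}}:

{code}
// Hypothetical sketch only (not the committed code): before the patch the
// storage policy ID was hidden from non-superusers, so getStoragePolicy
// printed "unspecified" for ordinary callers.
class StoragePolicyVisibility {
  static byte resolvePolicyId(boolean isSuperUser, byte storedPolicyId,
      byte idUnspecified) {
    // before: return isSuperUser ? storedPolicyId : idUnspecified;
    return storedPolicyId; // after: the stored policy ID is always returned
  }
}
{code}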

> Archival Storage: getStoragePolicy should not need superuser privilege
> --
>
> Key: HDFS-8805
> URL: https://issues.apache.org/jira/browse/HDFS-8805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, namenode
>Reporter: Hui Zheng
>Assignee: Brahma Reddy Battula
> Fix For: 2.6.0
>
> Attachments: HDFS-8805.patch
>
>
> The result of the getStoragePolicy command is always 'unspecified' even 
> after we have set a storage policy on a directory, although the real 
> placement of blocks is correct.
> The result of fsck is not correct either.
> {code}
> $ hdfs storagepolicies -setStoragePolicy -path /tmp/cold  -policy COLD
> Set storage policy COLD on /tmp/cold
> $ hdfs storagepolicies -getStoragePolicy -path /tmp/cold
> The storage policy of /tmp/cold is unspecified
> $ hdfs fsck -storagepolicies /tmp/cold
> Blocks NOT satisfying the specified storage policy:
> Storage Policy  Specified Storage Policy  # of blocks  % of blocks
> ARCHIVE:4(COLD) HOT   5   55.5556%
> ARCHIVE:3(COLD) HOT   4   44.4444%
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8805) Archival Storage: getStoragePolicy should not need superuser privilege

2015-07-24 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8805:
---
Status: Patch Available  (was: Open)

> Archival Storage: getStoragePolicy should not need superuser privilege
> --
>
> Key: HDFS-8805
> URL: https://issues.apache.org/jira/browse/HDFS-8805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, namenode
>Reporter: Hui Zheng
>Assignee: Brahma Reddy Battula
> Fix For: 2.6.0
>
> Attachments: HDFS-8805.patch
>
>
> The result of the getStoragePolicy command is always 'unspecified' even 
> after we have set a storage policy on a directory, although the real 
> placement of blocks is correct.
> The result of fsck is not correct either.
> {code}
> $ hdfs storagepolicies -setStoragePolicy -path /tmp/cold  -policy COLD
> Set storage policy COLD on /tmp/cold
> $ hdfs storagepolicies -getStoragePolicy -path /tmp/cold
> The storage policy of /tmp/cold is unspecified
> $ hdfs fsck -storagepolicies /tmp/cold
> Blocks NOT satisfying the specified storage policy:
> Storage Policy  Specified Storage Policy  # of blocks  % of blocks
> ARCHIVE:4(COLD) HOT   5   55.5556%
> ARCHIVE:3(COLD) HOT   4   44.4444%
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8805) Archival Storage: getStoragePolicy should not need superuser privilege

2015-07-24 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-8805:
---
Attachment: HDFS-8805.patch

> Archival Storage: getStoragePolicy should not need superuser privilege
> --
>
> Key: HDFS-8805
> URL: https://issues.apache.org/jira/browse/HDFS-8805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover, namenode
>Reporter: Hui Zheng
>Assignee: Brahma Reddy Battula
> Fix For: 2.6.0
>
> Attachments: HDFS-8805.patch
>
>
> The result of the getStoragePolicy command is always 'unspecified' even 
> after we have set a storage policy on a directory, although the real 
> placement of blocks is correct.
> The result of fsck is not correct either.
> {code}
> $ hdfs storagepolicies -setStoragePolicy -path /tmp/cold  -policy COLD
> Set storage policy COLD on /tmp/cold
> $ hdfs storagepolicies -getStoragePolicy -path /tmp/cold
> The storage policy of /tmp/cold is unspecified
> $ hdfs fsck -storagepolicies /tmp/cold
> Blocks NOT satisfying the specified storage policy:
> Storage Policy  Specified Storage Policy  # of blocks  % of blocks
> ARCHIVE:4(COLD) HOT   5   55.5556%
> ARCHIVE:3(COLD) HOT   4   44.4444%
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8798) Erasure Coding: fix DFSStripedInputStream/DFSStripedOutputStream re-fetch token when expired

2015-07-24 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8798:

Summary: Erasure Coding: fix DFSStripedInputStream/DFSStripedOutputStream 
re-fetch token when expired  (was: Erasure Coding: fix 
DFSStripedInputStream/OuputStream re-fetch token when expired)

> Erasure Coding: fix DFSStripedInputStream/DFSStripedOutputStream re-fetch 
> token when expired
> 
>
> Key: HDFS-8798
> URL: https://issues.apache.org/jira/browse/HDFS-8798
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-8798-HDFS-7285.02.patch, HDFS-8798.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8695) OzoneHandler : Add Bucket REST Interface

2015-07-24 Thread kanaka kumar avvaru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640408#comment-14640408
 ] 

kanaka kumar avvaru commented on HDFS-8695:
---

Thanks for the patch [~anu]. A few comments:

1) {{BucketHandler#createBucket()}}: parsing of remove-ACLs is not required; 
in fact, it should result in an error for an unexpected parameter.

2) {{BucketHandler#updateBucket()}}: as per the design specification, there is 
currently no plan to support bucket quota. But since BucketArgs extends 
VolumeArgs, the OzoneQuota parameter in the request can be rejected as an 
UnsupportedOperation for now.

3) Also, can you update the javadoc to describe the currently supported 
operations during bucket update, so that we can validate the interface 
contract in the implementation?

4) A new {{BUCKET_NOT_FOUND}} error needs to be added in {{ErrorTable}}.

5) {{BucketHandler#listBucket}}: as per the design document, I think 
OZONE_LIST_QUERY_BUCKET should list the buckets in a volume and is supposed to 
be handled in {{VolumeHandler#getVolumeInfo(...)}} (there is already a TODO to 
handle this).

6) {{BucketHandler#listBucket}}: can you remove the javadoc on doProcess() to 
improve readability, as it's already documented in BucketProcessTemplate?


AFAIK an {{OZONE_LIST_QUERY_SERVICE}} request should list the contents of a 
container (please correct me if not), i.e.:
* volumes for the user when called on the root path
* the bucket list when called on a volume
* the object list when called on a bucket


7) If so, in {{BucketHandler#listBucket}}, OZONE_LIST_QUERY_BUCKET has to be 
replaced with OZONE_LIST_QUERY_SERVICE for getting getBucketInfoResponse(). 
A sketch of this dispatch follows below.
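
A hedged sketch of the dispatch described above, with illustrative names rather than the Ozone handler API:

{code}
// Hypothetical sketch of the list-query dispatch: the same
// OZONE_LIST_QUERY_SERVICE query lists different things depending on
// which path level it is issued against.
class ListQueryDispatcher {
  String handleListQuery(String path) {
    int depth = path.split("/").length; // "/" -> 1, "/vol" -> 2, "/vol/bkt" -> 3
    if (depth <= 1) {
      return "volumes for the calling user";   // root path
    } else if (depth == 2) {
      return "buckets in volume " + path;      // volume path
    } else {
      return "objects in bucket " + path;      // bucket path
    }
  }
}
{code}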

> OzoneHandler : Add Bucket REST Interface
> 
>
> Key: HDFS-8695
> URL: https://issues.apache.org/jira/browse/HDFS-8695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: hdfs-8695-HDFS-7240.001.patch
>
>
> Add Bucket REST interface into Ozone server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7858) Improve HA Namenode Failover detection on the client

2015-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640395#comment-14640395
 ] 

Hadoop QA commented on HDFS-7858:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  26m 54s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  1s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   9m 44s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 53s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 27s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m 37s | Site still builds. |
| {color:green}+1{color} | checkstyle |   2m 39s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 45s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 43s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   5m 33s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  25m 29s | Tests failed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 161m 41s | Tests failed in hadoop-hdfs. |
| | | 250m 31s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.hdfs.TestAppendSnapshotTruncate |
|   | hadoop.hdfs.server.namenode.ha.TestRequestHedgingProxyProvider |
|   | hadoop.hdfs.TestDistributedFileSystem |
| Timed out tests | 
org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12746976/HDFS-7858.9.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / e202efa |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11826/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11826/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11826/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11826/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11826/console |


This message was automatically generated.

> Improve HA Namenode Failover detection on the client
> 
>
> Key: HDFS-7858
> URL: https://issues.apache.org/jira/browse/HDFS-7858
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7858.1.patch, HDFS-7858.2.patch, HDFS-7858.2.patch, 
> HDFS-7858.3.patch, HDFS-7858.4.patch, HDFS-7858.5.patch, HDFS-7858.6.patch, 
> HDFS-7858.7.patch, HDFS-7858.8.patch, HDFS-7858.9.patch
>
>
> In an HA deployment, clients are configured with the hostnames of both the 
> Active and Standby NameNodes. Clients will first try one of the NNs 
> (non-deterministically), and if it is a standby NN, it will respond to the 
> client to retry the request on the other NameNode.
> If the client happens to talk to the Standby first, and the standby is 
> undergoing some GC / is busy, then those clients might not get a response 
> soon enough to try the other NN.
> Proposed approach to solve this:
> 1) Since ZooKeeper is already used as the failover controller, the clients 
> could talk to ZK and find out which is the active NameNode before contacting 
> it (a hedged sketch of this lookup follows below).
> 2) Long-lived DFSClients would have a ZK watch configured which fires when 
> there is a failover, so they do not have to query ZK every time to find out 
> the active NN.
> 3) Clients can also cache the last active NN in the user's home directory 
> (~/.lastNN) so that short-lived clients can try that NameNode first before 
> querying ZK.
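
A hedged sketch of the lookup in step 1, assuming the standard ZKFC layout under /hadoop-ha; the znode name and the decoding of its payload are assumptions here, not a committed implementation:

{code}
import org.apache.zookeeper.ZooKeeper;

// Hypothetical sketch: ask ZooKeeper which NameNode is active before
// issuing the first RPC, instead of probing the NNs blindly.
class ActiveNameNodeLookup {
  byte[] lookupActiveInfo(String zkQuorum, String nameserviceId)
      throws Exception {
    ZooKeeper zk = new ZooKeeper(zkQuorum, 5000, null);
    try {
      // The ZKFC publishes the active NN's identity under this znode;
      // a client could decode it and try that NameNode first.
      return zk.getData("/hadoop-ha/" + nameserviceId + "/ActiveBreadCrumb",
          false, null);
    } finally {
      zk.close();
    }
  }
}
{code}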



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8730) Clean up the import statements in ClientProtocol

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640374#comment-14640374
 ] 

Hudson commented on HDFS-8730:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #266 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/266/])
HDFS-8730. Clean up the import statements in ClientProtocol. Contributed by 
Takanobu Asanuma. (wheat9: rev 813cf89bb56ad1a48b35fd44644d63540e8fa7d1)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Clean up the import statements in ClientProtocol
> 
>
> Key: HDFS-8730
> URL: https://issues.apache.org/jira/browse/HDFS-8730
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8730.0.patch
>
>
> There are some checkstyle warnings generated by HDFS-8620 in ClientProtocol. 
> They were about the import statements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6682) Add a metric to expose the timestamp of the oldest under-replicated block

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640379#comment-14640379
 ] 

Hudson commented on HDFS-6682:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #266 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/266/])
HDFS-6682. Add a metric to expose the timestamp of the oldest under-replicated 
block. (aajisaka) (aajisaka: rev 02c01815eca656814febcdaca6115e5f53b9c746)
* hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlocks.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java


> Add a metric to expose the timestamp of the oldest under-replicated block
> -
>
> Key: HDFS-6682
> URL: https://issues.apache.org/jira/browse/HDFS-6682
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: metrics
> Fix For: 2.8.0
>
> Attachments: HDFS-6682.002.patch, HDFS-6682.003.patch, 
> HDFS-6682.004.patch, HDFS-6682.005.patch, HDFS-6682.006.patch, HDFS-6682.patch
>
>
> In the following case, the data in HDFS is lost and a client needs to put 
> the same file again.
> # A Client puts a file to HDFS
> # A DataNode crashes before replicating a block of the file to other DataNodes
> I propose a metric to expose the timestamp of the oldest 
> under-replicated/corrupt block. That way the client can know which file to 
> retain for the retry.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8806) Inconsistent metrics: number of missing blocks with replication factor 1 not properly cleared

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640377#comment-14640377
 ] 

Hudson commented on HDFS-8806:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #266 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/266/])
HDFS-8806. Inconsistent metrics: number of missing blocks with replication 
factor 1 not properly cleared. Contributed by Zhe Zhang. (aajisaka: rev 
206d4933a567147b62f463c2daa3d063ad40822b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Inconsistent metrics: number of missing blocks with replication factor 1 not 
> properly cleared
> -
>
> Key: HDFS-8806
> URL: https://issues.apache.org/jira/browse/HDFS-8806
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>  Labels: metrics
> Fix For: 2.7.2
>
> Attachments: HDFS-8806.00.patch, HDFS-8806.01.patch, 
> HDFS-8806.02.patch
>
>
> HDFS-7165 introduced a new metric for _number of missing blocks with 
> replication factor 1_. It is maintained as 
> {{UnderReplicatedBlocks#corruptReplOneBlocks}}. However, that variable is not 
> reset when the {{UnderReplicatedBlocks}} queues are cleared.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8815) DFS getStoragePolicy implementation using single RPC call

2015-07-24 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-8815:
-
Status: Patch Available  (was: Open)

Attached an initial patch, please review.

I think we don't need to add a new test case; 
{{TestBlockStoragePolicy.testSetStoragePolicy()}} is enough to test this API.

> DFS getStoragePolicy implementation using single RPC call
> -
>
> Key: HDFS-8815
> URL: https://issues.apache.org/jira/browse/HDFS-8815
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8815-001.patch
>
>
> HADOOP-12161 introduced a new {{FileSystem#getStoragePolicy}} call. The DFS 
> implementation of the call requires two RPC calls, the first to fetch the 
> storage policy ID and the second to fetch the policy suite to map the policy 
> ID to a {{BlockStoragePolicySpi}}.
> Fix the implementation to require a single RPC call.
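
A hedged sketch of the intended client-side shape, with illustrative interface and method names rather than the actual ClientProtocol changes:

{code}
// Hypothetical sketch: collapse the two round trips into a single RPC
// that returns the resolved policy directly.
interface NameNodeRpc {
  String getStoragePolicy(String src); // one round trip, resolved server-side
}

class DfsPolicyClient {
  private final NameNodeRpc namenode;

  DfsPolicyClient(NameNodeRpc namenode) {
    this.namenode = namenode;
  }

  String getStoragePolicy(String src) {
    // before: getFileInfo(src) -> policy ID, then getStoragePolicies()
    // -> suite lookup by ID (two RPCs).
    // after: a single call that does the lookup on the NameNode side.
    return namenode.getStoragePolicy(src);
  }
}
{code}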



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8815) DFS getStoragePolicy implementation using single RPC call

2015-07-24 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-8815:
-
Attachment: HDFS-8815-001.patch

> DFS getStoragePolicy implementation using single RPC call
> -
>
> Key: HDFS-8815
> URL: https://issues.apache.org/jira/browse/HDFS-8815
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8815-001.patch
>
>
> HADOOP-12161 introduced a new {{FileSystem#getStoragePolicy}} call. The DFS 
> implementation of the call requires two RPC calls, the first to fetch the 
> storage policy ID and the second to fetch the policy suite to map the policy 
> ID to a {{BlockStoragePolicySpi}}.
> Fix the implementation to require a single RPC call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6682) Add a metric to expose the timestamp of the oldest under-replicated block

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640297#comment-14640297
 ] 

Hudson commented on HDFS-6682:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #996 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/996/])
HDFS-6682. Add a metric to expose the timestamp of the oldest under-replicated 
block. (aajisaka) (aajisaka: rev 02c01815eca656814febcdaca6115e5f53b9c746)
* hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> Add a metric to expose the timestamp of the oldest under-replicated block
> -
>
> Key: HDFS-6682
> URL: https://issues.apache.org/jira/browse/HDFS-6682
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: metrics
> Fix For: 2.8.0
>
> Attachments: HDFS-6682.002.patch, HDFS-6682.003.patch, 
> HDFS-6682.004.patch, HDFS-6682.005.patch, HDFS-6682.006.patch, HDFS-6682.patch
>
>
> In the following case, the data in HDFS is lost and a client needs to put 
> the same file again.
> # A Client puts a file to HDFS
> # A DataNode crashes before replicating a block of the file to other DataNodes
> I propose a metric to expose the timestamp of the oldest 
> under-replicated/corrupt block. That way the client can know which file to 
> retain for the retry.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8730) Clean up the import statements in ClientProtocol

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640292#comment-14640292
 ] 

Hudson commented on HDFS-8730:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #996 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/996/])
HDFS-8730. Clean up the import statements in ClientProtocol. Contributed by 
Takanobu Asanuma. (wheat9: rev 813cf89bb56ad1a48b35fd44644d63540e8fa7d1)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java


> Clean up the import statements in ClientProtocol
> 
>
> Key: HDFS-8730
> URL: https://issues.apache.org/jira/browse/HDFS-8730
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8730.0.patch
>
>
> There are some checkstyle warnings generated by HDFS-8620 in ClientProtocol. 
> They were about the import statements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8798) Erasure Coding: fix DFSStripedInputStream/OuputStream re-fetch token when expired

2015-07-24 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8798:

Summary: Erasure Coding: fix DFSStripedInputStream/OuputStream re-fetch 
token when expired  (was: Erasure Coding: fix the retry logic of 
DFSStripedInputStream)

> Erasure Coding: fix DFSStripedInputStream/OuputStream re-fetch token when 
> expired
> -
>
> Key: HDFS-8798
> URL: https://issues.apache.org/jira/browse/HDFS-8798
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-8798-HDFS-7285.02.patch, HDFS-8798.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8818) Allow Balancer to run faster

2015-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640298#comment-14640298
 ] 

Hadoop QA commented on HDFS-8818:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  21m 53s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |  10m  6s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 43s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 27s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 43s | The applied patch generated  7 
new checkstyle issues (total was 524, now 530). |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 6  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 40s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m  8s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 43s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 174m 34s | Tests failed in hadoop-hdfs. |
| | | 229m 40s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
|   | hadoop.hdfs.TestParallelShortCircuitReadUnCached |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12746966/h8818_20150723.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 02c0181 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11824/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11824/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11824/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11824/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11824/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11824/console |


This message was automatically generated.

> Allow Balancer to run faster
> 
>
> Key: HDFS-8818
> URL: https://issues.apache.org/jira/browse/HDFS-8818
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h8818_20150723.patch
>
>
> The original design of the Balancer intentionally makes it run slowly so 
> that the balancing activities won't affect the normal cluster activities and 
> the running jobs.
> There are new use cases where a cluster admin may choose to balance the 
> cluster when the cluster load is low, or in a maintenance window, so we 
> should have an option to allow the Balancer to run faster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8806) Inconsistent metrics: number of missing blocks with replication factor 1 not properly cleared

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640295#comment-14640295
 ] 

Hudson commented on HDFS-8806:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #996 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/996/])
HDFS-8806. Inconsistent metrics: number of missing blocks with replication 
factor 1 not properly cleared. Contributed by Zhe Zhang. (aajisaka: rev 
206d4933a567147b62f463c2daa3d063ad40822b)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java


> Inconsistent metrics: number of missing blocks with replication factor 1 not 
> properly cleared
> -
>
> Key: HDFS-8806
> URL: https://issues.apache.org/jira/browse/HDFS-8806
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>  Labels: metrics
> Fix For: 2.7.2
>
> Attachments: HDFS-8806.00.patch, HDFS-8806.01.patch, 
> HDFS-8806.02.patch
>
>
> HDFS-7165 introduced a new metric for _number of missing blocks with 
> replication factor 1_. It is maintained as 
> {{UnderReplicatedBlocks#corruptReplOneBlocks}}. However, that variable is not 
> reset when the {{UnderReplicatedBlocks}} queues are cleared.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8798) Erasure Coding: fix the retry logic of DFSStripedInputStream

2015-07-24 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8798:

Status: Patch Available  (was: Open)

> Erasure Coding: fix the retry logic of DFSStripedInputStream
> 
>
> Key: HDFS-8798
> URL: https://issues.apache.org/jira/browse/HDFS-8798
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-8798-HDFS-7285.02.patch, HDFS-8798.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8798) Erasure Coding: fix the retry logic of DFSStripedInputStream

2015-07-24 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8798:

Attachment: HDFS-8798-HDFS-7285.02.patch

Uploaded the 02 patch. The patch does the following:

1. removed the {{refreshLocatedBlock}} rename

2. removed the retry logic for connection failures. I kept
{code}
+// re-fetch the block in case the block has been moved
+fetchBlockAt(block.getStartOffset());
{code}
because TestBlockTokenWithDFS.testRead() has a special case that restarts all 
DNs on new locations/ports. I keep these 2 lines so that the test can pass. 
It's a workaround, so I added a {{TODO}}.

3. fixed the OutputStream handling of block tokens and added a test (the 
re-fetch pattern is sketched below).

4. added a test for the Balancer moving striped blocks with tokens enabled.
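
For context, a hedged sketch of the re-fetch-on-expiry pattern described in points 2 and 3; the types and helper names are hypothetical, not the DFSStripedInputStream code:

{code}
// Hypothetical sketch (names illustrative): when a DataNode rejects an
// expired block token, re-fetch the LocatedBlock from the NameNode --
// which carries a fresh token -- and retry once, instead of marking
// the DataNode dead.
class TokenExpiredException extends Exception {}

interface StripedReadContext {
  int readChunk() throws TokenExpiredException;
  void refetchBlock(); // new LocatedBlock + fresh token from the NN
}

class TokenRetryReader {
  int readWithTokenRetry(StripedReadContext ctx) throws TokenExpiredException {
    try {
      return ctx.readChunk();
    } catch (TokenExpiredException e) {
      ctx.refetchBlock();
      return ctx.readChunk(); // single retry with the refreshed token
    }
  }
}
{code}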

> Erasure Coding: fix the retry logic of DFSStripedInputStream
> 
>
> Key: HDFS-8798
> URL: https://issues.apache.org/jira/browse/HDFS-8798
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
> Attachments: HDFS-8798-HDFS-7285.02.patch, HDFS-8798.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8735) Inotify : All events classes should implement toString() API.

2015-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640278#comment-14640278
 ] 

Hadoop QA commented on HDFS-8735:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m 15s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 48s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 47s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 54s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 22s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 27s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m  8s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 159m 32s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 30s | Tests passed in 
hadoop-hdfs-client. |
| | | 209m 44s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.qjournal.TestSecureNNWithQJM |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.TestDistributedFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12746968/HDFS-8735-004.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 02c0181 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11825/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11825/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11825/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11825/console |


This message was automatically generated.

> Inotify : All events classes should implement toString() API.
> -
>
> Key: HDFS-8735
> URL: https://issues.apache.org/jira/browse/HDFS-8735
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.7.0
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-8735-002.patch, HDFS-8735-003.patch, 
> HDFS-8735-004.patch, HDFS-8735.01.patch, HDFS-8735.patch
>
>
> Event classes are used by clients, so it's good to implement the toString() API.
> {code}
> for(Event event : events){
>   System.out.println(event.toString());
> }
> {code}
> Currently this gives output like this:
> {code}
> org.apache.hadoop.hdfs.inotify.Event$CreateEvent@6916d97d
> {code}
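
A minimal sketch of the kind of override proposed, using illustrative fields; the real event classes carry more attributes:

{code}
// Hypothetical sketch of a CreateEvent-style toString() override:
class CreateEventExample {
  private final String path;
  private final long ctime;
  private final int replication;

  CreateEventExample(String path, long ctime, int replication) {
    this.path = path;
    this.ctime = ctime;
    this.replication = replication;
  }

  @Override
  public String toString() {
    // human-readable form instead of the default ClassName@hashCode
    return "CreateEvent [path=" + path + ", ctime=" + ctime
        + ", replication=" + replication + "]";
  }
}
{code}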



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8806) Inconsistent metrics: number of missing blocks with replication factor 1 not properly cleared

2015-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640244#comment-14640244
 ] 

Hudson commented on HDFS-8806:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8212 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8212/])
HDFS-8806. Inconsistent metrics: number of missing blocks with replication 
factor 1 not properly cleared. Contributed by Zhe Zhang. (aajisaka: rev 
206d4933a567147b62f463c2daa3d063ad40822b)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java


> Inconsistent metrics: number of missing blocks with replication factor 1 not 
> properly cleared
> -
>
> Key: HDFS-8806
> URL: https://issues.apache.org/jira/browse/HDFS-8806
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>  Labels: metrics
> Fix For: 2.7.2
>
> Attachments: HDFS-8806.00.patch, HDFS-8806.01.patch, 
> HDFS-8806.02.patch
>
>
> HDFS-7165 introduced a new metric for _number of missing blocks with 
> replication factor 1_. It is maintained as 
> {{UnderReplicatedBlocks#corruptReplOneBlocks}}. However, that variable is not 
> reset when the {{UnderReplicatedBlocks}} queues are cleared.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8802) dfs.checksum.type is not described in hdfs-default.xml

2015-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640239#comment-14640239
 ] 

Hadoop QA commented on HDFS-8802:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 39s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 38s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 37s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 19s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | native |   3m  3s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 161m 10s | Tests failed in hadoop-hdfs. |
| | | 198m 26s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestDistributedFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12746962/HDFS-8802_02.patch |
| Optional Tests | javadoc javac unit |
| git revision | trunk / 02c0181 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11822/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11822/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11822/console |


This message was automatically generated.

> dfs.checksum.type is not described in hdfs-default.xml
> --
>
> Key: HDFS-8802
> URL: https://issues.apache.org/jira/browse/HDFS-8802
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Tsuyoshi Ozawa
>Assignee: Gururaj Shetty
> Attachments: HDFS-8802.patch, HDFS-8802_01.patch, HDFS-8802_02.patch
>
>
> It's a good time to check the other configurations in hdfs-default.xml here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8777) Erasure Coding: add tests for taking snapshots on EC files

2015-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14640238#comment-14640238
 ] 

Hadoop QA commented on HDFS-8777:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |   6m 27s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m 16s | There were no new javac warning 
messages. |
| {color:red}-1{color} | release audit |   0m 12s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 40s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 47s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 41s | The patch appears to introduce 5 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m 24s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 173m  7s | Tests failed in hadoop-hdfs. |
| | | 196m 13s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12746957/HDFS-8777-HDFS-7285-00.patch
 |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | HDFS-7285 / c2c26e6 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11823/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11823/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11823/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11823/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11823/console |


This message was automatically generated.

> Erasure Coding: add tests for taking snapshots on EC files
> --
>
> Key: HDFS-8777
> URL: https://issues.apache.org/jira/browse/HDFS-8777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Rakesh R
> Attachments: HDFS-8777-HDFS-7285-00.patch
>
>
> We need to add more tests for (EC + snapshots). The tests need to verify 
> that fsimage saving/loading is correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8806) Inconsistent metrics: number of missing blocks with replication factor 1 not properly cleared

2015-07-24 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8806:

   Resolution: Fixed
Fix Version/s: 2.7.2
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2, and branch-2.7. Thanks [~zhz] for the 
contribution.

> Inconsistent metrics: number of missing blocks with replication factor 1 not 
> properly cleared
> -
>
> Key: HDFS-8806
> URL: https://issues.apache.org/jira/browse/HDFS-8806
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>  Labels: metrics
> Fix For: 2.7.2
>
> Attachments: HDFS-8806.00.patch, HDFS-8806.01.patch, 
> HDFS-8806.02.patch
>
>
> HDFS-7165 introduced a new metric for _number of missing blocks with 
> replication factor 1_. It is maintained as 
> {{UnderReplicatedBlocks#corruptReplOneBlocks}}. However, that variable is not 
> reset when the {{UnderReplicatedBlocks}} queues are cleared.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8806) Inconsistent metrics: number of missing blocks with replication factor 1 not properly cleared

2015-07-24 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8806:

Affects Version/s: 2.7.0
   Labels: metrics  (was: )
 Hadoop Flags: Reviewed

> Inconsistent metrics: number of missing blocks with replication factor 1 not 
> properly cleared
> -
>
> Key: HDFS-8806
> URL: https://issues.apache.org/jira/browse/HDFS-8806
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>  Labels: metrics
> Attachments: HDFS-8806.00.patch, HDFS-8806.01.patch, 
> HDFS-8806.02.patch
>
>
> HDFS-7165 introduced a new metric for _number of missing blocks with 
> replication factor 1_. It is maintained as 
> {{UnderReplicatedBlocks#corruptReplOneBlocks}}. However, that variable is not 
> reset when the {{UnderReplicatedBlocks}} queues are cleared.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

