[jira] [Updated] (HDFS-8632) Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes

2015-06-18 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8632:
---
Status: Patch Available  (was: Open)

> Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes
> --
>
> Key: HDFS-8632
> URL: https://issues.apache.org/jira/browse/HDFS-8632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8632-HDFS-7285-00.patch
>
>
> I've noticed that some of the erasure coding classes are missing the 
> {{@InterfaceAudience}} annotation. It would be good to identify the classes 
> and add the proper annotations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8632) Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes

2015-06-18 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8632:
---
Attachment: HDFS-8632-HDFS-7285-00.patch

> Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes
> --
>
> Key: HDFS-8632
> URL: https://issues.apache.org/jira/browse/HDFS-8632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8632-HDFS-7285-00.patch
>
>
> I've noticed that some of the erasure coding classes are missing the 
> {{@InterfaceAudience}} annotation. It would be good to identify the classes 
> and add the proper annotations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8254) In StripedDataStreamer, it is hard to tolerate datanode failure in the leading streamer

2015-06-18 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-8254:
--
Attachment: h8254_20150618.patch

Good catch.

h8254_20150618.patch: fixes the typo.

> In StripedDataStreamer, it is hard to tolerate datanode failure in the 
> leading streamer
> ---
>
> Key: HDFS-8254
> URL: https://issues.apache.org/jira/browse/HDFS-8254
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h8254_20150526.patch, h8254_20150526b.patch, 
> h8254_20150616.patch, h8254_20150618.patch
>
>
> StripedDataStreamer javadoc is shown below.
> {code}
>  * The StripedDataStreamer class is used by {@link DFSStripedOutputStream}.
>  * There are two kinds of StripedDataStreamer, leading streamer and ordinary
>  * stream. Leading streamer requests a block group from NameNode, unwraps
>  * it to located blocks and transfers each located block to its corresponding
>  * ordinary streamer via a blocking queue.
> {code}
> The leading streamer is the streamer with index 0.  When the datanode of the 
> leading streamer fails, the other streamers cannot continue since no one will 
> request a block group from the NameNode anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8638) Move the quota commands out from dfsadmin.

2015-06-18 Thread surendra singh lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14593027#comment-14593027
 ] 

surendra singh lilhore commented on HDFS-8638:
--

Thanks [~aw] for looking into this issue.

Currently the admin check is missing, so can we add an admin check in the quota API?
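
For illustration, the check could look something like the following (a hedged 
sketch of a fragment inside FSNamesystem; it assumes the existing 
{{checkSuperuserPrivilege()}} helper used by other admin-only operations, and 
the real signature may differ):
{code}
void setQuota(String src, long nsQuota, long storagespaceQuota)
    throws IOException {
  // Reject non-superusers up front, as other admin-only operations do.
  checkSuperuserPrivilege();
  // ... existing quota-setting logic unchanged ...
}
{code}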

> Move the quota commands out from dfsadmin.
> -
>
> Key: HDFS-8638
> URL: https://issues.apache.org/jira/browse/HDFS-8638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.0
>Reporter: surendra singh lilhore
>Assignee: surendra singh lilhore
>
> Currently the setQuota() API in FSNamesystem does not have any superuser 
> check.
> So, with reference to [HDFS-7323 | 
> https://issues.apache.org/jira/browse/HDFS-7323], we should either move the 
> quota commands out of dfsadmin or, if we want to keep them in dfsadmin, add 
> a check for superuser privileges.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6564) Use slf4j instead of common-logging in hdfs-client

2015-06-18 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14593017#comment-14593017
 ] 

Rakesh R commented on HDFS-6564:


We have an example mentioned in the previous note; is that fine?
{code}
org.slf4j.Logger LOG = 
org.slf4j.LoggerFactory.getLogger(org.apache.hadoop.hdfs.protocol.CachePoolInfo.class);
{code}

I think I can include a link to the slf4j manual in the note, e.g. {{For more 
details: http://www.slf4j.org/manual.html}}. Does this sound good to you?

{code}
Users may need to pay special attention to this change while upgrading to this 
version. Previously the hdfs client used commons-logging as the logging 
framework. With this change it uses the slf4j framework. For more details: 
http://www.slf4j.org/manual.html. Also, the 
org.apache.hadoop.hdfs.protocol.CachePoolInfo#LOG public static member variable 
has been removed as it is not used anywhere. Users need to correct their code 
if anyone has a reference to this variable. One can retrieve the named logger 
via the logging framework of their choice directly, like: org.slf4j.Logger LOG = 
org.slf4j.LoggerFactory.getLogger(org.apache.hadoop.hdfs.protocol.CachePoolInfo.class);
{code}
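
For downstream users, retrieving the logger through slf4j is a drop-in 
replacement. A minimal, self-contained sketch (the class name is illustrative):
{code}
import org.apache.hadoop.hdfs.protocol.CachePoolInfo;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CachePoolLoggingExample {
  // The same named logger that the removed CachePoolInfo#LOG used to expose.
  private static final Logger LOG =
      LoggerFactory.getLogger(CachePoolInfo.class);

  public static void main(String[] args) {
    // slf4j supports {} placeholders, so no manual string concatenation.
    LOG.info("Cache pool {} loaded", "pool1");
  }
}
{code}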

> Use slf4j instead of common-logging in hdfs-client
> --
>
> Key: HDFS-6564
> URL: https://issues.apache.org/jira/browse/HDFS-6564
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Attachments: HDFS-6564-01.patch, HDFS-6564-02.patch, 
> HDFS-6564-03.patch
>
>
> hdfs-client should depend on slf4j instead of commons-logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6564) Use slf4j instead of common-logging in hdfs-client

2015-06-18 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14593015#comment-14593015
 ] 

Rakesh R commented on HDFS-6564:


We have an example mentioned in the previous note.
{code}
org.slf4j.Logger LOG = 
org.slf4j.LoggerFactory.getLogger(org.apache.hadoop.hdfs.protocol.CachePoolInfo.class);
{code}
I think I can include a link to the slf4j manual in the note, like below. Does 
this sound good to you?
{code}
Users may need to pay special attention to this change while upgrading to this 
version. Previously the hdfs client used commons-logging as the logging 
framework. With this change it uses the slf4j framework. For more details: 
http://www.slf4j.org/manual.html. Also, the 
org.apache.hadoop.hdfs.protocol.CachePoolInfo#LOG public static member variable 
has been removed as it is not used anywhere. Users need to correct their code 
if anyone has a reference to this variable. One can retrieve the named logger 
via the logging framework of their choice directly, like: 
org.apache.commons.logging.Log LOG = 
org.apache.commons.logging.LogFactory.getLog(org.apache.hadoop.hdfs.protocol.CachePoolInfo.class);
{code}

> Use slf4j instead of common-logging in hdfs-client
> --
>
> Key: HDFS-6564
> URL: https://issues.apache.org/jira/browse/HDFS-6564
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Attachments: HDFS-6564-01.patch, HDFS-6564-02.patch, 
> HDFS-6564-03.patch
>
>
> hdfs-client should depend on slf4j instead of commons-logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8633) Fix mismatch for default value of dfs.datanode.readahead.bytes in DFSConfigKeys

2015-06-18 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14593004#comment-14593004
 ] 

Yongjun Zhang commented on HDFS-8633:
-

Thanks Ray. +1, will commit soon.
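
For the record: 4194304 is exactly 4 MiB (4 * 1024 * 1024 = 4194304), while 
4193404 is not a round value, so the DFSConfigKeys default looks intended and 
the XML value appears to have two digits swapped.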


> Fix mismatch for default value of dfs.datanode.readahead.bytes in 
> DFSConfigKeys
> ---
>
> Key: HDFS-8633
> URL: https://issues.apache.org/jira/browse/HDFS-8633
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: newbie, supportability
> Attachments: HDFS-8633.001.patch
>
>
> Found this using the XML/Config verifier.  One of these properties has two 
> digits swapped.
>   XML Property: dfs.datanode.readahead.bytes
>   XML Value:    4193404
>   Config Name:  DFS_DATANODE_READAHEAD_BYTES_DEFAULT
>   Config Value: 4194304
> What is the intended value?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8480) Fix performance and timeout issues in HDFS-7929: use hard-links instead of copying edit logs

2015-06-18 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8480:

Attachment: HDFS-8480.03.patch

Patch 02 actually made a very silly mistake: it emulated an old layout version 
as {{CURRENT_LAYOUT_VERSION - 1}} instead of incrementing it by 1. (HDFS layout 
versions are negative and decrease as the layout evolves, so an _older_ version 
is obtained by adding 1.)

So the conclusion is that {{EditLogInputStream}} already has the logic to read 
from _older_ layout versions, and it works for both the upgrade and inotify 
scenarios.
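
To make the direction concrete, a small illustration (the numeric value is only 
an example; the real constant lives in NameNodeLayoutVersion):
{code}
// HDFS layout versions are negative and decrease as the layout evolves.
int current = -63;          // e.g. CURRENT_LAYOUT_VERSION in some release
int older = current + 1;    // -62: an *older* layout, what the test wants
int buggy = current - 1;    // -64: a *future* layout, the patch-02 mistake
{code}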

> Fix performance and timeout issues in HDFS-7929: use hard-links instead of 
> copying edit logs
> 
>
> Key: HDFS-8480
> URL: https://issues.apache.org/jira/browse/HDFS-8480
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Critical
> Attachments: HDFS-8480.00.patch, HDFS-8480.01.patch, 
> HDFS-8480.02.patch, HDFS-8480.03.patch
>
>
> HDFS-7929 copies existing edit logs to the storage directory of the upgraded 
> {{NameNode}}. This slows down the upgrade process. This JIRA aims to use 
> hard-linking instead of per-op copying to achieve the same goal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8638) Move the quota commands out from dfsadmin.

2015-06-18 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592989#comment-14592989
 ] 

Allen Wittenauer commented on HDFS-8638:


Quota most definitely should have admin checks.  Allowing users to set this for 
themselves would be very very bad.  

> Move the quota commands out from dfsadmin.
> -
>
> Key: HDFS-8638
> URL: https://issues.apache.org/jira/browse/HDFS-8638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.0
>Reporter: surendra singh lilhore
>Assignee: surendra singh lilhore
>
> Currently the setQuota() API in FSNamesystem does not have any superuser 
> check.
> So, with reference to [HDFS-7323 | 
> https://issues.apache.org/jira/browse/HDFS-7323], we should either move the 
> quota commands out of dfsadmin or, if we want to keep them in dfsadmin, add 
> a check for superuser privileges.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8638) Move the quota commands out from dfsadmin.

2015-06-18 Thread surendra singh lilhore (JIRA)
surendra singh lilhore created HDFS-8638:


 Summary: Move the quota commands out from dfsadmin.
 Key: HDFS-8638
 URL: https://issues.apache.org/jira/browse/HDFS-8638
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: surendra singh lilhore
Assignee: surendra singh lilhore


Currently the setQuota() API in FSNamesystem does not have any superuser check.
So, with reference to [HDFS-7323 | 
https://issues.apache.org/jira/browse/HDFS-7323], we should either move the 
quota commands out of dfsadmin or, if we want to keep them in dfsadmin, add a 
check for superuser privileges.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6564) Use slf4j instead of common-logging in hdfs-client

2015-06-18 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592976#comment-14592976
 ] 

Sean Busbey commented on HDFS-6564:
---

Since the LOG was a commons-logging object before its removal, I would like to 
include how to get the equivalent from that framework. Including how to do it 
in slf4j as well sounds fine.

> Use slf4j instead of common-logging in hdfs-client
> --
>
> Key: HDFS-6564
> URL: https://issues.apache.org/jira/browse/HDFS-6564
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Attachments: HDFS-6564-01.patch, HDFS-6564-02.patch, 
> HDFS-6564-03.patch
>
>
> hdfs-client should depend on slf4j instead of commons-logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6564) Use slf4j instead of common-logging in hdfs-client

2015-06-18 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592958#comment-14592958
 ] 

Rakesh R commented on HDFS-6564:


Please ignore the previous note; here is a small correction to the logger class.

{code}
Users may need to pay attention to this change while upgrading to this version. 
The org.apache.hadoop.hdfs.protocol.CachePoolInfo#LOG public static member 
variable has been removed as it is not used anywhere. Users need to correct 
their code if anyone has a reference to this variable. One can retrieve the 
named logger via the logging framework of their choice directly, like: org.slf4j.Logger LOG = 
org.slf4j.LoggerFactory.getLogger(org.apache.hadoop.hdfs.protocol.CachePoolInfo.class);
{code}

> Use slf4j instead of common-logging in hdfs-client
> --
>
> Key: HDFS-6564
> URL: https://issues.apache.org/jira/browse/HDFS-6564
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Attachments: HDFS-6564-01.patch, HDFS-6564-02.patch, 
> HDFS-6564-03.patch
>
>
> hdfs-client should depend on slf4j instead of commons-logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6564) Use slf4j instead of common-logging in hdfs-client

2015-06-18 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592955#comment-14592955
 ] 

Rakesh R commented on HDFS-6564:


I've modified the note; kindly review it again.

{code}
Users may need to pay attention to this change while upgrading to this version. 
The org.apache.hadoop.hdfs.protocol.CachePoolInfo#LOG public static member 
variable has been removed as it is not used anywhere. Users need to correct 
their code if anyone has a reference to this variable. One can retrieve the 
named logger via the logging framework of their choice directly, like: 
org.apache.commons.logging.Log LOG = 
org.apache.commons.logging.LogFactory.getLog(org.apache.hadoop.hdfs.protocol.CachePoolInfo.class);
{code}

> Use slf4j instead of common-logging in hdfs-client
> --
>
> Key: HDFS-6564
> URL: https://issues.apache.org/jira/browse/HDFS-6564
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Attachments: HDFS-6564-01.patch, HDFS-6564-02.patch, 
> HDFS-6564-03.patch
>
>
> hdfs-client should depend on slf4j instead of commons-logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8582) Reduce failure messages when running datanode reconfiguration

2015-06-18 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-8582:

Attachment: HDFS-8582.004.patch

Thanks for these suggestions, [~cmccabe]. Here is the updated patch.

> Reduce failure messages when running datanode reconfiguration
> -
>
> Key: HDFS-8582
> URL: https://issues.apache.org/jira/browse/HDFS-8582
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HDFS-8582.000.patch, HDFS-8582.001.patch, 
> HDFS-8582.002.patch, HDFS-8582.003.patch, HDFS-8582.004.patch
>
>
> When running a DN reconfig to hotswap some drives, it spits out this output:
> {noformat}
> $ hdfs dfsadmin -reconfig datanode localhost:9023 status
> 15/06/09 14:58:10 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Reconfiguring status for DataNode[localhost:9023]: started at Tue Jun 09 
> 14:57:37 PDT 2015 and finished at Tue Jun 09 14:57:56 PDT 2015.
> FAILED: Change property 
> rpc.engine.org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolPB
> From: "org.apache.hadoop.ipc.ProtobufRpcEngine"
> To: ""
> Error: Property 
> rpc.engine.org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolPB is not 
> reconfigurable.
> FAILED: Change property mapreduce.client.genericoptionsparser.used
> From: "true"
> To: ""
> Error: Property mapreduce.client.genericoptionsparser.used is not 
> reconfigurable.
> FAILED: Change property rpc.engine.org.apache.hadoop.ipc.ProtocolMetaInfoPB
> From: "org.apache.hadoop.ipc.ProtobufRpcEngine"
> To: ""
> Error: Property rpc.engine.org.apache.hadoop.ipc.ProtocolMetaInfoPB 
> is not reconfigurable.
> SUCCESS: Change property dfs.datanode.data.dir
> From: "file:///data/1/user/dfs"
> To: "file:///data/1/user/dfs,file:///data/2/user/dfs"
> FAILED: Change property dfs.datanode.startup
> From: "REGULAR"
> To: ""
> Error: Property dfs.datanode.startup is not reconfigurable.
> FAILED: Change property 
> rpc.engine.org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolPB
> From: "org.apache.hadoop.ipc.ProtobufRpcEngine"
> To: ""
> Error: Property 
> rpc.engine.org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolPB is not 
> reconfigurable.
> FAILED: Change property 
> rpc.engine.org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolPB
> From: "org.apache.hadoop.ipc.ProtobufRpcEngine"
> To: ""
> Error: Property 
> rpc.engine.org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolPB is not 
> reconfigurable.
> FAILED: Change property 
> rpc.engine.org.apache.hadoop.tracing.TraceAdminProtocolPB
> From: "org.apache.hadoop.ipc.ProtobufRpcEngine"
> To: ""
> Error: Property 
> rpc.engine.org.apache.hadoop.tracing.TraceAdminProtocolPB is not 
> reconfigurable.
> {noformat}
> These failed messages are spurious and should not be shown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8462) Implement GETXATTRS and LISTXATTRS operation for WebImageViewer

2015-06-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592930#comment-14592930
 ] 

Hadoop QA commented on HDFS-8462:
-

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  21m 22s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 44s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  2s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m  1s | Site still builds. |
| {color:green}+1{color} | checkstyle |   2m 16s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 20s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 19s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 163m  9s | Tests passed in hadoop-hdfs. 
|
| | | 216m 46s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12740427/HDFS-8462-03.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / 5b5bb8d |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11407/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11407/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11407/console |


This message was automatically generated.

> Implement GETXATTRS and LISTXATTRS operation for WebImageViewer
> ---
>
> Key: HDFS-8462
> URL: https://issues.apache.org/jira/browse/HDFS-8462
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Akira AJISAKA
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-8462-00.patch, HDFS-8462-01.patch, 
> HDFS-8462-02.patch, HDFS-8462-03.patch
>
>
> In Hadoop 2.7.0, WebImageViewer supports the following operations:
> * {{GETFILESTATUS}}
> * {{LISTSTATUS}}
> * {{GETACLSTATUS}}
> I'm thinking it would be better for administrators if {{GETXATTRS}} and 
> {{LISTXATTRS}} were supported.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6564) Use slf4j instead of common-logging in hdfs-client

2015-06-18 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592927#comment-14592927
 ] 

Sean Busbey commented on HDFS-6564:
---

You should include a mention of how to get the same functionality. In this 
case, users can retrieve the named logger via the logging framework of their 
choice directly. Since the previous member was a commons-logging object, that 
would be done via {{LogFactory.getLog(CachePoolInfo.class)}} or 
{{LogFactory.getLog("org.apache.hadoop.hdfs.protocol.CachePoolInfo")}}.

> Use slf4j instead of common-logging in hdfs-client
> --
>
> Key: HDFS-6564
> URL: https://issues.apache.org/jira/browse/HDFS-6564
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Attachments: HDFS-6564-01.patch, HDFS-6564-02.patch, 
> HDFS-6564-03.patch
>
>
> hdfs-client should depend on slf4j instead of commons-logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8608) Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of Block in UnderReplicatedBlocks and PendingReplicationBlocks)

2015-06-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592921#comment-14592921
 ] 

Hadoop QA commented on HDFS-8608:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m 59s | Pre-patch branch-2 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   6m 56s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 10s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 29s | The applied patch generated  3 
new checkstyle issues (total was 243, now 239). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 21s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 14s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m 35s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 161m 23s | Tests passed in hadoop-hdfs. 
|
| | | 208m 13s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12740479/HDFS-4366-branch-2.01.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | branch-2 / 86b75ac |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11405/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11405/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11405/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11405/console |


This message was automatically generated.

> Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of Block in 
> UnderReplicatedBlocks and PendingReplicationBlocks)
> --
>
> Key: HDFS-8608
> URL: https://issues.apache.org/jira/browse/HDFS-8608
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 3.0.0
>
> Attachments: HDFS-4366-branch-2.00.patch, 
> HDFS-4366-branch-2.01.patch, HDFS-8608.00.patch, HDFS-8608.01.patch, 
> HDFS-8608.02.patch
>
>
> This JIRA aims to merge HDFS-7912 into trunk to minimize the final patch when 
> merging the HDFS-7285 (erasure coding) branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8626) Reserved RBW space is not released if creation of RBW File fails

2015-06-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592917#comment-14592917
 ] 

Hadoop QA commented on HDFS-8626:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 25s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 45s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 57s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 19s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   2m 55s | Post-patch findbugs 
hadoop-hdfs-project/hadoop-hdfs compilation is broken. |
| {color:green}+1{color} | findbugs |   2m 55s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   0m 43s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 157m  2s | Tests failed in hadoop-hdfs. |
| | | 201m 39s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.fs.TestHdfsNativeCodeLoader |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12740486/HDFS-8626-03.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 5b5bb8d |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11409/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11409/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11409/console |


This message was automatically generated.

> Reserved RBW space is not released if creation of RBW File fails
> 
>
> Key: HDFS-8626
> URL: https://issues.apache.org/jira/browse/HDFS-8626
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: kanaka kumar avvaru
>Assignee: kanaka kumar avvaru
>Priority: Blocker
> Attachments: HDFS-8626-01.patch, HDFS-8626-02.patch, 
> HDFS-8626-03.patch
>
>
> The DataNode reserves disk space for a full block when creating an RBW block 
> and will release the space when the block is finalized (introduced in 
> HDFS-6898) 
> But if the RBW file creation fails, the reserved space is not released back. 
> In a scenario where the DataNode disk is full, this causes a "no space left" 
> {{IOException}}. Even if the disk is later cleaned up, the reserved space 
> is not released until the DataNode is restarted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8633) Fix mismatch for default value of dfs.datanode.readahead.bytes in DFSConfigKeys

2015-06-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592915#comment-14592915
 ] 

Hadoop QA commented on HDFS-8633:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 44s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 34s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 41s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 55s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | native |   3m 15s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 161m 30s | Tests passed in hadoop-hdfs. 
|
| | | 199m 37s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12740459/HDFS-8633.001.patch |
| Optional Tests | javadoc javac unit |
| git revision | trunk / 5b5bb8d |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11406/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11406/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11406/console |


This message was automatically generated.

> Fix mismatch for default value of dfs.datanode.readahead.bytes in 
> DFSConfigKeys
> ---
>
> Key: HDFS-8633
> URL: https://issues.apache.org/jira/browse/HDFS-8633
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Affects Versions: 2.7.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: newbie, supportability
> Attachments: HDFS-8633.001.patch
>
>
> Found this using the XML/Config verifier.  One of these properties has two 
> digits swapped.
>   XML Property: dfs.datanode.readahead.bytes
>   XML Value:    4193404
>   Config Name:  DFS_DATANODE_READAHEAD_BYTES_DEFAULT
>   Config Value: 4194304
> What is the intended value?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8632) Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes

2015-06-18 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592883#comment-14592883
 ] 

Rakesh R commented on HDFS-8632:


Thanks [~walter.k.su] for pointing this out. I will take care of this.

> Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes
> --
>
> Key: HDFS-8632
> URL: https://issues.apache.org/jira/browse/HDFS-8632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
>
> I've noticed that some of the erasure coding classes are missing the 
> {{@InterfaceAudience}} annotation. It would be good to identify the classes 
> and add the proper annotations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6564) Use slf4j instead of common-logging in hdfs-client

2015-06-18 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592878#comment-14592878
 ] 

Rakesh R commented on HDFS-6564:


Thanks [~busbey]. Below is the draft release note; any comments? If everyone 
agrees with this, I will update the jira.
{code}
Users may need to pay attention to this change while upgrading to this version. 
The org.apache.hadoop.hdfs.protocol.CachePoolInfo#LOG static member variable 
has been removed as it is not used anywhere. Users need to correct their code 
if anyone has a reference to this variable.
{code}

> Use slf4j instead of common-logging in hdfs-client
> --
>
> Key: HDFS-6564
> URL: https://issues.apache.org/jira/browse/HDFS-6564
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Attachments: HDFS-6564-01.patch, HDFS-6564-02.patch, 
> HDFS-6564-03.patch
>
>
> hdfs-client should depend on slf4j instead of commons-logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6564) Use slf4j instead of common-logging in hdfs-client

2015-06-18 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592880#comment-14592880
 ] 

Rakesh R commented on HDFS-6564:


Thanks [~wheat9] for the advice. Attached another patch that removes the 
{{LOG}} variable.

> Use slf4j instead of common-logging in hdfs-client
> --
>
> Key: HDFS-6564
> URL: https://issues.apache.org/jira/browse/HDFS-6564
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Attachments: HDFS-6564-01.patch, HDFS-6564-02.patch, 
> HDFS-6564-03.patch
>
>
> hdfs-client should depend on slf4j instead of commons-logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6564) Use slf4j instead of common-logging in hdfs-client

2015-06-18 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-6564:
---
Attachment: HDFS-6564-03.patch

> Use slf4j instead of common-logging in hdfs-client
> --
>
> Key: HDFS-6564
> URL: https://issues.apache.org/jira/browse/HDFS-6564
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Attachments: HDFS-6564-01.patch, HDFS-6564-02.patch, 
> HDFS-6564-03.patch
>
>
> hdfs-client should depend on slf4j instead of commons-logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8582) Reduce failure messages when running datanode reconfiguration

2015-06-18 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592857#comment-14592857
 ] 

Colin Patrick McCabe commented on HDFS-8582:


Thanks, Eddy.  Looks good.

{code}
  public Configuration getNewConf() {
{code}
Should this be protected rather than public?  Also, it would make sense for 
this to be abstract in the base class.

{code}
  private static final List reconfigurableProperties = ...
{code}
Should be ALL_CAPS with underscores to indicate a constant that can't be changed

{code}
"\tStarts reconfiguration or gets the status of an ongoing 
reconfiguration.\n" +
"\tIt also displays the properties that are supported for 
reconfiguration.\n" +
{code}
This seems a bit confusing.  How about "Starts or stops a reconfiguration 
operation, or gets a list of reconfigurable properties."

{code}
+"SUCCESS: Change property %s%n\tFrom: \"%s\"%n\tTo: \"%s\"%n",
{code}
Changed, not change

+1 pending those changes
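
Taken together, the suggested shape would be roughly the following (a hedged 
sketch; everything beyond the two snippets quoted above is illustrative):
{code}
// Base class: narrow the visibility and make subclasses supply the new
// configuration, per the first comment.
protected abstract Configuration getNewConf();

// Constant renamed to the ALL_CAPS-with-underscores convention.
private static final List<String> RECONFIGURABLE_PROPERTIES =
    Collections.unmodifiableList(Arrays.asList("dfs.datanode.data.dir"));
{code}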

> Reduce failure messages when running datanode reconfiguration
> -
>
> Key: HDFS-8582
> URL: https://issues.apache.org/jira/browse/HDFS-8582
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HDFS-8582.000.patch, HDFS-8582.001.patch, 
> HDFS-8582.002.patch, HDFS-8582.003.patch
>
>
> When running a DN reconfig to hotswap some drives, it spits out this output:
> {noformat}
> $ hdfs dfsadmin -reconfig datanode localhost:9023 status
> 15/06/09 14:58:10 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Reconfiguring status for DataNode[localhost:9023]: started at Tue Jun 09 
> 14:57:37 PDT 2015 and finished at Tue Jun 09 14:57:56 PDT 2015.
> FAILED: Change property 
> rpc.engine.org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolPB
> From: "org.apache.hadoop.ipc.ProtobufRpcEngine"
> To: ""
> Error: Property 
> rpc.engine.org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolPB is not 
> reconfigurable.
> FAILED: Change property mapreduce.client.genericoptionsparser.used
> From: "true"
> To: ""
> Error: Property mapreduce.client.genericoptionsparser.used is not 
> reconfigurable.
> FAILED: Change property rpc.engine.org.apache.hadoop.ipc.ProtocolMetaInfoPB
> From: "org.apache.hadoop.ipc.ProtobufRpcEngine"
> To: ""
> Error: Property rpc.engine.org.apache.hadoop.ipc.ProtocolMetaInfoPB 
> is not reconfigurable.
> SUCCESS: Change property dfs.datanode.data.dir
> From: "file:///data/1/user/dfs"
> To: "file:///data/1/user/dfs,file:///data/2/user/dfs"
> FAILED: Change property dfs.datanode.startup
> From: "REGULAR"
> To: ""
> Error: Property dfs.datanode.startup is not reconfigurable.
> FAILED: Change property 
> rpc.engine.org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolPB
> From: "org.apache.hadoop.ipc.ProtobufRpcEngine"
> To: ""
> Error: Property 
> rpc.engine.org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolPB is not 
> reconfigurable.
> FAILED: Change property 
> rpc.engine.org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolPB
> From: "org.apache.hadoop.ipc.ProtobufRpcEngine"
> To: ""
> Error: Property 
> rpc.engine.org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolPB is not 
> reconfigurable.
> FAILED: Change property 
> rpc.engine.org.apache.hadoop.tracing.TraceAdminProtocolPB
> From: "org.apache.hadoop.ipc.ProtobufRpcEngine"
> To: ""
> Error: Property 
> rpc.engine.org.apache.hadoop.tracing.TraceAdminProtocolPB is not 
> reconfigurable.
> {noformat}
> These failed messages are spurious and should not be shown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8598) Add and optimize for get LocatedFileStatus in DFSClient

2015-06-18 Thread Yong Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592837#comment-14592837
 ] 

Yong Zhang commented on HDFS-8598:
--

Hi [~andrew.wang], thanks for your comment.
I get your point; I think it is better to keep more APIs for getting 
LocatedFileStatus, just like FileStatus.
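
For comparison, the public FileSystem API already exposes a batched listing 
that carries block locations; a self-contained sketch (the path is 
illustrative):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ListWithLocationsExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Each LocatedFileStatus already carries block locations, avoiding a
    // per-file getFileBlockLocations() round trip.
    RemoteIterator<LocatedFileStatus> it =
        fs.listLocatedStatus(new Path("/user/data"));
    while (it.hasNext()) {
      LocatedFileStatus status = it.next();
      System.out.println(status.getPath() + " -> "
          + java.util.Arrays.toString(status.getBlockLocations()));
    }
  }
}
{code}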

> Add and optimize for get LocatedFileStatus  in DFSClient
> 
>
> Key: HDFS-8598
> URL: https://issues.apache.org/jira/browse/HDFS-8598
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yong Zhang
>Assignee: Yong Zhang
> Attachments: HDFS-8598.001.patch
>
>
> If we want to get the block locations of all files in one directory, we have 
> to call getFileBlockLocations for each file, which takes a long time because 
> of too many requests. 
> LocatedFileStatus has block locations, but we can see that DFSClient also 
> calls getFileBlockLocations for each file. This jira tries to optimize this 
> down to only one RPC. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8632) Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes

2015-06-18 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592833#comment-14592833
 ] 

Walter Su commented on HDFS-8632:
-

These classes should be annotated with @InterfaceAudience.Private and 
@InterfaceStability.Evolving:
1. ErasureCodingWorker
2. ErasureCodingSchemaManager
3. ErasureCodingZoneManager
4. ECCli
5. BlockPlacementPolicies
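
For example, on one of the classes above the annotations would look like this 
(a minimal sketch; package placement and modifiers are illustrative):
{code}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

@InterfaceAudience.Private
@InterfaceStability.Evolving
public class ErasureCodingWorker {
  // ... existing implementation unchanged ...
}
{code}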

> Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes
> --
>
> Key: HDFS-8632
> URL: https://issues.apache.org/jira/browse/HDFS-8632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
>
> I've noticed that some of the erasure coding classes are missing the 
> {{@InterfaceAudience}} annotation. It would be good to identify the classes 
> and add the proper annotations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6564) Use slf4j instead of common-logging in hdfs-client

2015-06-18 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592831#comment-14592831
 ] 

Sean Busbey commented on HDFS-6564:
---

Whatever you decide to do about the LOG member, if it involves the class 
changing or being removed, please remember to flag the change and write a 
release note. That way downstream folks find out about it up front rather than 
by surprise when they are broken.

> Use slf4j instead of common-logging in hdfs-client
> --
>
> Key: HDFS-6564
> URL: https://issues.apache.org/jira/browse/HDFS-6564
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Attachments: HDFS-6564-01.patch, HDFS-6564-02.patch
>
>
> hdfs-client should depend on slf4j instead of commons-logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-06-18 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592829#comment-14592829
 ] 

Aaron T. Myers commented on HDFS-6440:
--

Aha, that was totally it. Applied v8 correctly (surprised patch didn't complain 
about not being able to apply the binary diff) and the test passes just fine.

I'll wait for Jenkins to come back on the latest patch and then check that in.

> Support more than 2 NameNodes
> -
>
> Key: HDFS-6440
> URL: https://issues.apache.org/jira/browse/HDFS-6440
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: auto-failover, ha, namenode
>Affects Versions: 2.4.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 3.0.0
>
> Attachments: Multiple-Standby-NameNodes_V1.pdf, 
> hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
> hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-6440-trunk-v4.patch, 
> hdfs-6440-trunk-v5.patch, hdfs-6440-trunk-v6.patch, hdfs-6440-trunk-v7.patch, 
> hdfs-6440-trunk-v8.patch, hdfs-multiple-snn-trunk-v0.patch
>
>
> Most of the work is already done to support more than 2 NameNodes (one 
> active, one standby). This would be the last bit to support running multiple 
> _standby_ NameNodes; one of the standbys should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some 
> complexity around managing the checkpointing, and updating a whole lot of 
> tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6564) Use slf4j instead of common-logging in hdfs-client

2015-06-18 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592828#comment-14592828
 ] 

Haohui Mai commented on HDFS-6564:
--

I think it is okay to get rid of the {{LOG}} member since it is an evolving 
interface. According to 
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/InterfaceClassification.html:

{quote}
Evolving
Evolving, but incompatible changes are allowed at minor release (i.e. m.x)
{quote}

> Use slf4j instead of common-logging in hdfs-client
> --
>
> Key: HDFS-6564
> URL: https://issues.apache.org/jira/browse/HDFS-6564
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Rakesh R
> Attachments: HDFS-6564-01.patch, HDFS-6564-02.patch
>
>
> hdfs-client should depend on slf4j instead of commons-logging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-06-18 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592826#comment-14592826
 ] 

Jesse Yates commented on HDFS-6440:
---

Just went back to trunk and applied the patch directly (rather than using my 
branch), and the test passed again w/o issue ($ mvn install -DskipTests; mvn 
clean test -Dtest=TestDFSUpgradeFromImage).

> Support more than 2 NameNodes
> -
>
> Key: HDFS-6440
> URL: https://issues.apache.org/jira/browse/HDFS-6440
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: auto-failover, ha, namenode
>Affects Versions: 2.4.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 3.0.0
>
> Attachments: Multiple-Standby-NameNodes_V1.pdf, 
> hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
> hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-6440-trunk-v4.patch, 
> hdfs-6440-trunk-v5.patch, hdfs-6440-trunk-v6.patch, hdfs-6440-trunk-v7.patch, 
> hdfs-6440-trunk-v8.patch, hdfs-multiple-snn-trunk-v0.patch
>
>
> Most of the work is already done to support more than 2 NameNodes (one 
> active, one standby). This would be the last bit to support running multiple 
> _standby_ NameNodes; one of the standbys should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some 
> complexity around managing the checkpointing, and updating a whole lot of 
> tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-06-18 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592823#comment-14592823
 ] 

Jesse Yates commented on HDFS-6440:
---

Looks like maybe the binary changes from the tarball image aren't getting 
applied? That's all I can think of, since you fellas aren't seeing the cluster 
even start up.

> Support more than 2 NameNodes
> -
>
> Key: HDFS-6440
> URL: https://issues.apache.org/jira/browse/HDFS-6440
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: auto-failover, ha, namenode
>Affects Versions: 2.4.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 3.0.0
>
> Attachments: Multiple-Standby-NameNodes_V1.pdf, 
> hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
> hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-6440-trunk-v4.patch, 
> hdfs-6440-trunk-v5.patch, hdfs-6440-trunk-v6.patch, hdfs-6440-trunk-v7.patch, 
> hdfs-6440-trunk-v8.patch, hdfs-multiple-snn-trunk-v0.patch
>
>
> Most of the work is already done to support more than 2 NameNodes (one 
> active, one standby). This would be the last bit to support running multiple 
> _standby_ NameNodes; one of the standbys should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some 
> complexity around managing the checkpointing, and updating a whole lot of 
> tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-06-18 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592818#comment-14592818
 ] 

Lei (Eddy) Xu commented on HDFS-6440:
-

[~jesse_yates] I am also running OS X, and I reproduced it there.

> Support more than 2 NameNodes
> -
>
> Key: HDFS-6440
> URL: https://issues.apache.org/jira/browse/HDFS-6440
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: auto-failover, ha, namenode
>Affects Versions: 2.4.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 3.0.0
>
> Attachments: Multiple-Standby-NameNodes_V1.pdf, 
> hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
> hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-6440-trunk-v4.patch, 
> hdfs-6440-trunk-v5.patch, hdfs-6440-trunk-v6.patch, hdfs-6440-trunk-v7.patch, 
> hdfs-6440-trunk-v8.patch, hdfs-multiple-snn-trunk-v0.patch
>
>
> Most of the work is already done to support more than 2 NameNodes (one 
> active, one standby). This would be the last bit to support running multiple 
> _standby_ NameNodes; one of the standbys should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some 
> complexity around managing the checkpointing, and updating a whole lot of 
> tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-06-18 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592816#comment-14592816
 ] 

Aaron T. Myers commented on HDFS-6440:
--

Hey Jesse,

Here's the error that it's failing with on my (and Eddy's) box:

{noformat}
testUpgradeFromRel2ReservedImage(org.apache.hadoop.hdfs.TestDFSUpgradeFromImage)
  Time elapsed: 0.901 sec  <<< ERROR!
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory 
/home/atm/src/apache/hadoop.git/src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name-0-1
 is in an inconsistent state: storage directory does not exist or is not 
accessible.
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:327)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:215)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:976)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:685)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:809)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:793)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1482)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1208)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:971)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:882)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:814)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:473)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:432)
at 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.testUpgradeFromRel2ReservedImage(TestDFSUpgradeFromImage.java:480)
{noformat}

I'll poke around myself a bit as well to see if I can figure out what's going 
on. This happens very reliably for me.

> Support more than 2 NameNodes
> -
>
> Key: HDFS-6440
> URL: https://issues.apache.org/jira/browse/HDFS-6440
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: auto-failover, ha, namenode
>Affects Versions: 2.4.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 3.0.0
>
> Attachments: Multiple-Standby-NameNodes_V1.pdf, 
> hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
> hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-6440-trunk-v4.patch, 
> hdfs-6440-trunk-v5.patch, hdfs-6440-trunk-v6.patch, hdfs-6440-trunk-v7.patch, 
> hdfs-6440-trunk-v8.patch, hdfs-multiple-snn-trunk-v0.patch
>
>
> Most of the work is already done to support more than 2 NameNodes (one 
> active, one standby). This would be the last bit to support running multiple 
> _standby_ NameNodes; one of the standbys should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some 
> complexity around managing the checkpointing, and updating a whole lot of 
> tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6440) Support more than 2 NameNodes

2015-06-18 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated HDFS-6440:
--
Attachment: hdfs-6440-trunk-v8.patch

Attaching updated patch w/ whitespace fix. Let's see what QA thinks of the 
upgrade test.

> Support more than 2 NameNodes
> -
>
> Key: HDFS-6440
> URL: https://issues.apache.org/jira/browse/HDFS-6440
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: auto-failover, ha, namenode
>Affects Versions: 2.4.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 3.0.0
>
> Attachments: Multiple-Standby-NameNodes_V1.pdf, 
> hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
> hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-6440-trunk-v4.patch, 
> hdfs-6440-trunk-v5.patch, hdfs-6440-trunk-v6.patch, hdfs-6440-trunk-v7.patch, 
> hdfs-6440-trunk-v8.patch, hdfs-multiple-snn-trunk-v0.patch
>
>
> Most of the work is already done to support more than 2 NameNodes (one 
> active, one standby). This would be the last bit to support running multiple 
> _standby_ NameNodes; one of the standbys should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some 
> complexity around managing the checkpointing, and updating a whole lot of 
> tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6440) Support more than 2 NameNodes

2015-06-18 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated HDFS-6440:
--
Status: Open  (was: Patch Available)

> Support more than 2 NameNodes
> -
>
> Key: HDFS-6440
> URL: https://issues.apache.org/jira/browse/HDFS-6440
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: auto-failover, ha, namenode
>Affects Versions: 2.4.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 3.0.0
>
> Attachments: Multiple-Standby-NameNodes_V1.pdf, 
> hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
> hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-6440-trunk-v4.patch, 
> hdfs-6440-trunk-v5.patch, hdfs-6440-trunk-v6.patch, hdfs-6440-trunk-v7.patch, 
> hdfs-6440-trunk-v8.patch, hdfs-multiple-snn-trunk-v0.patch
>
>
> Most of the work is already done to support more than 2 NameNodes (one 
> active, one standby). This would be the last bit to support running multiple 
> _standby_ NameNodes; one of the standbys should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some 
> complexity around managing the checkpointing, and updating a whole lot of 
> tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6440) Support more than 2 NameNodes

2015-06-18 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated HDFS-6440:
--
Status: Patch Available  (was: Open)

> Support more than 2 NameNodes
> -
>
> Key: HDFS-6440
> URL: https://issues.apache.org/jira/browse/HDFS-6440
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: auto-failover, ha, namenode
>Affects Versions: 2.4.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 3.0.0
>
> Attachments: Multiple-Standby-NameNodes_V1.pdf, 
> hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
> hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-6440-trunk-v4.patch, 
> hdfs-6440-trunk-v5.patch, hdfs-6440-trunk-v6.patch, hdfs-6440-trunk-v7.patch, 
> hdfs-6440-trunk-v8.patch, hdfs-multiple-snn-trunk-v0.patch
>
>
> Most of the work is already done to support more than 2 NameNodes (one 
> active, one standby). This would be the last bit to support running multiple 
> _standby_ NameNodes; one of the standbys should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some 
> complexity around managing the checkpointing, and updating a whole lot of 
> tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-06-18 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592812#comment-14592812
 ] 

Jesse Yates commented on HDFS-6440:
---

I ran the test (independently) a couple of times locally after rebasing on the 
latest trunk (as of 3 hrs ago - YARN-3802) and didn't see any failures. However, 
when running a bigger battery of tests - my "multi-NN suite" - I got the 
following failure:
{quote}
testUpgradeFromRel1BBWImage(org.apache.hadoop.hdfs.TestDFSUpgradeFromImage)  
Time elapsed: 11.115 sec  <<< ERROR!
java.io.IOException: Cannot obtain block length for 
LocatedBlock{BP-362680364-127.0.0.1-1434673340215:blk_7162739548153522810_1020; 
getBlockSize()=1024; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:59215,DS-8d6d81c3-5027-4fbf-a7c8-a8be86cb7e00,DISK]]}
at 
org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:394)
at 
org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:336)
at 
org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:272)
at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:263)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1184)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1168)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1154)
at 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.dfsOpenFileWithRetries(TestDFSUpgradeFromImage.java:174)
at 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.verifyDir(TestDFSUpgradeFromImage.java:210)
at 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.verifyFileSystem(TestDFSUpgradeFromImage.java:225)
at 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.upgradeAndVerify(TestDFSUpgradeFromImage.java:597)
at 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.testUpgradeFromRel1BBWImage(TestDFSUpgradeFromImage.java:619)
{quote}

...but only sometimes. Is this at all what you guys are seeing too?

btw, I'm running OS X - maybe it's a Linux-only issue? I'm gonna re-submit (+ 
fix for whitespace) and see how Jenkins likes it.


> Support more than 2 NameNodes
> -
>
> Key: HDFS-6440
> URL: https://issues.apache.org/jira/browse/HDFS-6440
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: auto-failover, ha, namenode
>Affects Versions: 2.4.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 3.0.0
>
> Attachments: Multiple-Standby-NameNodes_V1.pdf, 
> hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
> hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-6440-trunk-v4.patch, 
> hdfs-6440-trunk-v5.patch, hdfs-6440-trunk-v6.patch, hdfs-6440-trunk-v7.patch, 
> hdfs-multiple-snn-trunk-v0.patch
>
>
> Most of the work is already done to support more than 2 NameNodes (one 
> active, one standby). This would be the last bit to support running multiple 
> _standby_ NameNodes; one of the standbys should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some 
> complexity around managing the checkpointing, and updating a whole lot of 
> tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8448) Create REST Interface for Volumes

2015-06-18 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-8448:
---
Attachment: hdfs-8448-HDFS-7240.003.patch

Same patch, but with the branch name in capitals, to see if that solves the 
issue of this patch building against trunk.



> Create REST Interface for Volumes
> -
>
> Key: HDFS-8448
> URL: https://issues.apache.org/jira/browse/HDFS-8448
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: hdfs-8448-HDFS-7240.003.patch, 
> hdfs-8448-hdfs-7240.001.patch, hdfs-8448-hdfs-7240.002.patch
>
>
> Create REST interfaces as specified in the architecture document.
> This Jira is for creating the Volume Interface



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8602) Erasure Coding: Client can't read(decode) the EC files which have corrupt blocks.

2015-06-18 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592771#comment-14592771
 ] 

Kai Sasaki commented on HDFS-8602:
--

We confirmed this patch fixed the problem in our cluster (deleted block file 
and partial corruption). I updated the patch a little. Thank you.
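
The decode-on-read behavior being fixed is easiest to see with a single parity 
unit. Below is a toy, self-contained sketch using XOR parity - HDFS actually 
uses Reed-Solomon schemes such as 6+3, but the recovery idea is the same: an 
erased unit is recomputed from the surviving units.

{code}
public class XorDecodeSketch {
  // XOR of all surviving units reproduces the erased unit, because
  // parity = d0 ^ d1 ^ ... ^ d(k-1).
  static byte[] reconstruct(byte[][] survivors, int cellSize) {
    byte[] out = new byte[cellSize];
    for (byte[] unit : survivors) {
      for (int i = 0; i < cellSize; i++) {
        out[i] ^= unit[i];
      }
    }
    return out;
  }

  public static void main(String[] args) {
    byte[] d0 = {1, 2, 3}, d1 = {4, 5, 6};
    byte[] parity = reconstruct(new byte[][] {d0, d1}, 3);
    // Pretend d1 is the corrupt block: recover it from d0 and parity.
    byte[] recovered = reconstruct(new byte[][] {d0, parity}, 3);
    assert java.util.Arrays.equals(recovered, d1);
  }
}
{code}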

> Erasure Coding: Client can't read(decode) the EC files which have corrupt 
> blocks.
> -
>
> Key: HDFS-8602
> URL: https://issues.apache.org/jira/browse/HDFS-8602
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Kai Sasaki
> Fix For: HDFS-7285
>
> Attachments: HDFS-8602-HDFS-7285.01.patch, HDFS-8602.000.patch
>
>
> Before the DataNode(s) report the bad block(s), a client reading an EC file 
> that has bad blocks hangs, with no error messages.
> (When a client reads a replicated file that has bad blocks, the bad blocks 
> are reconstructed at the same time, and the client can read it.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8602) Erasure Coding: Client can't read(decode) the EC files which have corrupt blocks.

2015-06-18 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-8602:
-
Attachment: HDFS-8602-HDFS-7285.01.patch

> Erasure Coding: Client can't read(decode) the EC files which have corrupt 
> blocks.
> -
>
> Key: HDFS-8602
> URL: https://issues.apache.org/jira/browse/HDFS-8602
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Kai Sasaki
> Fix For: HDFS-7285
>
> Attachments: HDFS-8602-HDFS-7285.01.patch, HDFS-8602.000.patch
>
>
> Before the DataNode(s) report the bad block(s), a client reading an EC file 
> that has bad blocks hangs, with no error messages.
> (When a client reads a replicated file that has bad blocks, the bad blocks 
> are reconstructed at the same time, and the client can read it.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8448) Create REST Interface for Volumes

2015-06-18 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8448:

Status: Patch Available  (was: Open)

> Create REST Interface for Volumes
> -
>
> Key: HDFS-8448
> URL: https://issues.apache.org/jira/browse/HDFS-8448
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: hdfs-8448-hdfs-7240.001.patch, 
> hdfs-8448-hdfs-7240.002.patch
>
>
> Create REST interfaces as specified in the architecture document.
> This Jira is for creating the Volume Interface



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8448) Create REST Interface for Volumes

2015-06-18 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8448:

Status: Open  (was: Patch Available)

> Create REST Interface for Volumes
> -
>
> Key: HDFS-8448
> URL: https://issues.apache.org/jira/browse/HDFS-8448
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: hdfs-8448-hdfs-7240.001.patch, 
> hdfs-8448-hdfs-7240.002.patch
>
>
> Create REST interfaces as specified in the architecture document.
> This Jira is for creating the Volume Interface



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8448) Create REST Interface for Volumes

2015-06-18 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592732#comment-14592732
 ] 

Anu Engineer commented on HDFS-8448:


Looks like https://builds.apache.org is down. I will follow up in a while.

> Create REST Interface for Volumes
> -
>
> Key: HDFS-8448
> URL: https://issues.apache.org/jira/browse/HDFS-8448
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: hdfs-8448-hdfs-7240.001.patch, 
> hdfs-8448-hdfs-7240.002.patch
>
>
> Create REST interfaces as specified in the architecture document.
> This Jira is for creating the Volume Interface



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8448) Create REST Interface for Volumes

2015-06-18 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592725#comment-14592725
 ] 

Anu Engineer commented on HDFS-8448:


No idea why this patch failed to apply; it applies cleanly to the top of the 
tree on my machine. The console output link is also timing out, so I suspect a 
build outage.



> Create REST Interface for Volumes
> -
>
> Key: HDFS-8448
> URL: https://issues.apache.org/jira/browse/HDFS-8448
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: hdfs-8448-hdfs-7240.001.patch, 
> hdfs-8448-hdfs-7240.002.patch
>
>
> Create REST interfaces as specified in the architecture document.
> This Jira is for creating the Volume Interface



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8448) Create REST Interface for Volumes

2015-06-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592717#comment-14592717
 ] 

Hadoop QA commented on HDFS-8448:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12740428/hdfs-8448-hdfs-7240.002.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 5b5bb8d |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11408/console |


This message was automatically generated.

> Create REST Interface for Volumes
> -
>
> Key: HDFS-8448
> URL: https://issues.apache.org/jira/browse/HDFS-8448
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: hdfs-8448-hdfs-7240.001.patch, 
> hdfs-8448-hdfs-7240.002.patch
>
>
> Create REST interfaces as specified in the architecture document.
> This Jira is for creating the Volume Interface



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8192) Eviction should key off used locked memory instead of ram disk free space

2015-06-18 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8192:

Attachment: HDFS-8192.04.patch

Thanks for the review, [~xyao]!

The v4 patch adds the following comment.

{code}
@@ -2966,6 +2966,8 @@ public void evictBlocks(long bytesNeeded) throws 
IOException {
 datanode.getMetrics().incrRamDiskBlocksEvictedWithoutRead();
   }

+  // Delete the block+meta files from RAM disk and release locked
+  // memory.
   removeOldReplica(replicaInfo, newReplicaInfo, blockFile, metaFile,
   blockFileUsed, metaFileUsed, bpid);
{code}

> Eviction should key off used locked memory instead of ram disk free space
> -
>
> Key: HDFS-8192
> URL: https://issues.apache.org/jira/browse/HDFS-8192
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8192.01.patch, HDFS-8192.02.patch, 
> HDFS-8192.03.patch, HDFS-8192.04.patch
>
>
> Eviction from RAM disk will be triggered when locked memory is insufficient 
> to allocate a new replica for writing.
> This is a follow-up to HDFS-8157. We no longer need the threshold 
> configuration keys since we won't be evicting based on thresholds any more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8617) Throttle DiskChecker#checkDirs() speed.

2015-06-18 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592685#comment-14592685
 ] 

Andrew Wang commented on HDFS-8617:
---

In this case, we've got a single DiskChecker thread, presumably with ~1 
outstanding IO. Even this small amount of extra queuing from DiskChecker is 
still impacting HBase a lot. We're not talking about 1000s of concurrent IOs 
here - which would seem like a lot in any case, since the default nr_requests 
is 128.

Maybe Esteban can provide some additional OS-level metrics, it could be that 
other effects like extra context switches are also hurting performance.

> Throttle DiskChecker#checkDirs() speed.
> ---
>
> Key: HDFS-8617
> URL: https://issues.apache.org/jira/browse/HDFS-8617
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-8617.000.patch
>
>
> As described in HDFS-8564,  {{DiskChecker.checkDirs(finalizedDir)}} is 
> causing excessive I/Os because {{finalizedDirs}} might have up to 64K 
> sub-directories (HDFS-6482).
> This patch proposes to limit the rate of IO operations in 
> {{DiskChecker.checkDirs()}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8617) Throttle DiskChecker#checkDirs() speed.

2015-06-18 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592656#comment-14592656
 ] 

Haohui Mai commented on HDFS-8617:
--

Thanks for the comments. I think we all agree that it is worth trying to 
perform {{checkDirs()}} in a background thread with low I/O priority. Just 
some quick questions regarding the comments:

bq.  NCQ means there's substantial queuing on the disk itself, which isn't 
affected by ioprios.

I might be missing something. How does NCQ become a factor if the OS has not 
yet pushed the I/O to the disks? In production we have observed that the OS 
disk I/O queue can hold as many as 1000 entries, which is two orders of 
magnitude larger than the capacity of the NCQ queue (32). Given this abundance 
of I/O requests, there is a very high chance that the OS scheduler will do a 
good job of scheduling.

bq.  Given a typical IOPS is about 100 for HDD, throttling it to 50 or less 
calls per second should consume less than 1/2 IOPS. On Ext3/4 this can be better

I'm unsure whether this is the right math. I just checked the code: 
{{checkDir()}} mostly performs read-only operations on the metadata of the 
underlying filesystem. That metadata can be fully cached, so the parameter can 
be way off (and for SSDs it would need to be recalculated). This comes back to 
the point that it is difficult to determine the right parameter for various 
configurations. The difficulty of finding that parameter leads me to believe 
that using throttling here is flawed.
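
For concreteness, the throttling being debated amounts to something like the 
sketch below. Guava's {{RateLimiter}} is one possible mechanism (an assumption 
on my part, not necessarily what the patch uses), and the 50 ops/sec figure is 
the one quoted above:

{code}
import java.io.File;
import com.google.common.util.concurrent.RateLimiter;

class ThrottledDirChecker {
  // At most ~50 directory checks per second.
  private final RateLimiter limiter = RateLimiter.create(50.0);

  void checkDirs(File dir) {
    limiter.acquire();  // block until a permit is available
    if (!dir.canRead() || !dir.canWrite() || !dir.canExecute()) {
      throw new IllegalStateException("Bad directory: " + dir);
    }
    File[] children = dir.listFiles();
    if (children == null) {
      return;
    }
    for (File child : children) {
      if (child.isDirectory()) {
        checkDirs(child);  // finalizedDirs can have up to 64K subdirs
      }
    }
  }
}
{code}

Whether a permit maps to an actual disk IOP is exactly the open question: if 
the metadata is cached, each permit costs almost nothing; if not, it can cost 
several seeks.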

> Throttle DiskChecker#checkDirs() speed.
> ---
>
> Key: HDFS-8617
> URL: https://issues.apache.org/jira/browse/HDFS-8617
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-8617.000.patch
>
>
> As described in HDFS-8564,  {{DiskChecker.checkDirs(finalizedDir)}} is 
> causing excessive I/Os because {{finalizedDirs}} might have up to 64K 
> sub-directories (HDFS-6482).
> This patch proposes to limit the rate of IO operations in 
> {{DiskChecker.checkDirs()}}. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8192) Eviction should key off used locked memory instead of ram disk free space

2015-06-18 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592653#comment-14592653
 ] 

Xiaoyu Yao commented on HDFS-8192:
--

Thanks for fixing this, [~arpitagarwal]. +1 for the v03 patch, which looks 
pretty good to me.
Only one suggestion:
Can you add a comment in FSDataSetImpl#evictBlocks() explaining how the cache 
usage is decremented upon eviction? It took me a while to figure that out in 
FsVolumeImpl#onBlockFileDeletion().

> Eviction should key off used locked memory instead of ram disk free space
> -
>
> Key: HDFS-8192
> URL: https://issues.apache.org/jira/browse/HDFS-8192
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8192.01.patch, HDFS-8192.02.patch, 
> HDFS-8192.03.patch
>
>
> Eviction from RAM disk will be triggered when locked memory is insufficient 
> to allocate a new replica for writing.
> This is a follow-up to HDFS-8157. We no longer need the threshold 
> configuration keys since we won't be evicting based on thresholds any more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8624) fix javadoc broken links

2015-06-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592639#comment-14592639
 ] 

Hadoop QA commented on HDFS-8624:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 24s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 32s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 54s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 26s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 42s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 17s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 19s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 162m 22s | Tests failed in hadoop-hdfs. |
| | | 208m 59s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestFSNamesystemMBean |
|   | hadoop.hdfs.server.namenode.TestQuotaByStorageType |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
| Timed out tests | 
org.apache.hadoop.hdfs.server.namenode.TestLargeDirectoryDelete |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12740353/HDFS-8624.01.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 6e0a9f9 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11404/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11404/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11404/console |


This message was automatically generated.

> fix javadoc broken links
> 
>
> Key: HDFS-8624
> URL: https://issues.apache.org/jira/browse/HDFS-8624
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Attachments: HDFS-8624.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8457) Ozone: Refactor FsDatasetSpi to pull up HDFS-agnostic functionality into parent interface

2015-06-18 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592614#comment-14592614
 ] 

Jitendra Nath Pandey commented on HDFS-8457:


+1

> Ozone: Refactor FsDatasetSpi to pull up HDFS-agnostic functionality into 
> parent interface
> -
>
> Key: HDFS-8457
> URL: https://issues.apache.org/jira/browse/HDFS-8457
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8457-HDFS-7240.01.patch, 
> HDFS-8457-HDFS-7240.02.patch, HDFS-8457-HDFS-7240.03.patch, 
> HDFS-8457-HDFS-7240.04.patch, HDFS-8457-HDFS-7240.05.patch, 
> HDFS-8457-HDFS-7240.06.patch, HDFS-8457-HDFS-7240.07.patch
>
>
> FsDatasetSpi can be split up into HDFS-specific and HDFS-agnostic parts. The 
> HDFS-specific parts can continue to be retained in FsDataSpi while those 
> relating to volume management, block pools and upgrade can be moved to a 
> parent interface.
> There will be no change to implementations of FsDatasetSpi.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8626) Reserved RBW space is not released if creation of RBW File fails

2015-06-18 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592599#comment-14592599
 ] 

Arpit Agarwal commented on HDFS-8626:
-

Hi [~kanaka], thank you for adding the test case. I don't think the reflection 
trick in the test is working; otherwise we'd get an exception while writing 
the file, correct?

> Reserved RBW space is not released if creation of RBW File fails
> 
>
> Key: HDFS-8626
> URL: https://issues.apache.org/jira/browse/HDFS-8626
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: kanaka kumar avvaru
>Assignee: kanaka kumar avvaru
>Priority: Blocker
> Attachments: HDFS-8626-01.patch, HDFS-8626-02.patch, 
> HDFS-8626-03.patch
>
>
> The DataNode reserves disk space for a full block when creating an RBW block 
> and releases the space when the block is finalized (introduced in HDFS-6898).
> But if the RBW file creation fails, the reserved space is not released back.
> In one scenario, when the DataNode disk is full, this causes a "no space 
> left" {{IOException}}. Even after the disk is cleaned up, the reserved space 
> is not released until the DataNode is restarted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8238) Move ClientProtocol to the hdfs-client

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592579#comment-14592579
 ] 

Hudson commented on HDFS-8238:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2178 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2178/])
HDFS-8238. Move ClientProtocol to the hdfs-client. Contributed by Takanobu 
Asanuma. (wheat9: rev b8327744884bf86b01b8998849e2a42fb9e5c249)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SafeModeException.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/SafeModeException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
Update CHANGES.txt for HDFS-8238. (wheat9: rev 
2de586f60ded874b2c962d0ca8ef2ca7cad25518)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Move ClientProtocol to the hdfs-client
> --
>
> Key: HDFS-8238
> URL: https://issues.apache.org/jira/browse/HDFS-8238
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Takanobu Asanuma
> Fix For: 2.8.0
>
> Attachments: HDFS-8238.000.patch, HDFS-8238.001.patch, 
> HDFS-8238.002.patch, HDFS-8238.003.patch
>
>
> The {{ClientProtocol}} class defines the RPC protocol between the NN and the 
> client. This jira proposes to move it into the hdfs-client module.
> The jira needs to move:
> * {{o.a.h.hdfs.server.namenode.SafeModeException}} and 
> {{o.a.h.hdfs.server.namenode.NotReplicatedYetException}} to the hdfs-client 
> package
> * Remove the reference of {{DistributedFileSystem}} in the javadoc
> * Create a copy of {{DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY}} in 
> {{HdfsClientConfigKeys}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8615) Correct HTTP method in WebHDFS document

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592575#comment-14592575
 ] 

Hudson commented on HDFS-8615:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2178 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2178/])
HDFS-8615. Correct HTTP method in WebHDFS document. Contributed by Brahma Reddy 
Battula. (aajisaka: rev 1a169a26bcc4e4bab7697965906cb9ca7ef8329e)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Correct HTTP method in WebHDFS document
> ---
>
> Key: HDFS-8615
> URL: https://issues.apache.org/jira/browse/HDFS-8615
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4.1
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
>  Labels: newbie
> Fix For: 2.8.0
>
> Attachments: HDFS-8615.patch
>
>
> For example, {{-X PUT}} should be removed from the following curl command.
> {code:title=WebHDFS.md}
> ### Get ACL Status
> * Submit a HTTP GET request.
> curl -i -X PUT "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=GETACLSTATUS"
> {code}
> Other than this example, there are several commands from which {{-X PUT}} 
> should be removed. We should fix them all.
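
For reference, the corrected form of the example above is a plain GET (the 
<HOST>, <PORT> and <PATH> placeholders follow the WebHDFS documentation):

{code}
curl -i "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=GETACLSTATUS"
{code}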



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592580#comment-14592580
 ] 

Hudson commented on HDFS-7285:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2178 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2178/])
HDFS-8605. Merge Refactor of DFSOutputStream from HDFS-7285 branch. 
(vinayakumarb) (wang: rev 1c13519e1e7588c3e2974138d37bf3449ca8b3df)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java


> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Zhe Zhang
> Attachments: ECAnalyzer.py, ECParser.py, HDFS-7285-initial-PoC.patch, 
> HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, 
> HDFSErasureCodingDesign-20150204.pdf, HDFSErasureCodingDesign-20150206.pdf, 
> HDFSErasureCodingPhaseITestPlan.pdf, fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce the storage overhead without 
> sacrificing data reliability, compared to the existing HDFS 3-replica 
> approach. For example, if we use a 10+4 Reed-Solomon coding, we can tolerate 
> the loss of 4 blocks, with a storage overhead of only 40%. This makes EC a 
> quite attractive alternative for big data storage, particularly for cold 
> data.
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contributed packages in HDFS but has been removed since Hadoop 
> 2.0 for maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS 
> and depends on MapReduce to do encoding and decoding tasks; 2) it can only 
> be used for cold files that are not intended to be appended anymore; 3) the 
> pure Java EC coding implementation is extremely slow in practical use. Due 
> to these, it might not be a good idea to just bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that 
> gets rid of any external dependencies, making it self-contained and 
> independently maintained. This design lays the EC feature on top of the 
> storage type support and aims to be compatible with existing HDFS features 
> like caching, snapshots, encryption, and high availability. The design will 
> also support different EC coding schemes, implementations, and policies for 
> different deployment scenarios. By utilizing advanced libraries (e.g. the 
> Intel ISA-L library), an implementation can greatly improve the performance 
> of EC encoding/decoding and make the EC solution even more attractive. We 
> will post the design document soon.
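
A quick check of the overhead figures quoted in the description:

{noformat}
RS(10,4):   10 data + 4 parity = 14 blocks stored for 10 blocks of data
            overhead = 4/10 = 40%, tolerates loss of any 4 of the 14
3-replica:  3 copies stored for 1 block of data
            overhead = 2/1  = 200%, tolerates loss of any 2 of the 3
{noformat}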



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8446) Separate safemode related operations in GetBlockLocations()

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592576#comment-14592576
 ] 

Hudson commented on HDFS-8446:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2178 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2178/])
HDFS-8446. Separate safemode related operations in GetBlockLocations(). 
Contributed by Haohui Mai. (wheat9: rev 
015535dc0ad00c2ba357afb3d1e283e56ddda0d6)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java


> Separate safemode related operations in GetBlockLocations()
> ---
>
> Key: HDFS-8446
> URL: https://issues.apache.org/jira/browse/HDFS-8446
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8446.000.patch, HDFS-8446.001.patch, 
> HDFS-8446.002.patch, HDFS-8446.003.patch
>
>
> Currently {{FSNamesystem#GetBlockLocations()}} has some special cases when 
> the NN is in SafeMode. This jira proposes to refactor the code to improve 
> readability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8589) Fix unused imports in BPServiceActor and BlockReportLeaseManager

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592571#comment-14592571
 ] 

Hudson commented on HDFS-8589:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2178 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2178/])
HDFS-8589. Fix unused imports in BPServiceActor and BlockReportLeaseManager 
(cmccabe) (cmccabe: rev 45ced38f10fcb9f831218b890786aaeb7987fed4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockReportLeaseManager.java


> Fix unused imports in BPServiceActor and BlockReportLeaseManager
> 
>
> Key: HDFS-8589
> URL: https://issues.apache.org/jira/browse/HDFS-8589
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HDFS-8589.001.patch
>
>
> Fix unused imports in BPServiceActor and BlockReportLeaseManager



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6249) Output AclEntry in PBImageXmlWriter

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592581#comment-14592581
 ] 

Hudson commented on HDFS-6249:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2178 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2178/])
HDFS-6249. Output AclEntry in PBImageXmlWriter. Contributed by surendra singh 
lilhore. (aajisaka: rev cc432885adb0182c2c5b3bf92edde12231fd567c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForAcl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Output AclEntry in PBImageXmlWriter
> ---
>
> Key: HDFS-6249
> URL: https://issues.apache.org/jira/browse/HDFS-6249
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Affects Versions: 2.4.0
>Reporter: Akira AJISAKA
>Assignee: surendra singh lilhore
>Priority: Minor
>  Labels: newbie
> Fix For: 2.8.0
>
> Attachments: HDFS-6249.patch, HDFS-6249_1.patch
>
>
> It would be useful if {{PBImageXmlWriter}} outputs {{AclEntry}} also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8605) Merge Refactor of DFSOutputStream from HDFS-7285 branch

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592574#comment-14592574
 ] 

Hudson commented on HDFS-8605:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2178 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2178/])
HDFS-8605. Merge Refactor of DFSOutputStream from HDFS-7285 branch. 
(vinayakumarb) (wang: rev 1c13519e1e7588c3e2974138d37bf3449ca8b3df)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java


> Merge Refactor of DFSOutputStream from HDFS-7285 branch
> ---
>
> Key: HDFS-8605
> URL: https://issues.apache.org/jira/browse/HDFS-8605
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.8.0
>
> Attachments: HDFS-8605-01.patch, HDFS-8605-02.patch, 
> HDFS-8605-03.patch
>
>
> Merging the refactor of DFSOutputStream from the HDFS-7285 branch.
> This will make things easier when periodically merging changes from trunk to 
> HDFS-7285.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8605) Merge Refactor of DFSOutputStream from HDFS-7285 branch

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592567#comment-14592567
 ] 

Hudson commented on HDFS-8605:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8038 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8038/])
HDFS-8605. Merge Refactor of DFSOutputStream from HDFS-7285 branch. 
(vinayakumarb) (wang: rev 1c13519e1e7588c3e2974138d37bf3449ca8b3df)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java


> Merge Refactor of DFSOutputStream from HDFS-7285 branch
> ---
>
> Key: HDFS-8605
> URL: https://issues.apache.org/jira/browse/HDFS-8605
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.8.0
>
> Attachments: HDFS-8605-01.patch, HDFS-8605-02.patch, 
> HDFS-8605-03.patch
>
>
> Merging the refactor of DFSOutputStream from the HDFS-7285 branch.
> This will make things easier when periodically merging changes from trunk to 
> HDFS-7285.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592566#comment-14592566
 ] 

Hudson commented on HDFS-7285:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8038 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8038/])
HDFS-8605. Merge Refactor of DFSOutputStream from HDFS-7285 branch. 
(vinayakumarb) (wang: rev 1c13519e1e7588c3e2974138d37bf3449ca8b3df)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java


> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Zhe Zhang
> Attachments: ECAnalyzer.py, ECParser.py, HDFS-7285-initial-PoC.patch, 
> HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, 
> HDFSErasureCodingDesign-20150204.pdf, HDFSErasureCodingDesign-20150206.pdf, 
> HDFSErasureCodingPhaseITestPlan.pdf, fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce the storage overhead without 
> sacrificing data reliability, compared to the existing HDFS 3-replica 
> approach. For example, if we use a 10+4 Reed-Solomon coding, we can tolerate 
> the loss of 4 blocks, with a storage overhead of only 40%. This makes EC a 
> quite attractive alternative for big data storage, particularly for cold 
> data.
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contributed packages in HDFS but has been removed since Hadoop 
> 2.0 for maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS 
> and depends on MapReduce to do encoding and decoding tasks; 2) it can only 
> be used for cold files that are not intended to be appended anymore; 3) the 
> pure Java EC coding implementation is extremely slow in practical use. Due 
> to these, it might not be a good idea to just bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that 
> gets rid of any external dependencies, making it self-contained and 
> independently maintained. This design lays the EC feature on top of the 
> storage type support and aims to be compatible with existing HDFS features 
> like caching, snapshots, encryption, and high availability. The design will 
> also support different EC coding schemes, implementations, and policies for 
> different deployment scenarios. By utilizing advanced libraries (e.g. the 
> Intel ISA-L library), an implementation can greatly improve the performance 
> of EC encoding/decoding and make the EC solution even more attractive. We 
> will post the design document soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8457) Ozone: Refactor FsDatasetSpi to pull up HDFS-agnostic functionality into parent interface

2015-06-18 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8457:

Attachment: HDFS-8457-HDFS-7240.07.patch

> Ozone: Refactor FsDatasetSpi to pull up HDFS-agnostic functionality into 
> parent interface
> -
>
> Key: HDFS-8457
> URL: https://issues.apache.org/jira/browse/HDFS-8457
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8457-HDFS-7240.01.patch, 
> HDFS-8457-HDFS-7240.02.patch, HDFS-8457-HDFS-7240.03.patch, 
> HDFS-8457-HDFS-7240.04.patch, HDFS-8457-HDFS-7240.05.patch, 
> HDFS-8457-HDFS-7240.06.patch, HDFS-8457-HDFS-7240.07.patch
>
>
> FsDatasetSpi can be split up into HDFS-specific and HDFS-agnostic parts. The 
> HDFS-specific parts can continue to be retained in FsDataSpi while those 
> relating to volume management, block pools and upgrade can be moved to a 
> parent interface.
> There will be no change to implementations of FsDatasetSpi.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8457) Ozone: Refactor FsDatasetSpi to pull up HDFS-agnostic functionality into parent interface

2015-06-18 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8457:

Attachment: HDFS-8457.06-07.delta

Thanks for the detailed review, [~jnp]! The v7 patch addresses all your 
comments. For #1, I wanted to avoid repeated reflection, but having the 
reflection is probably cleaner to read.

bq. What is the reason for not moving public List<FinalizedReplica> 
getFinalizedBlocks(String bpid) to DatasetSpi?
This method is only used by the block scanner, which is an FsDataset-specific 
implementation. Also, I am not sure the concept of 'finalize' will be helpful 
for other Ozone containers.

One additional change in the v7 patch is that I moved {{initReplicaRecovery}} 
and {{updateReplicaUnderRecovery}} back to FsDatasetSpi. We can revisit this 
when we get to replica recovery for Ozone containers.
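
As a rough sketch, the shape of the split looks like the following. The method 
signatures are simplified stand-ins chosen for illustration, not the real SPI:

{code}
import java.io.IOException;
import java.util.List;

// Parent interface: HDFS-agnostic pieces (volume management, block
// pools, upgrade) that another dataset, e.g. an Ozone container
// store, could also implement.
interface DatasetSpi {
  void addBlockPool(String bpid) throws IOException;
  void shutdownBlockPool(String bpid);
}

// Child interface: HDFS replica semantics stay here. Per the v7
// patch, replica recovery also remains in FsDatasetSpi for now.
interface FsDatasetSpi extends DatasetSpi {
  long initReplicaRecovery(String bpid, long blockId) throws IOException;
  // Only the HDFS block scanner needs the finalized-block listing.
  List<String> getFinalizedBlocks(String bpid);
}
{code}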

> Ozone: Refactor FsDatasetSpi to pull up HDFS-agnostic functionality into 
> parent interface
> -
>
> Key: HDFS-8457
> URL: https://issues.apache.org/jira/browse/HDFS-8457
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8457-HDFS-7240.01.patch, 
> HDFS-8457-HDFS-7240.02.patch, HDFS-8457-HDFS-7240.03.patch, 
> HDFS-8457-HDFS-7240.04.patch, HDFS-8457-HDFS-7240.05.patch, 
> HDFS-8457-HDFS-7240.06.patch
>
>
> FsDatasetSpi can be split up into HDFS-specific and HDFS-agnostic parts. The 
> HDFS-specific parts can continue to be retained in FsDataSpi while those 
> relating to volume management, block pools and upgrade can be moved to a 
> parent interface.
> There will be no change to implementations of FsDatasetSpi.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8457) Ozone: Refactor FsDatasetSpi to pull up HDFS-agnostic functionality into parent interface

2015-06-18 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8457:

Attachment: (was: HDFS-8457.06-07.delta)

> Ozone: Refactor FsDatasetSpi to pull up HDFS-agnostic functionality into 
> parent interface
> -
>
> Key: HDFS-8457
> URL: https://issues.apache.org/jira/browse/HDFS-8457
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8457-HDFS-7240.01.patch, 
> HDFS-8457-HDFS-7240.02.patch, HDFS-8457-HDFS-7240.03.patch, 
> HDFS-8457-HDFS-7240.04.patch, HDFS-8457-HDFS-7240.05.patch, 
> HDFS-8457-HDFS-7240.06.patch
>
>
> FsDatasetSpi can be split up into HDFS-specific and HDFS-agnostic parts. The 
> HDFS-specific parts can continue to be retained in FsDataSpi while those 
> relating to volume management, block pools and upgrade can be moved to a 
> parent interface.
> There will be no change to implementations of FsDatasetSpi.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6249) Output AclEntry in PBImageXmlWriter

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592558#comment-14592558
 ] 

Hudson commented on HDFS-6249:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #230 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/230/])
HDFS-6249. Output AclEntry in PBImageXmlWriter. Contributed by surendra singh 
lilhore. (aajisaka: rev cc432885adb0182c2c5b3bf92edde12231fd567c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForAcl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Output AclEntry in PBImageXmlWriter
> ---
>
> Key: HDFS-6249
> URL: https://issues.apache.org/jira/browse/HDFS-6249
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Affects Versions: 2.4.0
>Reporter: Akira AJISAKA
>Assignee: surendra singh lilhore
>Priority: Minor
>  Labels: newbie
> Fix For: 2.8.0
>
> Attachments: HDFS-6249.patch, HDFS-6249_1.patch
>
>
> It would be useful if {{PBImageXmlWriter}} outputs {{AclEntry}} also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8238) Move ClientProtocol to the hdfs-client

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592556#comment-14592556
 ] 

Hudson commented on HDFS-8238:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #230 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/230/])
HDFS-8238. Move ClientProtocol to the hdfs-client. Contributed by Takanobu 
Asanuma. (wheat9: rev b8327744884bf86b01b8998849e2a42fb9e5c249)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/SafeModeException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SafeModeException.java
Update CHANGES.txt for HDFS-8238. (wheat9: rev 
2de586f60ded874b2c962d0ca8ef2ca7cad25518)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Move ClientProtocol to the hdfs-client
> --
>
> Key: HDFS-8238
> URL: https://issues.apache.org/jira/browse/HDFS-8238
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Takanobu Asanuma
> Fix For: 2.8.0
>
> Attachments: HDFS-8238.000.patch, HDFS-8238.001.patch, 
> HDFS-8238.002.patch, HDFS-8238.003.patch
>
>
> The {{ClientProtocol}} class defines the RPC protocol between the NN and the 
> client. This jira proposes to move it into the hdfs-client module.
> The jira needs to move:
> * {{o.a.h.hdfs.server.namenode.SafeModeException}} and 
> {{o.a.h.hdfs.server.namenode.NotReplicatedYetException}} to the hdfs-client 
> package
> * Remove the reference of {{DistributedFileSystem}} in the javadoc
> * Create a copy of {{DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY}} in 
> {{HdfsClientConfigKeys}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8605) Merge Refactor of DFSOutputStream from HDFS-7285 branch

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592551#comment-14592551
 ] 

Hudson commented on HDFS-8605:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #230 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/230/])
HDFS-8605. Merge Refactor of DFSOutputStream from HDFS-7285 branch. 
(vinayakumarb) (wang: rev 1c13519e1e7588c3e2974138d37bf3449ca8b3df)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java


> Merge Refactor of DFSOutputStream from HDFS-7285 branch
> ---
>
> Key: HDFS-8605
> URL: https://issues.apache.org/jira/browse/HDFS-8605
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.8.0
>
> Attachments: HDFS-8605-01.patch, HDFS-8605-02.patch, 
> HDFS-8605-03.patch
>
>
> Merging the refactor of DFSOutputStream from the HDFS-7285 branch.
> This will make things easier when periodically merging changes from trunk to 
> HDFS-7285.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592557#comment-14592557
 ] 

Hudson commented on HDFS-7285:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #230 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/230/])
HDFS-8605. Merge Refactor of DFSOutputStream from HDFS-7285 branch. 
(vinayakumarb) (wang: rev 1c13519e1e7588c3e2974138d37bf3449ca8b3df)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java


> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Zhe Zhang
> Attachments: ECAnalyzer.py, ECParser.py, HDFS-7285-initial-PoC.patch, 
> HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, 
> HDFSErasureCodingDesign-20150204.pdf, HDFSErasureCodingDesign-20150206.pdf, 
> HDFSErasureCodingPhaseITestPlan.pdf, fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce the storage overhead without 
> sacrificing data reliability, compared to the existing HDFS 3-replica 
> approach. For example, if we use a 10+4 Reed-Solomon coding, we can tolerate 
> the loss of 4 blocks, with a storage overhead of only 40%. This makes EC a 
> quite attractive alternative for big data storage, particularly for cold 
> data.
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contributed packages in HDFS but has been removed since Hadoop 
> 2.0 for maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS 
> and depends on MapReduce to do encoding and decoding tasks; 2) it can only 
> be used for cold files that are not intended to be appended anymore; 3) the 
> pure Java EC coding implementation is extremely slow in practical use. Due 
> to these, it might not be a good idea to just bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that 
> gets rid of any external dependencies, making it self-contained and 
> independently maintained. This design lays the EC feature on top of the 
> storage type support and aims to be compatible with existing HDFS features 
> like caching, snapshots, encryption, and high availability. The design will 
> also support different EC coding schemes, implementations, and policies for 
> different deployment scenarios. By utilizing advanced libraries (e.g. the 
> Intel ISA-L library), an implementation can greatly improve the performance 
> of EC encoding/decoding and make the EC solution even more attractive. We 
> will post the design document soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8615) Correct HTTP method in WebHDFS document

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592552#comment-14592552
 ] 

Hudson commented on HDFS-8615:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #230 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/230/])
HDFS-8615. Correct HTTP method in WebHDFS document. Contributed by Brahma Reddy 
Battula. (aajisaka: rev 1a169a26bcc4e4bab7697965906cb9ca7ef8329e)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md


> Correct HTTP method in WebHDFS document
> ---
>
> Key: HDFS-8615
> URL: https://issues.apache.org/jira/browse/HDFS-8615
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4.1
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
>  Labels: newbie
> Fix For: 2.8.0
>
> Attachments: HDFS-8615.patch
>
>
> For example, {{-X PUT}} should be removed from the following curl command.
> {code:title=WebHDFS.md}
> ### Get ACL Status
> * Submit a HTTP GET request.
> curl -i -X PUT "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=GETACLSTATUS"
> {code}
> Other than this example, there are several commands from which {{-X PUT}} 
> should be removed. We should fix them all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8446) Separate safemode related operations in GetBlockLocations()

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592553#comment-14592553
 ] 

Hudson commented on HDFS-8446:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #230 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/230/])
HDFS-8446. Separate safemode related operations in GetBlockLocations(). 
Contributed by Haohui Mai. (wheat9: rev 
015535dc0ad00c2ba357afb3d1e283e56ddda0d6)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java


> Separate safemode related operations in GetBlockLocations()
> ---
>
> Key: HDFS-8446
> URL: https://issues.apache.org/jira/browse/HDFS-8446
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8446.000.patch, HDFS-8446.001.patch, 
> HDFS-8446.002.patch, HDFS-8446.003.patch
>
>
> Currently {{FSNamesystem#GetBlockLocations()}} has some special cases when 
> the NN is in SafeMode. This jira proposes to refactor the code to improve 
> readability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8589) Fix unused imports in BPServiceActor and BlockReportLeaseManager

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592548#comment-14592548
 ] 

Hudson commented on HDFS-8589:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #230 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/230/])
HDFS-8589. Fix unused imports in BPServiceActor and BlockReportLeaseManager 
(cmccabe) (cmccabe: rev 45ced38f10fcb9f831218b890786aaeb7987fed4)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockReportLeaseManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java


> Fix unused imports in BPServiceActor and BlockReportLeaseManager
> 
>
> Key: HDFS-8589
> URL: https://issues.apache.org/jira/browse/HDFS-8589
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HDFS-8589.001.patch
>
>
> Fix unused imports in BPServiceActor and BlockReportLeaseManager



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8406) Lease recovery continually failed

2015-06-18 Thread Keith Turner (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592542#comment-14592542
 ] 

Keith Turner commented on HDFS-8406:


I ran into this again while doing more Accumulo testing. When I ran fsck, I 
noticed it complained that only 2 of 3 replicas were found.

{noformat}
$ hdfs fsck /accumulo/wal/worker9+9997/edd1e126-2a9d-4e4d-bfaf-b1d9297fbe25 
-openforwrite -files -blocks -locations
FSCK started by ec2-user (auth:SIMPLE) from /10.1.5.85 for path 
/accumulo/wal/worker9+9997/edd1e126-2a9d-4e4d-bfaf-b1d9297fbe25 at Thu Jun 18 
20:52:47 UTC 2015
/accumulo/wal/worker9+9997/edd1e126-2a9d-4e4d-bfaf-b1d9297fbe25 583619497 
bytes, 1 block(s), OPENFORWRITE:  Under replicated 
BP-16428079-10.1.5.35-1434651935105:blk_1073745249_4675{blockUCState=COMMITTED, 
primaryNodeIndex=2, 
replicas=[ReplicaUnderConstruction[[DISK]DS-f29e50f3-055a-4970-aa19-848e8f3caba5:NORMAL:10.1.5.137:50010|RBW],
 
ReplicaUnderConstruction[[DISK]DS-057e290c-012c-4a48-b64d-a9d540984f18:NORMAL:10.1.5.234:50010|RBW],
 
ReplicaUnderConstruction[[DISK]DS-96cebc69-2159-4551-a2aa-8651d4d361d7:NORMAL:10.1.5.174:50010|RWR]]}.
 Target Replicas is 3 but found 2 replica(s).
{noformat}

The message above says {{Target Replicas is 3 but found 2 replica(s)}}; 
however, the {{replicas=...}} section of the same message lists three replicas. 
I went to the 3 datanodes listed and found that the block existed on each node 
with the same md5 checksum.
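
For reference, the lease recovery the Accumulo master attempts boils down to 
something like the following minimal sketch (assuming an HDFS path on a 
{{DistributedFileSystem}}; {{recoverLease}} returns true once the file is 
closed):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class LeaseRecoverySketch {
  /** Poll recoverLease() until the NameNode reports the file closed. */
  public static void recover(Path walPath) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs =
        (DistributedFileSystem) walPath.getFileSystem(conf);
    // recoverLease() triggers lease recovery on the NameNode and returns
    // true once the last block is recovered and the file is closed.
    while (!dfs.recoverLease(walPath)) {
      Thread.sleep(1000); // block recovery is asynchronous; poll
    }
  }
}
{code}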

> Lease recovery continually failed
> -
>
> Key: HDFS-8406
> URL: https://issues.apache.org/jira/browse/HDFS-8406
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Keith Turner
>
> While testing Accumulo on a cluster and killing processes, I ran into a 
> situation where the lease on an accumulo write ahead log in HDFS could not be 
> recovered.   Even restarting HDFS and Accumulo would not fix the problem.
> The following message was seen in an Accumulo tablet server log immediately 
> before the tablet server was killed.
> {noformat}
> 2015-05-14 17:12:37,466 [hdfs.DFSClient] WARN : DFSOutputStream 
> ResponseProcessor exception  for block 
> BP-802741494-10.1.5.6-1431557089849:blk_1073932823_192060
> java.io.IOException: Bad response ERROR for block 
> BP-802741494-10.1.5.6-1431557089849:blk_1073932823_192060 from datanode 
> 10.1.5.9:50010
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:897)
> 2015-05-14 17:12:37,466 [hdfs.DFSClient] WARN : Error Recovery for block 
> BP-802741494-10.1.5.6-1431557089849:blk_1073932823_192060 in pipeline 
> 10.1.5.55:50010, 10.1.5.9:5
> {noformat}
> Before recovering data from a write ahead log, the Accumulo master attempts 
> to recover the lease.   This repeatedly failed with messages like the 
> following.
> {noformat}
> 2015-05-14 17:14:54,301 [recovery.HadoopLogCloser] WARN : Error recovering 
> lease on 
> hdfs://10.1.5.6:1/accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException):
>  failed to create file 
> /accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2 for 
> DFSClient_NONMAPREDUCE_950713214_16 for client 10.1.5.158 because 
> pendingCreates is non-null but no leases found.
> {noformat}
> Below is some info from the NN logs for the problematic file.
> {noformat}
> [ec2-user@leader2 logs]$ grep 3a731759-3594-4535-8086-245 
> hadoop-ec2-user-namenode-leader2.log 
> 2015-05-14 17:10:46,299 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocateBlock: 
> /accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2. 
> BP-802741494-10.1.5.6-1431557089849 
> blk_1073932823_192060{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-ffe07d7d-0e68-45b8-b3d5-c976f1716481:NORMAL:10.1.5.55:50010|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-6efec702-3f1f-4ec0-a31f-de947e7e6097:NORMAL:10.1.5.9:50010|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-5e27df17-abf8-47df-b4bc-c38d0cd426ea:NORMAL:10.1.5.45:50010|RBW]]}
> 2015-05-14 17:10:46,628 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> fsync: /accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2 for 
> DFSClient_NONMAPREDUCE_-1128465883_16
> 2015-05-14 17:14:49,288 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: recoverLease: [Lease.  
> Holder: DFSClient_NONMAPREDUCE_-1128465883_16, pendingcreates: 1], 
> src=/accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2 from 
> client DFSClient_NONMAPREDUCE_-1128465883_16
> 2015-05-14 17:14:49,288 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
> Holder: DFSClient_NONMAPREDUCE_-1128465883_16, pendingcreates: 1], 
> src=/

[jira] [Created] (HDFS-8637) OzoneHandler : Add Error Table

2015-06-18 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-8637:
--

 Summary: OzoneHandler : Add Error Table
 Key: HDFS-8637
 URL: https://issues.apache.org/jira/browse/HDFS-8637
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Anu Engineer
Assignee: Anu Engineer


Define all the errors coming out of the REST protocol. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8626) Reserved RBW space is not released if creation of RBW File fails

2015-06-18 Thread kanaka kumar avvaru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592488#comment-14592488
 ] 

kanaka kumar avvaru commented on HDFS-8626:
---

Updated patch with test case. [~arpitagarwal], please review.

> Reserved RBW space is not released if creation of RBW File fails
> 
>
> Key: HDFS-8626
> URL: https://issues.apache.org/jira/browse/HDFS-8626
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: kanaka kumar avvaru
>Assignee: kanaka kumar avvaru
>Priority: Blocker
> Attachments: HDFS-8626-01.patch, HDFS-8626-02.patch, 
> HDFS-8626-03.patch
>
>
> The DataNode reserves disk space for a full block when creating an RBW block 
> and will release the space when the block is finalized (introduced in 
> HDFS-6898). 
> But if the RBW file creation fails, the reserved space is not released back. 
> For example, when the DataNode disk is full, this causes a no-space-left 
> {{IOException}}. Even if the disk is later cleaned, the reserved space is not 
> released until the DataNode is restarted.
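
To make the leak concrete, here is a minimal sketch of the reserve/release 
discipline involved (hypothetical {{Volume}} type and method names, not the 
actual FsVolumeImpl code):

{code}
import java.io.File;
import java.io.IOException;

/** Sketch: reserve a full block before creating the RBW file, and release
 *  the reservation if creation fails, so a failed create cannot leak space.
 *  The Volume interface here is hypothetical, for illustration only. */
class RbwCreateSketch {
  interface Volume {
    void reserveSpace(long bytes);
    void releaseReservedSpace(long bytes);
    File createRbwFile(String blockId) throws IOException;
  }

  static File createRbw(Volume vol, String blockId, long blockSize)
      throws IOException {
    vol.reserveSpace(blockSize);           // reserve a full block up front
    try {
      return vol.createRbwFile(blockId);   // may throw, e.g. disk full
    } catch (IOException e) {
      vol.releaseReservedSpace(blockSize); // the fix: undo the reservation
      throw e;
    }
  }
}
{code}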



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8626) Reserved RBW space is not released if creation of RBW File fails

2015-06-18 Thread kanaka kumar avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kanaka kumar avvaru updated HDFS-8626:
--
Attachment: HDFS-8626-03.patch

> Reserved RBW space is not released if creation of RBW File fails
> 
>
> Key: HDFS-8626
> URL: https://issues.apache.org/jira/browse/HDFS-8626
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: kanaka kumar avvaru
>Assignee: kanaka kumar avvaru
>Priority: Blocker
> Attachments: HDFS-8626-01.patch, HDFS-8626-02.patch, 
> HDFS-8626-03.patch
>
>
> The DataNode reserves disk space for a full block when creating an RBW block 
> and will release the space when the block is finalized (introduced in 
> HDFS-6898). 
> But if the RBW file creation fails, the reserved space is not released back. 
> For example, when the DataNode disk is full, this causes a no-space-left 
> {{IOException}}. Even if the disk is later cleaned, the reserved space is not 
> released until the DataNode is restarted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8078) HDFS client gets errors trying to to connect to IPv6 DataNode

2015-06-18 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592426#comment-14592426
 ] 

Elliott Clark commented on HDFS-8078:
-

+1 (non-binding) on the latest patch.

I agree that more unit testing and other work will be needed before we can 
declare full IPv6 support. However, this is a pretty huge step in the right 
direction that we shouldn't let bit-rot. It includes tests to keep regressions 
from popping up in this code, and it has been tested on a real cluster.


> HDFS client gets errors trying to to connect to IPv6 DataNode
> -
>
> Key: HDFS-8078
> URL: https://issues.apache.org/jira/browse/HDFS-8078
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: Nate Edel
>Assignee: Nate Edel
>  Labels: BB2015-05-TBR, ipv6
> Attachments: HDFS-8078.10.patch, HDFS-8078.9.patch
>
>
> 1st exception, on put:
> 15/03/23 18:43:18 WARN hdfs.DFSClient: DataStreamer Exception
> java.lang.IllegalArgumentException: Does not contain a valid host:port 
> authority: 2401:db00:1010:70ba:face:0:8:0:50010
>   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
>   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
>   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1607)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1408)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
> Appears to actually stem from code in DataNodeID which assumes it's safe to 
> append together (ipaddr + ":" + port) -- which is OK for IPv4 and not OK for 
> IPv6.  NetUtils.createSocketAddr( ) assembles a Java URI object, which 
> requires the format proto://[2401:db00:1010:70ba:face:0:8:0]:50010
> Currently using InetAddress.getByName() to validate IPv6 (guava 
> InetAddresses.forString has been flaky) but could also use our own parsing. 
> (From logging this, it seems like a low-enough frequency call that the extra 
> object creation shouldn't be problematic, and for me the slight risk of 
> passing in bad input that is not actually an IPv4 or IPv6 address and thus 
> calling an external DNS lookup is outweighed by getting the address 
> normalized and avoiding rewriting parsing.)
> Alternatively, sun.net.util.IPAddressUtil.isIPv6LiteralAddress()
> ---
> 2nd exception (on datanode)
> 15/04/13 13:18:07 ERROR datanode.DataNode: 
> dev1903.prn1.facebook.com:50010:DataXceiver error processing unknown 
> operation  src: /2401:db00:20:7013:face:0:7:0:54152 dst: 
> /2401:db00:11:d010:face:0:2f:0:50010
> java.io.EOFException
> at java.io.DataInputStream.readShort(DataInputStream.java:315)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:226)
> at java.lang.Thread.run(Thread.java:745)
> Which also comes as client error "-get: 2401 is not an IP string literal."
> This one has existing parsing logic which needs to shift to the last colon 
> rather than the first.  Should also be a tiny bit faster by using lastIndexOf 
> rather than split.  Could alternatively use the techniques above.
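
A minimal sketch of the two fixes described above (hypothetical helper names, 
not the actual DatanodeID/NetUtils code): bracket IPv6 literals when joining, 
and split on the last colon when parsing:

{code}
class Ipv6AuthoritySketch {
  /** Join an IP literal and a port; IPv6 literals must be bracketed. */
  static String toAuthority(String ipAddr, int port) {
    if (ipAddr.contains(":")) {            // a colon implies an IPv6 literal
      return "[" + ipAddr + "]:" + port;
    }
    return ipAddr + ":" + port;
  }

  /** Split an authority on the LAST colon so IPv6 literals survive. */
  static String[] splitAuthority(String authority) {
    int i = authority.lastIndexOf(':');
    if (i < 0) {
      throw new IllegalArgumentException("no port in: " + authority);
    }
    String host = authority.substring(0, i);
    String port = authority.substring(i + 1);
    if (host.startsWith("[") && host.endsWith("]")) {
      host = host.substring(1, host.length() - 1); // strip the brackets
    }
    return new String[] { host, port };
  }
}
{code}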



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8608) Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of Block in UnderReplicatedBlocks and PendingReplicationBlocks)

2015-06-18 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8608:

Attachment: HDFS-4366-branch-2.01.patch

> Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of Block in 
> UnderReplicatedBlocks and PendingReplicationBlocks)
> --
>
> Key: HDFS-8608
> URL: https://issues.apache.org/jira/browse/HDFS-8608
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 3.0.0
>
> Attachments: HDFS-4366-branch-2.00.patch, 
> HDFS-4366-branch-2.01.patch, HDFS-8608.00.patch, HDFS-8608.01.patch, 
> HDFS-8608.02.patch
>
>
> This JIRA aims to merge HDFS-7912 into trunk to minimize the final patch when 
> merging the HDFS-7285 (erasure coding) branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8608) Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of Block in UnderReplicatedBlocks and PendingReplicationBlocks)

2015-06-18 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8608:

Attachment: (was: HDFS-4366-branch-2.01.patch)

> Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of Block in 
> UnderReplicatedBlocks and PendingReplicationBlocks)
> --
>
> Key: HDFS-8608
> URL: https://issues.apache.org/jira/browse/HDFS-8608
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 3.0.0
>
> Attachments: HDFS-4366-branch-2.00.patch, 
> HDFS-4366-branch-2.01.patch, HDFS-8608.00.patch, HDFS-8608.01.patch, 
> HDFS-8608.02.patch
>
>
> This JIRA aims to merge HDFS-7912 into trunk to minimize the final patch when 
> merging the HDFS-7285 (erasure coding) branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8238) Move ClientProtocol to the hdfs-client

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592342#comment-14592342
 ] 

Hudson commented on HDFS-8238:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #221 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/221/])
HDFS-8238. Move ClientProtocol to the hdfs-client. Contributed by Takanobu 
Asanuma. (wheat9: rev b8327744884bf86b01b8998849e2a42fb9e5c249)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/SafeModeException.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SafeModeException.java
Update CHANGES.txt for HDFS-8238. (wheat9: rev 
2de586f60ded874b2c962d0ca8ef2ca7cad25518)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Move ClientProtocol to the hdfs-client
> --
>
> Key: HDFS-8238
> URL: https://issues.apache.org/jira/browse/HDFS-8238
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Takanobu Asanuma
> Fix For: 2.8.0
>
> Attachments: HDFS-8238.000.patch, HDFS-8238.001.patch, 
> HDFS-8238.002.patch, HDFS-8238.003.patch
>
>
> The {{ClientProtocol}} class defines the RPC protocol between the NN and the 
> client. This jira proposes to move it into the hdfs-client module.
> The jira needs to move:
> * {{o.a.h.hdfs.server.namenode.SafeModeException}} and 
> {{o.a.h.hdfs.server.namenode.NotReplicatedYetException}} to the hdfs-client 
> package
> * Remove the reference of {{DistributedFileSystem}} in the javadoc
> * Create a copy of {{DFSConfigKeys.DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY}} in 
> {{HdfsClientConfigKeys}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8446) Separate safemode related operations in GetBlockLocations()

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592346#comment-14592346
 ] 

Hudson commented on HDFS-8446:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #221 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/221/])
HDFS-8446. Separate safemode related operations in GetBlockLocations(). 
Contributed by Haohui Mai. (wheat9: rev 
015535dc0ad00c2ba357afb3d1e283e56ddda0d6)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java


> Separate safemode related operations in GetBlockLocations()
> ---
>
> Key: HDFS-8446
> URL: https://issues.apache.org/jira/browse/HDFS-8446
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8446.000.patch, HDFS-8446.001.patch, 
> HDFS-8446.002.patch, HDFS-8446.003.patch
>
>
> Currently {{FSNamesystem#GetBlockLocations()}} has some special cases when 
> the NN is in SafeMode. This jira proposes to refactor the code to improve 
> readability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6249) Output AclEntry in PBImageXmlWriter

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592344#comment-14592344
 ] 

Hudson commented on HDFS-6249:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #221 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/221/])
HDFS-6249. Output AclEntry in PBImageXmlWriter. Contributed by surendra singh 
lilhore. (aajisaka: rev cc432885adb0182c2c5b3bf92edde12231fd567c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForAcl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Output AclEntry in PBImageXmlWriter
> ---
>
> Key: HDFS-6249
> URL: https://issues.apache.org/jira/browse/HDFS-6249
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Affects Versions: 2.4.0
>Reporter: Akira AJISAKA
>Assignee: surendra singh lilhore
>Priority: Minor
>  Labels: newbie
> Fix For: 2.8.0
>
> Attachments: HDFS-6249.patch, HDFS-6249_1.patch
>
>
> It would be useful if {{PBImageXmlWriter}} outputs {{AclEntry}} also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8605) Merge Refactor of DFSOutputStream from HDFS-7285 branch

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592343#comment-14592343
 ] 

Hudson commented on HDFS-8605:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #221 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/221/])
HDFS-8605. Merge Refactor of DFSOutputStream from HDFS-7285 branch. 
(vinayakumarb) (wang: rev 1c13519e1e7588c3e2974138d37bf3449ca8b3df)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Merge Refactor of DFSOutputStream from HDFS-7285 branch
> ---
>
> Key: HDFS-8605
> URL: https://issues.apache.org/jira/browse/HDFS-8605
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 2.8.0
>
> Attachments: HDFS-8605-01.patch, HDFS-8605-02.patch, 
> HDFS-8605-03.patch
>
>
> Merging the refactor of DFSOutputStream from the HDFS-7285 branch.
> This will make it easier to periodically merge changes from trunk into 
> HDFS-7285.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592341#comment-14592341
 ] 

Hudson commented on HDFS-7285:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #221 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/221/])
HDFS-8605. Merge Refactor of DFSOutputStream from HDFS-7285 branch. 
(vinayakumarb) (wang: rev 1c13519e1e7588c3e2974138d37bf3449ca8b3df)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java


> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Zhe Zhang
> Attachments: ECAnalyzer.py, ECParser.py, HDFS-7285-initial-PoC.patch, 
> HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, 
> HDFSErasureCodingDesign-20150204.pdf, HDFSErasureCodingDesign-20150206.pdf, 
> HDFSErasureCodingPhaseITestPlan.pdf, fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce storage overhead without sacrificing 
> data reliability, compared to the existing HDFS 3-replica approach. For 
> example, with a 10+4 Reed-Solomon coding, we can tolerate the loss of 4 
> blocks with a storage overhead of only 40%. This makes EC a quite attractive 
> alternative for big data storage, particularly for cold data. 
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contributed packages in HDFS but was removed as of Hadoop 2.0 for 
> maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS and depends 
> on MapReduce to do encoding and decoding tasks; 2) it can only be used for 
> cold files that will not be appended to anymore; 3) the pure Java EC coding 
> implementation is extremely slow in practical use. For these reasons, it 
> might not be a good idea to just bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that 
> gets rid of any external dependencies, making it self-contained and 
> independently maintained. This design layers the EC feature on top of the 
> storage type support and is intended to be compatible with existing HDFS 
> features such as caching, snapshots, encryption, and high availability. The 
> design will also support different EC coding schemes, implementations, and 
> policies for different deployment scenarios. By utilizing advanced libraries 
> (e.g. the Intel ISA-L library), an implementation can greatly improve EC 
> encoding/decoding performance and make the EC solution even more attractive. 
> We will post the design document soon. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8615) Correct HTTP method in WebHDFS document

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592345#comment-14592345
 ] 

Hudson commented on HDFS-8615:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #221 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/221/])
HDFS-8615. Correct HTTP method in WebHDFS document. Contributed by Brahma Reddy 
Battula. (aajisaka: rev 1a169a26bcc4e4bab7697965906cb9ca7ef8329e)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Correct HTTP method in WebHDFS document
> ---
>
> Key: HDFS-8615
> URL: https://issues.apache.org/jira/browse/HDFS-8615
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.4.1
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
>  Labels: newbie
> Fix For: 2.8.0
>
> Attachments: HDFS-8615.patch
>
>
> For example, {{-X PUT}} should be removed from the following curl command.
> {code:title=WebHDFS.md}
> ### Get ACL Status
> * Submit a HTTP GET request.
> curl -i -X PUT 
> "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=GETACLSTATUS"
> {code}
> Other than this example, there are several commands which {{-X PUT}} should 
> be removed from. We should fix them all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8589) Fix unused imports in BPServiceActor and BlockReportLeaseManager

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592337#comment-14592337
 ] 

Hudson commented on HDFS-8589:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #221 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/221/])
HDFS-8589. Fix unused imports in BPServiceActor and BlockReportLeaseManager 
(cmccabe) (cmccabe: rev 45ced38f10fcb9f831218b890786aaeb7987fed4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockReportLeaseManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix unused imports in BPServiceActor and BlockReportLeaseManager
> 
>
> Key: HDFS-8589
> URL: https://issues.apache.org/jira/browse/HDFS-8589
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HDFS-8589.001.patch
>
>
> Fix unused imports in BPServiceActor and BlockReportLeaseManager



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8636) Tolerate disk errors during block pool initialization

2015-06-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8636:
--
Description: If the DN hits an IO error during block pool initialization it 
aborts since it does not respect the tolerated volumes configuration key. We 
saw this error in particular when initializing a new block pool slice, but 
would also apply to the scan to populate the replica map.  (was: If the DN hits 
an IO error during block pool initialization since it does not respect the 
tolerated volumes configuration key. We saw this error in particular when 
initializing a new block pool slice, but would also apply to the scan to 
populate the replica map.)

> Tolerate disk errors during block pool initialization
> -
>
> Key: HDFS-8636
> URL: https://issues.apache.org/jira/browse/HDFS-8636
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>
> If the DN hits an IO error during block pool initialization it aborts since 
> it does not respect the tolerated volumes configuration key. We saw this 
> error in particular when initializing a new block pool slice, but would also 
> apply to the scan to populate the replica map.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8636) Tolerate disk errors during block pool initialization

2015-06-18 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-8636:
-

 Summary: Tolerate disk errors during block pool initialization
 Key: HDFS-8636
 URL: https://issues.apache.org/jira/browse/HDFS-8636
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.7.0
Reporter: Andrew Wang
Assignee: Andrew Wang


If the DN hits an IO error during block pool initialization, it aborts, since 
it does not respect the tolerated volumes configuration key. We saw this error 
in particular when initializing a new block pool slice, but it would also 
apply to the scan that populates the replica map.
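
A minimal sketch of the tolerated-failure behavior being asked for 
(hypothetical {{Volume}} type; the real configuration key is 
{{dfs.datanode.failed.volumes.tolerated}}):

{code}
import java.io.IOException;
import java.util.List;

/** Sketch: keep initializing the remaining volumes when one hits an IO
 *  error, and abort only when the failure count exceeds the tolerance. */
class BlockPoolInitSketch {
  interface Volume {
    void addBlockPool(String bpId) throws IOException;
  }

  static void initBlockPool(List<Volume> volumes, String bpId,
      int toleratedFailures) throws IOException {
    int failed = 0;
    for (Volume v : volumes) {
      try {
        v.addBlockPool(bpId);      // initialize/scan the slice; may throw
      } catch (IOException e) {
        failed++;                  // count this volume as failed
        if (failed > toleratedFailures) {
          throw new IOException("Too many failed volumes: " + failed, e);
        }
      }
    }
  }
}
{code}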



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8626) Reserved RBW space is not released if creation of RBW File fails

2015-06-18 Thread kanaka kumar avvaru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592322#comment-14592322
 ] 

kanaka kumar avvaru commented on HDFS-8626:
---

I am trying to find a way to mock BlockPoolSlice/java.io.File to reproduce 
this in a JUnit test case, but haven't managed it successfully yet. :(
Mocking down at the File level can't be done, as the implementation looks 
different between Java 7 and 8.

If it's urgent for 2.7.1, we can go ahead; otherwise I will spend some more 
time trying to add a test case.
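
For what it's worth, the usual pattern when {{java.io.File}} itself can't be 
mocked is to fault-inject at a seam the production code already owns; a toy 
Mockito sketch (the {{FileCreator}} seam is hypothetical, not the actual 
BlockPoolSlice code):

{code}
import static org.mockito.Mockito.*;

import java.io.File;
import java.io.IOException;

public class RbwFailureInjectionSketch {
  /** Hypothetical seam: the unit under test creates files through this. */
  public interface FileCreator {
    File create(String path) throws IOException;
  }

  public static void main(String[] args) throws Exception {
    FileCreator creator = mock(FileCreator.class);
    // Simulate the disk-full failure without touching java.io.File.
    when(creator.create(anyString()))
        .thenThrow(new IOException("No space left on device"));
    try {
      creator.create("/data/rbw/blk_123");
    } catch (IOException expected) {
      System.out.println("injected: " + expected.getMessage());
    }
  }
}
{code}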

> Reserved RBW space is not released if creation of RBW File fails
> 
>
> Key: HDFS-8626
> URL: https://issues.apache.org/jira/browse/HDFS-8626
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: kanaka kumar avvaru
>Assignee: kanaka kumar avvaru
>Priority: Blocker
> Attachments: HDFS-8626-01.patch, HDFS-8626-02.patch
>
>
> The DataNode reserves disk space for a full block when creating an RBW block 
> and will release the space when the block is finalized (introduced in 
> HDFS-6898). 
> But if the RBW file creation fails, the reserved space is not released back. 
> For example, when the DataNode disk is full, this causes a no-space-left 
> {{IOException}}. Even if the disk is later cleaned, the reserved space is not 
> released until the DataNode is restarted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8626) Reserved RBW space is not released if creation of RBW File fails

2015-06-18 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592303#comment-14592303
 ] 

Arpit Agarwal commented on HDFS-8626:
-

Thanks for updating the patch.

+1 for the change. Is it straightforward to add a test case?

> Reserved RBW space is not released if creation of RBW File fails
> 
>
> Key: HDFS-8626
> URL: https://issues.apache.org/jira/browse/HDFS-8626
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: kanaka kumar avvaru
>Assignee: kanaka kumar avvaru
>Priority: Blocker
> Attachments: HDFS-8626-01.patch, HDFS-8626-02.patch
>
>
> The DataNode reserves disk space for a full block when creating an RBW block 
> and will release the space when the block is finalized (introduced in 
> HDFS-6898). 
> But if the RBW file creation fails, the reserved space is not released back. 
> For example, when the DataNode disk is full, this causes a no-space-left 
> {{IOException}}. Even if the disk is later cleaned, the reserved space is not 
> released until the DataNode is restarted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8626) Reserved RBW space is not released if creation of RBW File fails

2015-06-18 Thread kanaka kumar avvaru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592300#comment-14592300
 ] 

kanaka kumar avvaru commented on HDFS-8626:
---

 :) Jenkins gave its result while I was writing my comment. I have now updated 
the patch to address the comment.

> Reserved RBW space is not released if creation of RBW File fails
> 
>
> Key: HDFS-8626
> URL: https://issues.apache.org/jira/browse/HDFS-8626
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: kanaka kumar avvaru
>Assignee: kanaka kumar avvaru
>Priority: Blocker
> Attachments: HDFS-8626-01.patch, HDFS-8626-02.patch
>
>
> The DataNode reserves disk space for a full block when creating an RBW block 
> and will release the space when the block is finalized (introduced in 
> HDFS-6898). 
> But if the RBW file creation fails, the reserved space is not released back. 
> For example, when the DataNode disk is full, this causes a no-space-left 
> {{IOException}}. Even if the disk is later cleaned, the reserved space is not 
> released until the DataNode is restarted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8626) Reserved RBW space is not released if creation of RBW File fails

2015-06-18 Thread kanaka kumar avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kanaka kumar avvaru updated HDFS-8626:
--
Status: Patch Available  (was: Open)

> Reserved RBW space is not released if creation of RBW File fails
> 
>
> Key: HDFS-8626
> URL: https://issues.apache.org/jira/browse/HDFS-8626
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: kanaka kumar avvaru
>Assignee: kanaka kumar avvaru
>Priority: Blocker
> Attachments: HDFS-8626-01.patch, HDFS-8626-02.patch
>
>
> The DataNode reserves disk space for a full block when creating an RBW block 
> and will release the space when the block is finalized (introduced in 
> HDFS-6898). 
> But if the RBW file creation fails, the reserved space is not released back. 
> For example, when the DataNode disk is full, this causes a no-space-left 
> {{IOException}}. Even if the disk is later cleaned, the reserved space is not 
> released until the DataNode is restarted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8626) Reserved RBW space is not released if creation of RBW File fails

2015-06-18 Thread kanaka kumar avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kanaka kumar avvaru updated HDFS-8626:
--
Attachment: HDFS-8626-02.patch

> Reserved RBW space is not released if creation of RBW File fails
> 
>
> Key: HDFS-8626
> URL: https://issues.apache.org/jira/browse/HDFS-8626
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: kanaka kumar avvaru
>Assignee: kanaka kumar avvaru
>Priority: Blocker
> Attachments: HDFS-8626-01.patch, HDFS-8626-02.patch
>
>
> The DataNode reserves disk space for a full block when creating an RBW block 
> and will release the space when the block is finalized (introduced in 
> HDFS-6898). 
> But if the RBW file creation fails, the reserved space is not released back. 
> For example, when the DataNode disk is full, this causes a no-space-left 
> {{IOException}}. Even if the disk is later cleaned, the reserved space is not 
> released until the DataNode is restarted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8608) Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of Block in UnderReplicatedBlocks and PendingReplicationBlocks)

2015-06-18 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8608:

Attachment: HDFS-4366-branch-2.01.patch

{{TestFileTruncate}} passes locally. Attaching an updated patch (created by 
applying the HDFS-7444 patch) to address the failure in 
{{TestReplicationPolicy}}.

> Merge HDFS-7912 to trunk and branch-2 (track BlockInfo instead of Block in 
> UnderReplicatedBlocks and PendingReplicationBlocks)
> --
>
> Key: HDFS-8608
> URL: https://issues.apache.org/jira/browse/HDFS-8608
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 3.0.0
>
> Attachments: HDFS-4366-branch-2.00.patch, 
> HDFS-4366-branch-2.01.patch, HDFS-8608.00.patch, HDFS-8608.01.patch, 
> HDFS-8608.02.patch
>
>
> This JIRA aims to merge HDFS-7912 into trunk to minimize the final patch when 
> merging the HDFS-7285 (erasure coding) branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8626) Reserved RBW space is not released if creation of RBW File fails

2015-06-18 Thread kanaka kumar avvaru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592281#comment-14592281
 ] 

kanaka kumar avvaru commented on HDFS-8626:
---

Thanks for the review [~arpitagarwal]. I am aware of HDFS-8072, which handles 
the case where the client terminates while the DN is receiving the block. 

However, this issue happens during construction of the block receiver, as per 
the stack trace I observed. (I don't have the stack trace right now; I will 
update the description with the exception stack trace sometime tomorrow.) 

I will wait for the Jenkins result before updating the patch to address the 
review comment above.



> Reserved RBW space is not released if creation of RBW File fails
> 
>
> Key: HDFS-8626
> URL: https://issues.apache.org/jira/browse/HDFS-8626
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: kanaka kumar avvaru
>Assignee: kanaka kumar avvaru
>Priority: Blocker
> Attachments: HDFS-8626-01.patch
>
>
> The DataNode reserves disk space for a full block when creating an RBW block 
> and will release the space when the block is finalized (introduced in 
> HDFS-6898). 
> But if the RBW file creation fails, the reserved space is not released back. 
> For example, when the DataNode disk is full, this causes a no-space-left 
> {{IOException}}. Even if the disk is later cleaned, the reserved space is not 
> released until the DataNode is restarted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8626) Reserved RBW space is not released if creation of RBW File fails

2015-06-18 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-8626:
--
Status: Open  (was: Patch Available)

Canceling the patch so the comment above can be addressed.

> Reserved RBW space is not released if creation of RBW File fails
> 
>
> Key: HDFS-8626
> URL: https://issues.apache.org/jira/browse/HDFS-8626
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: kanaka kumar avvaru
>Assignee: kanaka kumar avvaru
>Priority: Blocker
> Attachments: HDFS-8626-01.patch
>
>
> The DataNode reserves disk space for a full block when creating an RBW block 
> and will release the space when the block is finalized (introduced in 
> HDFS-6898). 
> But if the RBW file creation fails, the reserved space is not released back. 
> For example, when the DataNode disk is full, this causes a no-space-left 
> {{IOException}}. Even if the disk is later cleaned, the reserved space is not 
> released until the DataNode is restarted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8626) Reserved RBW space is not released if creation of RBW File fails

2015-06-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592275#comment-14592275
 ] 

Hadoop QA commented on HDFS-8626:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 48s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 31s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 38s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 11s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 15s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 162m 30s | Tests passed in hadoop-hdfs. 
|
| | | 208m 41s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12740370/HDFS-8626-01.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 2ad6687 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11403/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11403/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11403/console |


This message was automatically generated.

> Reserved RBW space is not released if creation of RBW File fails
> 
>
> Key: HDFS-8626
> URL: https://issues.apache.org/jira/browse/HDFS-8626
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: kanaka kumar avvaru
>Assignee: kanaka kumar avvaru
>Priority: Blocker
> Attachments: HDFS-8626-01.patch
>
>
> The DataNode reserves disk space for a full block when creating an RBW block 
> and will release the space when the block is finalized (introduced in 
> HDFS-6898). 
> But if the RBW file creation fails, the reserved space is not released back. 
> For example, when the DataNode disk is full, this causes a no-space-left 
> {{IOException}}. Even if the disk is later cleaned, the reserved space is not 
> released until the DataNode is restarted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6898) DN must reserve space for a full block when an RBW block is created

2015-06-18 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592259#comment-14592259
 ] 

Arpit Agarwal commented on HDFS-6898:
-

2.6.0 is correct, sorry about the oversight.

> DN must reserve space for a full block when an RBW block is created
> ---
>
> Key: HDFS-6898
> URL: https://issues.apache.org/jira/browse/HDFS-6898
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.5.0
>Reporter: Gopal V
>Assignee: Arpit Agarwal
> Fix For: 2.6.0
>
> Attachments: HDFS-6898.01.patch, HDFS-6898.03.patch, 
> HDFS-6898.04.patch, HDFS-6898.05.patch, HDFS-6898.06.patch, HDFS-6898.07.patch
>
>
> DN will successfully create two RBW blocks on the same volume even if the 
> free space is sufficient for just one full block.
> One or both block writers may subsequently get a DiskOutOfSpace exception. 
> This can be avoided by allocating space up front.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6898) DN must reserve space for a full block when an RBW block is created

2015-06-18 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HDFS-6898:
--
Fix Version/s: 2.6.0

[~arpitagarwal], the JIRA is missing a fix-version. From what I see in 
CHANGES.txt and the git log, I am setting this to 2.6.0; please correct it if 
I am wrong.

> DN must reserve space for a full block when an RBW block is created
> ---
>
> Key: HDFS-6898
> URL: https://issues.apache.org/jira/browse/HDFS-6898
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.5.0
>Reporter: Gopal V
>Assignee: Arpit Agarwal
> Fix For: 2.6.0
>
> Attachments: HDFS-6898.01.patch, HDFS-6898.03.patch, 
> HDFS-6898.04.patch, HDFS-6898.05.patch, HDFS-6898.06.patch, HDFS-6898.07.patch
>
>
> DN will successfully create two RBW blocks on the same volume even if the 
> free space is sufficient for just one full block.
> One or both block writers may subsequently get a DiskOutOfSpace exception. 
> This can be avoided by allocating space up front.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8446) Separate safemode related operations in GetBlockLocations()

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592237#comment-14592237
 ] 

Hudson commented on HDFS-8446:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2160 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2160/])
HDFS-8446. Separate safemode related operations in GetBlockLocations(). 
Contributed by Haohui Mai. (wheat9: rev 
015535dc0ad00c2ba357afb3d1e283e56ddda0d6)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java


> Separate safemode related operations in GetBlockLocations()
> ---
>
> Key: HDFS-8446
> URL: https://issues.apache.org/jira/browse/HDFS-8446
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8446.000.patch, HDFS-8446.001.patch, 
> HDFS-8446.002.patch, HDFS-8446.003.patch
>
>
> Currently {{FSNamesystem#GetBlockLocations()}} has some special cases when 
> the NN is in SafeMode. This jira proposes to refactor the code to improve 
> readability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8589) Fix unused imports in BPServiceActor and BlockReportLeaseManager

2015-06-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14592230#comment-14592230
 ] 

Hudson commented on HDFS-8589:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2160 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2160/])
HDFS-8589. Fix unused imports in BPServiceActor and BlockReportLeaseManager 
(cmccabe) (cmccabe: rev 45ced38f10fcb9f831218b890786aaeb7987fed4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockReportLeaseManager.java


> Fix unused imports in BPServiceActor and BlockReportLeaseManager
> 
>
> Key: HDFS-8589
> URL: https://issues.apache.org/jira/browse/HDFS-8589
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HDFS-8589.001.patch
>
>
> Fix unused imports in BPServiceActor and BlockReportLeaseManager



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

