[jira] [Commented] (HDFS-8019) Erasure Coding: erasure coding chunk buffer allocation and management

2015-05-14 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545001#comment-14545001
 ] 

Kai Zheng commented on HDFS-8019:
-

In my understanding, coding buffer allocation and management would be 
coordinated by a higher layer. Some factors I can think of for now:
* What kind of buffer to use, heap or direct. This is determined by the 
configured codec and coder: if it's a Java coder, a heap buffer is good enough; 
if it's a native one, a direct buffer would be better;
* How many coding tasks are allowed to run concurrently, which might be 
determined by other configuration, or by how powerful the CPU is and how much 
memory is available. A rough sketch of such a pool appears below.

As a basic facility provided for the higher layer, I think it would be good 
enough to have some API for the upper layer to adjust these values. If in the 
future we find such items are worth making configurable, we can do it then. My 
concern with adding configuration items now is that once we add them, it is 
painful to deprecate/remove/change them later if they turn out not to be 
useful, or if they conflict with other items.
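For illustration only, a minimal sketch of such a pool, assuming a hypothetical 
{{isNativeCoder}} flag and {{maxConcurrentTasks}} setting (this is not the 
attached patch):
{code}
import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hedged sketch only; names and structure are illustrative, not HDFS-8019.
class CodingBufferPool {
  private final BlockingQueue<ByteBuffer> pool;

  CodingBufferPool(boolean isNativeCoder, int maxConcurrentTasks, int chunkSize) {
    pool = new ArrayBlockingQueue<>(maxConcurrentTasks);
    for (int i = 0; i < maxConcurrentTasks; i++) {
      // Native coders want direct buffers; pure-Java coders are fine on heap.
      pool.add(isNativeCoder ? ByteBuffer.allocateDirect(chunkSize)
                             : ByteBuffer.allocate(chunkSize));
    }
  }

  // Taking from the pool blocks when all buffers are in use, which bounds
  // the number of concurrent coding tasks.
  ByteBuffer acquire() throws InterruptedException {
    return pool.take();
  }

  void release(ByteBuffer buf) {
    buf.clear();
    pool.offer(buf);
  }
}
{code}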

> Erasure Coding: erasure coding chunk buffer allocation and management
> -
>
> Key: HDFS-8019
> URL: https://issues.apache.org/jira/browse/HDFS-8019
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Vinayakumar B
> Attachments: HDFS-8019-HDFS-7285-01.patch, 
> HDFS-8019-HDFS-7285-02.patch
>
>
> As a task of HDFS-7344, this is to come up with a chunk buffer pool that 
> allocates and manages coding chunk buffers, either on-heap or off-heap. Note 
> this assumes some DataNodes are computationally powerful and perform EC coding 
> work, so it is better to have this dedicated buffer pool and management.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8375) Add cellSize as an XAttr to ECZone

2015-05-14 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544996#comment-14544996
 ] 

Vinayakumar B commented on HDFS-8375:
-

HDFS-8408 has been raised to revisit {{ErasureCodingInfo}}, as the current 
patch is already a little big.

> Add cellSize as an XAttr to ECZone
> --
>
> Key: HDFS-8375
> URL: https://issues.apache.org/jira/browse/HDFS-8375
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8375-HDFS-7285-01.patch
>
>
> Add {{cellSize}} as an XAttr for ECZone, as discussed 
> [here|https://issues.apache.org/jira/browse/HDFS-8347?focusedCommentId=14539108&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14539108]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8408) Revisit and refactor ErasureCodingInfo

2015-05-14 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-8408:
---

 Summary: Revisit and refactor ErasureCodingInfo
 Key: HDFS-8408
 URL: https://issues.apache.org/jira/browse/HDFS-8408
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Vinayakumar B
Assignee: Vinayakumar B


As mentioned in HDFS-8375 
[here|https://issues.apache.org/jira/browse/HDFS-8375?focusedCommentId=14544618&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14544618]
 
{{ErasureCodingInfo}} needs a revisit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-4868) Clean up error message when trying to snapshot using ViewFileSystem

2015-05-14 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544986#comment-14544986
 ] 

Rakesh R commented on HDFS-4868:


[~cnauroth] shall we close this jira, as we have made the necessary changes to 
support snapshot methods in {{ViewFileSystem}}?

> Clean up error message when trying to snapshot using ViewFileSystem
> ---
>
> Key: HDFS-4868
> URL: https://issues.apache.org/jira/browse/HDFS-4868
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 3.0.0
>Reporter: Stephen Chu
>Priority: Minor
>
> Snapshots aren't supported for the ViewFileSystem. When users try to create a 
> snapshot, they'll run into a message like the following:
> {code}
> schu-mbp:presentation schu$ hadoop fs -createSnapshot /user/schu
> -createSnapshot: Fatal internal error
> java.lang.UnsupportedOperationException: ViewFileSystem doesn't support 
> createSnapshot
>   at org.apache.hadoop.fs.FileSystem.createSnapshot(FileSystem.java:2285)
>   at 
> org.apache.hadoop.fs.shell.SnapshotCommands$CreateSnapshot.processArguments(SnapshotCommands.java:87)
>   at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:305)
> {code}
> To make things more readable and avoid confusion, it would be helpful to 
> clean up the error message stack trace and simply state that ViewFileSystem 
> doesn't support createSnapshot, similar to what was done in HDFS-4846. The 
> "fatal internal error" message is a bit scary, and it would be useful to 
> remove it to avoid confusing operators.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8375) Add cellSize as an XAttr to ECZone

2015-05-14 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544983#comment-14544983
 ] 

Vinayakumar B commented on HDFS-8375:
-

bq. 1. Maybe let's take this chance to update schema to ecSchema in 
{{HdfsFileStatus}}?
Sure, I will make this change.

bq. With this change we should probably revisit the relationship between 
ErasureCodingInfo and ErasureCodingZoneInfo. If ErasureCodingInfo is to 
represent all EC-related info for a file, then it should include cell size, and 
this structure can be used to encapsulate all required info in places like 
StripedBlockUtil.
Yes, I felt the same. Right now ErasureCodingInfo and ErasureCodingZoneInfo 
carry almost the same information, except of course for {{cellSize}}. Moreover, 
{{ErasureCodingInfo}} is not really used anywhere, at least so far. IMO we can 
remove it and use {{ErasureCodingZoneInfo}}; if something more is required in 
future we can reconsider adding it. What do you say?
This work can be done in another jira, IMO.

bq. I haven't yet finished reviewing the FSNamesystem changes, will complete my 
review later today.
Sure, take your time. Meanwhile I will update the patch for the first comment.


> Add cellSize as an XAttr to ECZone
> --
>
> Key: HDFS-8375
> URL: https://issues.apache.org/jira/browse/HDFS-8375
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8375-HDFS-7285-01.patch
>
>
> Add {{cellSize}} as an XAttr for ECZone, as discussed 
> [here|https://issues.apache.org/jira/browse/HDFS-8347?focusedCommentId=14539108&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14539108]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8366) Erasure Coding: Make the timeout parameter of polling blocking queue configurable in DFSStripedOutputStream

2015-05-14 Thread Li Bo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544984#comment-14544984
 ] 

Li Bo commented on HDFS-8366:
-

Thanks, Kai, for the careful review. I will commit it later.

> Erasure Coding: Make the timeout parameter of polling blocking queue 
> configurable in DFSStripedOutputStream
> ---
>
> Key: HDFS-8366
> URL: https://issues.apache.org/jira/browse/HDFS-8366
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8366-001.patch, HDFS-8366-HDFS-7285-02.patch
>
>
> The timeout for getting striped or ended blocks in 
> {{DFSStripedOutputStream#Coordinator}} should be configurable.
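A minimal sketch of the idea; the configuration key name and default below are 
hypothetical, not from the attached patches:
{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

// Hedged sketch: poll the coordinator queue with a timeout read from
// configuration instead of a hard-coded constant. The key name and default
// are assumptions for illustration only.
class PollSketch {
  static <T> T poll(BlockingQueue<T> queue, Configuration conf)
      throws InterruptedException {
    long timeoutMs = conf.getLong("dfs.client.striped.queue.poll.timeout.ms",
        90000L);
    return queue.poll(timeoutMs, TimeUnit.MILLISECONDS); // null on timeout
  }
}
{code}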



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8366) Erasure Coding: Make the timeout parameter of polling blocking queue configurable in DFSStripedOutputStream

2015-05-14 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544982#comment-14544982
 ] 

Kai Zheng commented on HDFS-8366:
-

The patch LGTM. +1

> Erasure Coding: Make the timeout parameter of polling blocking queue 
> configurable in DFSStripedOutputStream
> ---
>
> Key: HDFS-8366
> URL: https://issues.apache.org/jira/browse/HDFS-8366
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8366-001.patch, HDFS-8366-HDFS-7285-02.patch
>
>
> The timeout for getting striped or ended blocks in 
> {{DFSStripedOutputStream#Coordinator}} should be configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8367) BlockInfoStriped can also receive schema at its creation

2015-05-14 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-8367:
-
Status: Patch Available  (was: Open)

> BlockInfoStriped can also receive schema at its creation
> 
>
> Key: HDFS-8367
> URL: https://issues.apache.org/jira/browse/HDFS-8367
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>  Labels: EC
> Attachments: HDFS-8367-FYI-v2.patch, HDFS-8367-FYI.patch, 
> HDFS-8367-HDFS-7285-01.patch, HDFS-8367-HDFS-7285-02.patch, 
> HDFS-8367.1.patch, HDFS-8467-HDFS-7285-03.patch
>
>
> {{BlockInfoStriped}} should receive the complete erasure coding information 
> as an {{ECSchema}}. This JIRA changes the constructor interface and its 
> dependencies.
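As a rough illustration of the proposed constructor shape (simplified; the 
attached patches define the real change, and the accessor names are assumed 
from {{ECSchema}}):
{code}
import org.apache.hadoop.io.erasurecode.ECSchema;

// Hedged sketch, not the attached patch: pass the whole ECSchema so the
// data/parity block counts and any future EC parameters travel together.
class StripedBlockSketch {
  private final ECSchema schema;

  StripedBlockSketch(ECSchema schema) {
    this.schema = schema;
  }

  short getTotalBlockNum() {
    return (short) (schema.getNumDataUnits() + schema.getNumParityUnits());
  }
}
{code}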



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6888) Allow selectively audit logging ops

2015-05-14 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-6888:

Labels: log  (was: BB2015-05-RFC log)

> Allow selectively audit logging ops 
> 
>
> Key: HDFS-6888
> URL: https://issues.apache.org/jira/browse/HDFS-6888
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.5.0
>Reporter: Kihwal Lee
>Assignee: Chen He
>  Labels: log
> Fix For: 2.8.0
>
> Attachments: HDFS-6888-2.patch, HDFS-6888-3.patch, HDFS-6888-4.patch, 
> HDFS-6888-5.patch, HDFS-6888-6.patch, HDFS-6888.008.patch, 
> HDFS-6888.07.patch, HDFS-6888.patch
>
>
> The audit logging of getFileInfo() was added in HDFS-3733.  Since this is 
> one of the most frequently called methods, users have noticed that the audit 
> log is now filled with these entries.  Since we now have HTTP request 
> logging, this seems unnecessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8367) BlockInfoStriped can also receive schema at its creation

2015-05-14 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-8367:
-
Attachment: HDFS-8467-HDFS-7285-03.patch

> BlockInfoStriped can also receive schema at its creation
> 
>
> Key: HDFS-8367
> URL: https://issues.apache.org/jira/browse/HDFS-8367
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>  Labels: EC
> Attachments: HDFS-8367-FYI-v2.patch, HDFS-8367-FYI.patch, 
> HDFS-8367-HDFS-7285-01.patch, HDFS-8367-HDFS-7285-02.patch, 
> HDFS-8367.1.patch, HDFS-8467-HDFS-7285-03.patch
>
>
> {{BlockInfoStriped}} should receive the complete erasure coding information 
> as an {{ECSchema}}. This JIRA changes the constructor interface and its 
> dependencies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6888) Allow selectively audit logging ops

2015-05-14 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-6888:

  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s:   (was: 2.6.0)
Release Note: Specific HDFS ops can be selectively excluded from audit 
logging via 'dfs.namenode.audit.log.debug.cmdlist' configuration.
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.
Thanks [~airbots] for the contribution.
Thanks, everyone, for the reviews and great suggestions.
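For anyone looking for a usage example, a minimal sketch based on the release 
note above (the op-name spelling is an assumption; see the committed patch for 
the authoritative list):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

// Hedged sketch: exclude the noisy getfileinfo op from audit logging via the
// new key from this jira. The op name "getfileinfo" is an assumption here.
public class AuditLogConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new HdfsConfiguration();
    conf.set("dfs.namenode.audit.log.debug.cmdlist", "getfileinfo");
    System.out.println(conf.get("dfs.namenode.audit.log.debug.cmdlist"));
  }
}
{code}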

> Allow selectively audit logging ops 
> 
>
> Key: HDFS-6888
> URL: https://issues.apache.org/jira/browse/HDFS-6888
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.5.0
>Reporter: Kihwal Lee
>Assignee: Chen He
>  Labels: BB2015-05-RFC, log
> Fix For: 2.8.0
>
> Attachments: HDFS-6888-2.patch, HDFS-6888-3.patch, HDFS-6888-4.patch, 
> HDFS-6888-5.patch, HDFS-6888-6.patch, HDFS-6888.008.patch, 
> HDFS-6888.07.patch, HDFS-6888.patch
>
>
> The audit logging of getFileInfo() was added in HDFS-3733.  Since this is 
> one of the most frequently called methods, users have noticed that the audit 
> log is now filled with these entries.  Since we now have HTTP request 
> logging, this seems unnecessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6888) Allow selectively audit logging ops

2015-05-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544958#comment-14544958
 ] 

Hudson commented on HDFS-6888:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7839 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7839/])
HDFS-6888. Allow selectively audit logging ops (Contributed by Chen He) 
(vinayakumarb: rev 3bef7c80a97709b367781180b2e11fc50653d3c8)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogAtDebug.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java


> Allow selectively audit logging ops 
> 
>
> Key: HDFS-6888
> URL: https://issues.apache.org/jira/browse/HDFS-6888
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.5.0
>Reporter: Kihwal Lee
>Assignee: Chen He
>  Labels: BB2015-05-RFC, log
> Attachments: HDFS-6888-2.patch, HDFS-6888-3.patch, HDFS-6888-4.patch, 
> HDFS-6888-5.patch, HDFS-6888-6.patch, HDFS-6888.008.patch, 
> HDFS-6888.07.patch, HDFS-6888.patch
>
>
> The audit logging of getFileInfo() was added in HDFS-3733.  Since this is 
> one of the most frequently called methods, users have noticed that the audit 
> log is now filled with these entries.  Since we now have HTTP request 
> logging, this seems unnecessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8157) Writes to RAM DISK reserve locked memory for block files

2015-05-14 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544956#comment-14544956
 ] 

Arpit Agarwal commented on HDFS-8157:
-

bq. If the intention is to release original_reservation_length - 
round_up(new_reservation_length, page_size), wouldn't it be more 
straightforward to just do that? I don't think the overhead of the extra 
subtraction is significant, and it would be a lot easier to follow.
Agreed, the extra arithmetic is a non-issue. I coded it up both ways and found 
rounding down simpler. The other way requires moving the rounding logic out or 
making two calls to the cache manager; neither approach felt cleaner.
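For readers following along, the rounding arithmetic under discussion looks 
roughly like this (helper names are illustrative, not from the HDFS-8157 
patch):
{code}
// Hedged sketch of page-size rounding; illustrative only, not HDFS-8157 code.
class PageRounding {
  static long roundUpToPage(long bytes, long pageSize) {
    return ((bytes + pageSize - 1) / pageSize) * pageSize;
  }

  static long roundDownToPage(long bytes, long pageSize) {
    return (bytes / pageSize) * pageSize;
  }
  // The review comment suggests releasing, in one step,
  //   originalReservation - roundUpToPage(newReservation, pageSize)
  // while the patch rounds down instead, keeping the rounding logic in one
  // place and needing only a single call to the cache manager.
}
{code}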

> Writes to RAM DISK reserve locked memory for block files
> 
>
> Key: HDFS-8157
> URL: https://issues.apache.org/jira/browse/HDFS-8157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8157.01.patch, HDFS-8157.02.patch, 
> HDFS-8157.03.patch, HDFS-8157.04.patch
>
>
> Per discussion on HDFS-6919, the first step is that writes to RAM disk will 
> reserve locked memory via the FsDatasetCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6888) Allow selectively audit logging ops

2015-05-14 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-6888:

Issue Type: Improvement  (was: Bug)

> Allow selectively audit logging ops 
> 
>
> Key: HDFS-6888
> URL: https://issues.apache.org/jira/browse/HDFS-6888
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.5.0
>Reporter: Kihwal Lee
>Assignee: Chen He
>  Labels: BB2015-05-RFC, log
> Attachments: HDFS-6888-2.patch, HDFS-6888-3.patch, HDFS-6888-4.patch, 
> HDFS-6888-5.patch, HDFS-6888-6.patch, HDFS-6888.008.patch, 
> HDFS-6888.07.patch, HDFS-6888.patch
>
>
> The audit logging of getFileInfo() was added in HDFS-3733.  Since this is 
> one of the most frequently called methods, users have noticed that the audit 
> log is now filled with these entries.  Since we now have HTTP request 
> logging, this seems unnecessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6888) Allow selectively audit logging ops

2015-05-14 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544945#comment-14544945
 ] 

Vinayakumar B commented on HDFS-6888:
-

The test failure is not related to this jira; it is already fixed in HDFS-8371.

+1 for latest patch.
Will commit soon.

> Allow selectively audit logging ops 
> 
>
> Key: HDFS-6888
> URL: https://issues.apache.org/jira/browse/HDFS-6888
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Kihwal Lee
>Assignee: Chen He
>  Labels: BB2015-05-RFC, log
> Attachments: HDFS-6888-2.patch, HDFS-6888-3.patch, HDFS-6888-4.patch, 
> HDFS-6888-5.patch, HDFS-6888-6.patch, HDFS-6888.008.patch, 
> HDFS-6888.07.patch, HDFS-6888.patch
>
>
> The audit logging of getFileInfo() was added in HDFS-3733.  Since this is 
> one of the most frequently called methods, users have noticed that the audit 
> log is now filled with these entries.  Since we now have HTTP request 
> logging, this seems unnecessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8345) Storage policy APIs must be exposed via the FileSystem interface

2015-05-14 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544933#comment-14544933
 ] 

Jing Zhao commented on HDFS-8345:
-

Thanks for updating the patch, Arpit! The latest patch looks pretty good to me, 
but it looks like the javac/javadoc warnings still need to be fixed. Apart from 
that, +1.

> Storage policy APIs must be exposed via the FileSystem interface
> 
>
> Key: HDFS-8345
> URL: https://issues.apache.org/jira/browse/HDFS-8345
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>  Labels: BB2015-05-TBR
> Attachments: HDFS-8345.01.patch, HDFS-8345.02.patch, 
> HDFS-8345.03.patch, HDFS-8345.04.patch, HDFS-8345.05.patch
>
>
> The storage policy APIs are not exposed via FileSystem. Since 
> DistributedFileSystem is tagged as LimitedPrivate, we should expose the APIs 
> through FileSystem for use by other applications.
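For illustration, the kind of call this enables through the abstract 
{{FileSystem}} (a sketch assuming a {{setStoragePolicy(Path, String)}} 
signature; the attached patches define the actual API):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hedged sketch: set a storage policy without casting to
// DistributedFileSystem. Signature assumed, per this jira's proposal.
public class StoragePolicySketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    fs.setStoragePolicy(new Path("/hot/data"), "HOT");
  }
}
{code}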



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8350) Remove old webhdfs.xml and other outdated documentation stuff

2015-05-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544932#comment-14544932
 ] 

Hudson commented on HDFS-8350:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7838 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7838/])
HDFS-8350. Remove old webhdfs.xml and other outdated documentation stuff. 
Contributed by Brahma Reddy Battula. (aajisaka: rev 
ee7beda6e3c640685c02185a76bed56eb85731fa)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/architecture.gif
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/site.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/webhdfs.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/FI-framework.odg
* hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/skinconf.xml
* hadoop-hdfs-project/hadoop-hdfs/src/main/docs/releasenotes.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/docs/changes/ChangesFancyStyle.css
* hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/conf/cli.xconf
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/index.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/FI-framework.gif
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/classes/CatalogManager.properties
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/favicon.ico
* hadoop-hdfs-project/hadoop-hdfs/src/main/docs/status.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/core-logo.gif
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/request-identify.jpg
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/hadoop-logo.jpg
* hadoop-hdfs-project/hadoop-hdfs/src/main/docs/changes/ChangesSimpleStyle.css
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/hadoop-logo-big.jpg
* hadoop-hdfs-project/hadoop-hdfs/src/main/docs/changes/changes2html.pl
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/tabs.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/README.txt


> Remove old webhdfs.xml and other outdated documentation stuff
> -
>
> Key: HDFS-8350
> URL: https://issues.apache.org/jira/browse/HDFS-8350
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-8350-002-branch-2.patch, HDFS-8350-002.patch, 
> HDFS-8350-002.patch, HDFS-8350-002.patch, HDFS-8350.patch
>
>
> The old-style document 
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/webhdfs.xml
>  is no longer maintained and WebHDFS.md is used instead. We can remove 
> webhdfs.xml and other outdated documentation stuff by removing the 
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8350) Remove old webhdfs.xml and other outdated documentation stuff

2015-05-14 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8350:

   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed 002 patch to trunk and 002-branch-2 patch to branch-2. Thanks 
[~brahmareddy] for creating the patch and [~aw] for the comment.

> Remove old webhdfs.xml and other outdated documentation stuff
> -
>
> Key: HDFS-8350
> URL: https://issues.apache.org/jira/browse/HDFS-8350
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HDFS-8350-002-branch-2.patch, HDFS-8350-002.patch, 
> HDFS-8350-002.patch, HDFS-8350-002.patch, HDFS-8350.patch
>
>
> The old-style document 
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/webhdfs.xml
>  is no longer maintained and WebHDFS.md is used instead. We can remove 
> webhdfs.xml and other outdated documentation stuff by removing the 
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8350) Remove old webhdfs.xml and other outdated documentation stuff

2015-05-14 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8350:

 Description: The old-style document 
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/webhdfs.xml
 is no longer maintained and WebHDFS.md is used instead. We can remove 
webhdfs.xml and other outdated documentation stuff by removing the 
hadoop-hdfs-project/hadoop-hdfs/src/main/docs directory.  (was: Old style 
document 
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documenation/content/xdocs/webhdfs.xml
 is no longer maintenanced and WebHDFS.md is used instead. We can remove 
webhdfs.xml.)
  Labels:   (was: BB2015-05-TBR)
 Summary: Remove old webhdfs.xml and other outdated documentation stuff 
 (was: Remove old webhdfs.xml)
Hadoop Flags: Reviewed
Hadoop Flags: Reviewed

> Remove old webhdfs.xml and other outdated documentation stuff
> -
>
> Key: HDFS-8350
> URL: https://issues.apache.org/jira/browse/HDFS-8350
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-8350-002-branch-2.patch, HDFS-8350-002.patch, 
> HDFS-8350-002.patch, HDFS-8350-002.patch, HDFS-8350.patch
>
>
> The old-style document 
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/webhdfs.xml
>  is no longer maintained and WebHDFS.md is used instead. We can remove 
> webhdfs.xml and other outdated documentation stuff by removing the 
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8350) Remove old webhdfs.xml

2015-05-14 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544922#comment-14544922
 ] 

Akira AJISAKA commented on HDFS-8350:
-

+1. I executed "git apply -p0 --binary /path/to/patch" and verified that the 
hadoop-hdfs-project/hadoop-hdfs/src/main/docs directory was removed from trunk 
and branch-2.

> Remove old webhdfs.xml
> --
>
> Key: HDFS-8350
> URL: https://issues.apache.org/jira/browse/HDFS-8350
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
>  Labels: BB2015-05-TBR
> Attachments: HDFS-8350-002-branch-2.patch, HDFS-8350-002.patch, 
> HDFS-8350-002.patch, HDFS-8350-002.patch, HDFS-8350.patch
>
>
> The old-style document 
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/webhdfs.xml
>  is no longer maintained and WebHDFS.md is used instead. We can remove 
> webhdfs.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8345) Storage policy APIs must be exposed via the FileSystem interface

2015-05-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544887#comment-14544887
 ] 

Hadoop QA commented on HDFS-8345:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m 11s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:red}-1{color} | javac |   7m 39s | The applied patch generated  4  
additional warning messages. |
| {color:red}-1{color} | javadoc |   9m 50s | The applied patch generated  3  
additional warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 30s | The applied patch generated  3 
new checkstyle issues (total was 286, now 287). |
| {color:red}-1{color} | whitespace |   0m  2s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   5m 31s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | common tests |  23m 15s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 167m 59s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 16s | Tests passed in 
hadoop-hdfs-client. |
| | | 235m 19s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.tools.TestHdfsConfigFields |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12732993/HDFS-8345.05.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 9a2a955 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10993/artifact/patchprocess/diffJavacWarnings.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10993/artifact/patchprocess/diffJavadocWarnings.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/10993/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10993/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10993/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10993/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10993/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10993/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10993/console |


This message was automatically generated.

> Storage policy APIs must be exposed via the FileSystem interface
> 
>
> Key: HDFS-8345
> URL: https://issues.apache.org/jira/browse/HDFS-8345
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>  Labels: BB2015-05-TBR
> Attachments: HDFS-8345.01.patch, HDFS-8345.02.patch, 
> HDFS-8345.03.patch, HDFS-8345.04.patch, HDFS-8345.05.patch
>
>
> The storage policy APIs are not exposed via FileSystem. Since 
> DistributedFileSystem is tagged as LimitedPrivate, we should expose the APIs 
> through FileSystem for use by other applications.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8407) libhdfs hdfsListDirectory() API has different behavior than documentation

2015-05-14 Thread Juan Yu (JIRA)
Juan Yu created HDFS-8407:
-

 Summary: libhdfs hdfsListDirectory() API has different behavior 
than documentation
 Key: HDFS-8407
 URL: https://issues.apache.org/jira/browse/HDFS-8407
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: Juan Yu


The documentation says it returns NULL on error, but it can also return NULL 
when the directory is empty.
{code}
/**
 * hdfsListDirectory - Get list of files/directories for a given
 * directory-path. hdfsFreeFileInfo should be called to deallocate memory.
 * @param fs The configured filesystem handle.
 * @param path The path of the directory.
 * @param numEntries Set to the number of files/directories in path.
 * @return Returns a dynamically-allocated array of hdfsFileInfo
 * objects; NULL on error.
 */

hdfsFileInfo *pathList = NULL;
...
//Figure out the number of entries in that directory
jPathListSize = (*env)->GetArrayLength(env, jPathList);
if (jPathListSize == 0) {
    // Empty directory: pathList stays NULL and ret is 0, so the caller
    // gets a NULL return even though no error occurred (errno is not set).
    ret = 0;
    goto done;
}
...
if (ret) {
    hdfsFreeFileInfo(pathList, jPathListSize);
    errno = ret;
    return NULL;
}
*numEntries = jPathListSize;
return pathList;
{code}

Either change the implementation to match the doc, or fix the doc to match the 
implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8371) Fix test failure in TestHdfsConfigFields for spanreceiver properties

2015-05-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544856#comment-14544856
 ] 

Hudson commented on HDFS-8371:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7837 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7837/])
HDFS-8371. Fix test failure in TestHdfsConfigFields for spanreceiver 
properties. Contributed by Ray Chiang. (aajisaka: rev 
cbc01ed08ea36f70afca6112ccdbf7331567070b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestHdfsConfigFields.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix test failure in TestHdfsConfigFields for spanreceiver properties
> 
>
> Key: HDFS-8371
> URL: https://issues.apache.org/jira/browse/HDFS-8371
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: newbie, test
> Fix For: 2.8.0
>
> Attachments: HDFS-8371.001.patch
>
>
> Some new properties got added to hdfs-default.xml.  Update the test to skip 
> these new properties.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8400) Fix failed TestHdfsConfigFields

2015-05-14 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HDFS-8400.
-
Resolution: Duplicate

> Fix failed TestHdfsConfigFields
> ---
>
> Key: HDFS-8400
> URL: https://issues.apache.org/jira/browse/HDFS-8400
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
> Attachments: HDFS-8400.001.patch
>
>
> TestHdfsConfigFields failed for:
> {code}
> hdfs-default.xml has 2 properties missing in  class 
> org.apache.hadoop.hdfs.DFSConfigKeys
>   dfs.htrace.spanreceiver.classes
>   dfs.client.htrace.spanreceiver.classes
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HDFS-8400) Fix failed TestHdfsConfigFields

2015-05-14 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reopened HDFS-8400:
-

> Fix failed TestHdfsConfigFields
> ---
>
> Key: HDFS-8400
> URL: https://issues.apache.org/jira/browse/HDFS-8400
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
> Attachments: HDFS-8400.001.patch
>
>
> TestHdfsConfigFields failed for:
> {code}
> hdfs-default.xml has 2 properties missing in  class 
> org.apache.hadoop.hdfs.DFSConfigKeys
>   dfs.htrace.spanreceiver.classes
>   dfs.client.htrace.spanreceiver.classes
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8400) Fix failed TestHdfsConfigFields

2015-05-14 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8400:

   Resolution: Pending Closed
Fix Version/s: (was: 3.0.0)
   Status: Resolved  (was: Patch Available)

Closing this.

> Fix failed TestHdfsConfigFields
> ---
>
> Key: HDFS-8400
> URL: https://issues.apache.org/jira/browse/HDFS-8400
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
> Attachments: HDFS-8400.001.patch
>
>
> TestHdfsConfigFields failed for:
> {code}
> hdfs-default.xml has 2 properties missing in  class 
> org.apache.hadoop.hdfs.DFSConfigKeys
>   dfs.htrace.spanreceiver.classes
>   dfs.client.htrace.spanreceiver.classes
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8320) Erasure coding: consolidate striping-related terminologies

2015-05-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544852#comment-14544852
 ] 

Hadoop QA commented on HDFS-8320:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 38s | Pre-patch HDFS-7285 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 31s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 40s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 37s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 40s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 13s | The patch appears to introduce 6 
new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 12s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 190m  5s | Tests failed in hadoop-hdfs. |
| | | 231m 34s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
|  |  Inconsistent synchronization of 
org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 89% of time  
Unsynchronized access at DFSOutputStream.java:89% of time  Unsynchronized 
access at DFSOutputStream.java:[line 146] |
|  |  Possible null pointer dereference of arr$ in 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStripedUnderConstruction.initializeBlockRecovery(long)
  Dereferenced at BlockInfoStripedUnderConstruction.java:arr$ in 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStripedUnderConstruction.initializeBlockRecovery(long)
  Dereferenced at BlockInfoStripedUnderConstruction.java:[line 194] |
|  |  Unread field:field be static?  At ErasureCodingWorker.java:[line 252] |
|  |  Should 
org.apache.hadoop.hdfs.server.datanode.erasurecode.ErasureCodingWorker$StripedReader
 be a _static_ inner class?  At ErasureCodingWorker.java:inner class?  At 
ErasureCodingWorker.java:[lines 913-915] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.createErasureCodingZone(String,
 ECSchema):in 
org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.createErasureCodingZone(String,
 ECSchema): String.getBytes()  At ErasureCodingZoneManager.java:[line 117] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.getECZoneInfo(INodesInPath):in
 
org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.getECZoneInfo(INodesInPath):
 new String(byte[])  At ErasureCodingZoneManager.java:[line 81] |
| Failed unit tests | hadoop.tracing.TestTraceAdmin |
|   | hadoop.hdfs.server.blockmanagement.TestBlockInfo |
|   | hadoop.hdfs.TestWriteReadStripedFile |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.TestRecoverStripedFile |
|   | hadoop.hdfs.server.datanode.TestIncrementalBlockReports |
|   | 
hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerWithStripedBlocks |
|   | hadoop.hdfs.server.datanode.TestTriggerBlockReport |
|   | hadoop.hdfs.server.namenode.TestAuditLogs |
|   | hadoop.hdfs.server.datanode.TestBlockReplacement |
|   | hadoop.hdfs.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
| Timed out tests | org.apache.hadoop.hdfs.TestDatanodeDeath |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12732984/HDFS-8320-HDFS-7285.00.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / a35936d |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10992/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10992/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10992/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10992/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed 

[jira] [Updated] (HDFS-8371) Fix test failure in TestHdfsConfigFields for spanreceiver properties

2015-05-14 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8371:

   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~rchiang] for the patch and 
thanks [~iwasakims] for the review.

> Fix test failure in TestHdfsConfigFields for spanreceiver properties
> 
>
> Key: HDFS-8371
> URL: https://issues.apache.org/jira/browse/HDFS-8371
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: newbie, test
> Fix For: 2.8.0
>
> Attachments: HDFS-8371.001.patch
>
>
> Some new properties got added to hdfs-default.xml.  Update the test to skip 
> these new properties.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7687) Change fsck to support EC files

2015-05-14 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544842#comment-14544842
 ] 

Takanobu Asanuma commented on HDFS-7687:


OK, I agree with you.

> Change fsck to support EC files
> ---
>
> Key: HDFS-7687
> URL: https://issues.apache.org/jira/browse/HDFS-7687
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Takanobu Asanuma
> Attachments: HDFS-7687.1.patch, HDFS-7687.2.patch, HDFS-7687.3.patch
>
>
> We need to change fsck so that it can detect "under replicated" and corrupted 
> EC files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8371) Fix test failure in TestHdfsConfigFields for spanreceiver properties

2015-05-14 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544836#comment-14544836
 ] 

Ray Chiang commented on HDFS-8371:
--

Thanks for the review, Masatake and Akira.

> Fix test failure in TestHdfsConfigFields for spanreceiver properties
> 
>
> Key: HDFS-8371
> URL: https://issues.apache.org/jira/browse/HDFS-8371
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: newbie, test
> Attachments: HDFS-8371.001.patch
>
>
> Some new properties got added to hdfs-default.xml.  Update the test to skip 
> these new properties.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8406) Lease recovery continually failed

2015-05-14 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544830#comment-14544830
 ] 

Sean Busbey commented on HDFS-8406:
---

I hit this on an HBase cluster a few weeks ago but was never able to track down 
what did it. At the time I presumed I had messed up the HDFS installation and 
just filed HBASE-13540 and HBASE-13602 to make it easier to work around.

I might be able to track down some old logs from HBase hitting it if it'll help.

> Lease recovery continually failed
> -
>
> Key: HDFS-8406
> URL: https://issues.apache.org/jira/browse/HDFS-8406
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Keith Turner
>
> While testing Accumulo on a cluster and killing processes, I ran into a 
> situation where the lease on an accumulo write ahead log in HDFS could not be 
> recovered.   Even restarting HDFS and Accumulo would not fix the problem.
> The following message was seen in an Accumulo tablet server log immediately 
> before the tablet server was killed.
> {noformat}
> 2015-05-14 17:12:37,466 [hdfs.DFSClient] WARN : DFSOutputStream 
> ResponseProcessor exception  for block 
> BP-802741494-10.1.5.6-1431557089849:blk_1073932823_192060
> java.io.IOException: Bad response ERROR for block 
> BP-802741494-10.1.5.6-1431557089849:blk_1073932823_192060 from datanode 
> 10.1.5.9:50010
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:897)
> 2015-05-14 17:12:37,466 [hdfs.DFSClient] WARN : Error Recovery for block 
> BP-802741494-10.1.5.6-1431557089849:blk_1073932823_192060 in pipeline 
> 10.1.5.55:50010, 10.1.5.9:5
> {noformat}
> Before recovering data from a write ahead log, the Accumulo master attempts 
> to recover the lease.   This repeatedly failed with messages like the 
> following.
> {noformat}
> 2015-05-14 17:14:54,301 [recovery.HadoopLogCloser] WARN : Error recovering 
> lease on 
> hdfs://10.1.5.6:1/accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException):
>  failed to create file 
> /accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2 for 
> DFSClient_NONMAPREDUCE_950713214_16 for client 10.1.5.158 because 
> pendingCreates is non-null but no leases found.
> {noformat}
> Below is some info from the NN logs for the problematic file.
> {noformat}
> [ec2-user@leader2 logs]$ grep 3a731759-3594-4535-8086-245 
> hadoop-ec2-user-namenode-leader2.log 
> 2015-05-14 17:10:46,299 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocateBlock: 
> /accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2. 
> BP-802741494-10.1.5.6-1431557089849 
> blk_1073932823_192060{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-ffe07d7d-0e68-45b8-b3d5-c976f1716481:NORMAL:10.1.5.55:50010|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-6efec702-3f1f-4ec0-a31f-de947e7e6097:NORMAL:10.1.5.9:50010|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-5e27df17-abf8-47df-b4bc-c38d0cd426ea:NORMAL:10.1.5.45:50010|RBW]]}
> 2015-05-14 17:10:46,628 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> fsync: /accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2 for 
> DFSClient_NONMAPREDUCE_-1128465883_16
> 2015-05-14 17:14:49,288 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: recoverLease: [Lease.  
> Holder: DFSClient_NONMAPREDUCE_-1128465883_16, pendingcreates: 1], 
> src=/accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2 from 
> client DFSClient_NONMAPREDUCE_-1128465883_16
> 2015-05-14 17:14:49,288 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
> Holder: DFSClient_NONMAPREDUCE_-1128465883_16, pendingcreates: 1], 
> src=/accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2
> 2015-05-14 17:14:49,289 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.internalReleaseLease: File 
> /accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2 has not been 
> closed. Lease recovery is in progress. RecoveryId = 192257 for block 
> blk_1073932823_192060{blockUCState=UNDER_RECOVERY, primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[DISK]DS-ffe07d7d-0e68-45b8-b3d5-c976f1716481:NORMAL:10.1.5.55:50010|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-6efec702-3f1f-4ec0-a31f-de947e7e6097:NORMAL:10.1.5.9:50010|RBW],
>  
> ReplicaUnderConstruction[[DISK]DS-5e27df17-abf8-47df-b4bc-c38d0cd426ea:NORMAL:10.1.5.45:50010|RBW]]}
> java.lang.IllegalStateException: Failed to finalize INodeFile 
> 3a731759-3594-4535-8086-245eed7cd4c2 since blocks[0] is non-complete, where 
> blocks=[blk_1073932823_192257{blockUCState=COMMITTED, primaryNodeIndex=2, 
> replicas=[ReplicaUnderConstruction[[D

[jira] [Commented] (HDFS-7687) Change fsck to support EC files

2015-05-14 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544829#comment-14544829
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7687:
---

For the variables, let's keep using replication terminology for the moment. I 
think it would be more confusing if we changed them to abstract names.

> Change fsck to support EC files
> ---
>
> Key: HDFS-7687
> URL: https://issues.apache.org/jira/browse/HDFS-7687
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Takanobu Asanuma
> Attachments: HDFS-7687.1.patch, HDFS-7687.2.patch, HDFS-7687.3.patch
>
>
> We need to change fsck so that it can detect "under replicated" and corrupted 
> EC files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8210) Ozone: Implement storage container manager

2015-05-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544819#comment-14544819
 ] 

Hadoop QA commented on HDFS-8210:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m  6s | Pre-patch HDFS-7240 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 48s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 53s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 15s | The applied patch generated  
45 new checkstyle issues (total was 306, now 347). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  6s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 17s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 166m 46s | Tests failed in hadoop-hdfs. |
| | | 210m 45s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.TestFileCreation |
|   | hadoop.hdfs.TestLeaseRecovery |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12732986/HDFS-8210-HDFS-7240.2.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7240 / 15ccd96 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/10991/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10991/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10991/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10991/console |


This message was automatically generated.

> Ozone: Implement storage container manager 
> ---
>
> Key: HDFS-8210
> URL: https://issues.apache.org/jira/browse/HDFS-8210
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HDFS-8210-HDFS-7240.1.patch, HDFS-8210-HDFS-7240.2.patch
>
>
> The storage container manager collects datanode heartbeats, manages 
> replication, and exposes an API to look up containers. This jira implements 
> the storage container manager by re-using the block manager implementation 
> from the namenode. It provides an initial implementation that works with 
> datanodes; additional protocols will be added in subsequent jiras.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8367) BlockInfoStriped can also receive schema at its creation

2015-05-14 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-8367:

Status: Open  (was: Patch Available)

> BlockInfoStriped can also receive schema at its creation
> 
>
> Key: HDFS-8367
> URL: https://issues.apache.org/jira/browse/HDFS-8367
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>  Labels: EC
> Attachments: HDFS-8367-FYI-v2.patch, HDFS-8367-FYI.patch, 
> HDFS-8367-HDFS-7285-01.patch, HDFS-8367-HDFS-7285-02.patch, HDFS-8367.1.patch
>
>
> {{BlockInfoStriped}} should receive the complete erasure coding information 
> as an {{ECSchema}}. This JIRA changes the constructor interface and its 
> dependencies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8367) BlockInfoStriped can also receive schema at its creation

2015-05-14 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-8367:

Attachment: HDFS-8367-FYI-v2.patch

Thanks Kai for the update. It looks much better now. I made further changes for 
your reference (not tested). Hope it helps.

> BlockInfoStriped can also receive schema at its creation
> 
>
> Key: HDFS-8367
> URL: https://issues.apache.org/jira/browse/HDFS-8367
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>  Labels: EC
> Attachments: HDFS-8367-FYI-v2.patch, HDFS-8367-FYI.patch, 
> HDFS-8367-HDFS-7285-01.patch, HDFS-8367-HDFS-7285-02.patch, HDFS-8367.1.patch
>
>
> {{BlockInfoStriped}} should receive the complete erasure coding information 
> as an {{ECSchema}}. This JIRA changes the constructor interface and its 
> dependencies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8400) Fix failed TestHdfsConfigFields

2015-05-14 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544809#comment-14544809
 ] 

Liu Shaohui commented on HDFS-8400:
---

OK. Thanks [~ajisakaa]

> Fix failed TestHdfsConfigFields
> ---
>
> Key: HDFS-8400
> URL: https://issues.apache.org/jira/browse/HDFS-8400
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
> Fix For: 3.0.0
>
> Attachments: HDFS-8400.001.patch
>
>
> TestHdfsConfigFields failed for:
> {code}
> hdfs-default.xml has 2 properties missing in  class 
> org.apache.hadoop.hdfs.DFSConfigKeys
>   dfs.htrace.spanreceiver.classes
>   dfs.client.htrace.spanreceiver.classes
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8371) Fix test failure in TestHdfsConfigFields for spanreceiver properties

2015-05-14 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544808#comment-14544808
 ] 

Akira AJISAKA commented on HDFS-8371:
-

+1, committing this.

> Fix test failure in TestHdfsConfigFields for spanreceiver properties
> 
>
> Key: HDFS-8371
> URL: https://issues.apache.org/jira/browse/HDFS-8371
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: newbie, test
> Attachments: HDFS-8371.001.patch
>
>
> Some new properties got added to hdfs-default.xml.  Update the test to skip 
> these new properties.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8400) Fix failed TestHdfsConfigFields

2015-05-14 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544807#comment-14544807
 ] 

Akira AJISAKA commented on HDFS-8400:
-

By the way, this issue duplicates HDFS-8371 and the patch attached there looks 
good to me. I'll commit the patch to fix this issue.

> Fix failed TestHdfsConfigFields
> ---
>
> Key: HDFS-8400
> URL: https://issues.apache.org/jira/browse/HDFS-8400
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
> Fix For: 3.0.0
>
> Attachments: HDFS-8400.001.patch
>
>
> TestHdfsConfigFields failed for:
> {code}
> hdfs-default.xml has 2 properties missing in  class 
> org.apache.hadoop.hdfs.DFSConfigKeys
>   dfs.htrace.spanreceiver.classes
>   dfs.client.htrace.spanreceiver.classes
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8400) Fix failed TestHdfsConfigFields

2015-05-14 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544806#comment-14544806
 ] 

Akira AJISAKA commented on HDFS-8400:
-

Thanks [~liushaohui] for pinging me.
{code}
+xmlPrefixToSkipCompare.add(DFSConfigKeys.DFS_SERVER_HTRACE_PREFIX);
+xmlPrefixToSkipCompare.add(DFSConfigKeys.DFS_CLIENT_HTRACE_PREFIX);
{code}
I'd like to add the full parameter names where possible, so that the test 
still fails for a wrong parameter that happens to share the same prefix.
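
For reference, a sketch of what skipping the full names might look like, 
assuming the {{xmlPropsToSkipCompare}} set from {{TestConfigurationFieldsBase}} 
(property names copied from the failure above):
{code}
// Skip the two full property names rather than their common prefixes,
// so an unrelated key sharing the prefix would still fail the test.
xmlPropsToSkipCompare.add("dfs.htrace.spanreceiver.classes");
xmlPropsToSkipCompare.add("dfs.client.htrace.spanreceiver.classes");
{code}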

> Fix failed TestHdfsConfigFields
> ---
>
> Key: HDFS-8400
> URL: https://issues.apache.org/jira/browse/HDFS-8400
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
> Fix For: 3.0.0
>
> Attachments: HDFS-8400.001.patch
>
>
> TestHdfsConfigFields failed for:
> {code}
> hdfs-default.xml has 2 properties missing in  class 
> org.apache.hadoop.hdfs.DFSConfigKeys
>   dfs.htrace.spanreceiver.classes
>   dfs.client.htrace.spanreceiver.classes
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7687) Change fsck to support EC files

2015-05-14 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544801#comment-14544801
 ] 

Takanobu Asanuma commented on HDFS-7687:


Thanks for your detailed review! I will recreate a patch.

# {quote}
Question: Do we need {{ErasureCodingResult}}s when we support multiple 
{{ECSchema}}s?
{quote}
Expected EC blocks are calculated per file in this patch, so I think one 
{{ErasureCodingResult}} can handle multiple {{ECSchema}}s. It will also be 
able to handle {{EC+Contiguous}}.
# As I mentioned in the last comment, I only used replication terminology for 
the variable names. Is that acceptable, or should I define new variables for EC?

And thanks for creating a new JIRA and assigning it to me!

> Change fsck to support EC files
> ---
>
> Key: HDFS-7687
> URL: https://issues.apache.org/jira/browse/HDFS-7687
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Takanobu Asanuma
> Attachments: HDFS-7687.1.patch, HDFS-7687.2.patch, HDFS-7687.3.patch
>
>
> We need to change fsck so that it can detect "under replicated" and corrupted 
> EC files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8350) Remove old webhdfs.xml

2015-05-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544795#comment-14544795
 ] 

Hadoop QA commented on HDFS-8350:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12733038/HDFS-8350-002.patch |
| Optional Tests | site |
| git revision | trunk / 9a2a955 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10994/console |


This message was automatically generated.

> Remove old webhdfs.xml
> --
>
> Key: HDFS-8350
> URL: https://issues.apache.org/jira/browse/HDFS-8350
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
>  Labels: BB2015-05-TBR
> Attachments: HDFS-8350-002-branch-2.patch, HDFS-8350-002.patch, 
> HDFS-8350-002.patch, HDFS-8350-002.patch, HDFS-8350.patch
>
>
> Old style document 
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documenation/content/xdocs/webhdfs.xml
>  is no longer maintained; WebHDFS.md is used instead. We can remove 
> webhdfs.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8157) Writes to RAM DISK reserve locked memory for block files

2015-05-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544794#comment-14544794
 ] 

Hadoop QA commented on HDFS-8157:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 36s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 9 new or modified test files. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 33s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 14s | The applied patch generated  2 
new checkstyle issues (total was 275, now 273). |
| {color:green}+1{color} | whitespace |   0m  6s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  7s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 168m  6s | Tests failed in hadoop-hdfs. |
| | | 210m 57s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.tools.TestHdfsConfigFields |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12732968/HDFS-8157.04.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 09fe16f |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/10990/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10990/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10990/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10990/console |


This message was automatically generated.

> Writes to RAM DISK reserve locked memory for block files
> 
>
> Key: HDFS-8157
> URL: https://issues.apache.org/jira/browse/HDFS-8157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8157.01.patch, HDFS-8157.02.patch, 
> HDFS-8157.03.patch, HDFS-8157.04.patch
>
>
> Per discussion on HDFS-6919, the first step is that writes to RAM disk will 
> reserve locked memory via the FsDatasetCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8400) Fix failed TestHdfsConfigFields

2015-05-14 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544791#comment-14544791
 ] 

Liu Shaohui commented on HDFS-8400:
---

[~cmccabe] [~ajisakaa] [~yliu]
Could you help push this forward? The failed test is blocking many issues.


> Fix failed TestHdfsConfigFields
> ---
>
> Key: HDFS-8400
> URL: https://issues.apache.org/jira/browse/HDFS-8400
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
> Fix For: 3.0.0
>
> Attachments: HDFS-8400.001.patch
>
>
> TestHdfsConfigFields failed for:
> {code}
> hdfs-default.xml has 2 properties missing in  class 
> org.apache.hadoop.hdfs.DFSConfigKeys
>   dfs.htrace.spanreceiver.classes
>   dfs.client.htrace.spanreceiver.classes
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8400) Fix failed TestHdfsConfigFields

2015-05-14 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544788#comment-14544788
 ] 

Liu Shaohui commented on HDFS-8400:
---

The failed test is unrelated to this patch.

> Fix failed TestHdfsConfigFields
> ---
>
> Key: HDFS-8400
> URL: https://issues.apache.org/jira/browse/HDFS-8400
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
> Fix For: 3.0.0
>
> Attachments: HDFS-8400.001.patch
>
>
> TestHdfsConfigFields failed for:
> {code}
> hdfs-default.xml has 2 properties missing in  class 
> org.apache.hadoop.hdfs.DFSConfigKeys
>   dfs.htrace.spanreceiver.classes
>   dfs.client.htrace.spanreceiver.classes
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8131) Implement a space balanced block placement policy

2015-05-14 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544787#comment-14544787
 ] 

Liu Shaohui commented on HDFS-8131:
---

The failed tests are unrelated to this patch.

> Implement a space balanced block placement policy
> -
>
> Key: HDFS-8131
> URL: https://issues.apache.org/jira/browse/HDFS-8131
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Minor
>  Labels: BlockPlacementPolicy
> Fix For: 3.0.0
>
> Attachments: HDFS-8131-v1.diff, HDFS-8131-v2.diff, HDFS-8131-v3.diff, 
> HDFS-8131.004.patch, balanced.png
>
>
> The default block placement policy chooses datanodes for new blocks 
> randomly, which results in unbalanced space usage among datanodes after a 
> cluster expansion. The old datanodes stay at a high used percentage of space 
> while newly added ones stay low.
> Though we can use the external balancer tool to even out the space usage, it 
> costs extra network IO and the balancing speed is not easy to control.
> An easy solution is to implement a balanced block placement policy which 
> chooses datanodes with a low used percentage for new blocks with slightly 
> higher probability. Before long, the used percentage across datanodes will 
> trend toward balance.
> Suggestions and discussions are welcome. Thanks
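
As an illustration of the idea (a self-contained sketch, not taken from the 
attached patches), a weighted random choice over used-space percentages might 
look like:
{code}
// Pick index i with probability proportional to its free-space ratio
// (1 - usedPercent[i]/100), so emptier datanodes are chosen a little
// more often than fuller ones.
static int chooseWeighted(double[] usedPercent, java.util.Random r) {
  double total = 0;
  for (double u : usedPercent) {
    total += 1.0 - u / 100.0;
  }
  double pick = r.nextDouble() * total;
  for (int i = 0; i < usedPercent.length; i++) {
    pick -= 1.0 - usedPercent[i] / 100.0;
    if (pick <= 0) {
      return i;
    }
  }
  return usedPercent.length - 1; // guard against rounding error
}
{code}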



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8350) Remove old webhdfs.xml

2015-05-14 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8350:

Attachment: HDFS-8350-002.patch

Attaching the same patch to trigger the Jenkins build.

> Remove old webhdfs.xml
> --
>
> Key: HDFS-8350
> URL: https://issues.apache.org/jira/browse/HDFS-8350
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
>  Labels: BB2015-05-TBR
> Attachments: HDFS-8350-002-branch-2.patch, HDFS-8350-002.patch, 
> HDFS-8350-002.patch, HDFS-8350-002.patch, HDFS-8350.patch
>
>
> Old style document 
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documenation/content/xdocs/webhdfs.xml
>  is no longer maintained; WebHDFS.md is used instead. We can remove 
> webhdfs.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8157) Writes to RAM DISK reserve locked memory for block files

2015-05-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544742#comment-14544742
 ] 

Colin Patrick McCabe commented on HDFS-8157:


Thanks for the clarification.  If the intention is to release 
{{original_reservation_length - round_up(new_reservation_length, page_size)}}, 
wouldn't it be more straightforward to just do that?  I don't think the 
overhead of the extra subtraction is significant, and it would be a lot easier 
to follow.  We would also be able to skip writing all the {{roundDown}} code.
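
A minimal sketch of that suggestion (hypothetical variable and method names, 
not the attached patch):
{code}
// Release original_reservation_length - round_up(new_length, page_size)
// directly, instead of rounding the released amount down.
static long roundUp(long len, long pageSize) {
  return ((len + pageSize - 1) / pageSize) * pageSize;
}

long toRelease = originalReservation - roundUp(newLength, pageSize);
if (toRelease > 0) {
  cacheManager.release(toRelease); // hypothetical release hook
}
{code}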

> Writes to RAM DISK reserve locked memory for block files
> 
>
> Key: HDFS-8157
> URL: https://issues.apache.org/jira/browse/HDFS-8157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8157.01.patch, HDFS-8157.02.patch, 
> HDFS-8157.03.patch, HDFS-8157.04.patch
>
>
> Per discussion on HDFS-6919, the first step is that writes to RAM disk will 
> reserve locked memory via the FsDatasetCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7240) Object store in HDFS

2015-05-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544725#comment-14544725
 ] 

Colin Patrick McCabe commented on HDFS-7240:


Thanks, [~jnp].  June 3rd at 1pm to 3pm sounds good to me.  I believe [~atm] 
and [~zhz] will be able to attend as well.

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8402) Fsck exit codes are not reliable

2015-05-14 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-8402:
---
Hadoop Flags: Incompatible change

Marking this as incompatible as per compatibility guidelines, as it changes the 
output of fsck.

> Fsck exit codes are not reliable
> 
>
> Key: HDFS-8402
> URL: https://issues.apache.org/jira/browse/HDFS-8402
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-8402.patch
>
>
> HDFS-6663 added the ability to check specific blocks.  The exit code is 
> non-deterministically based on the state (corrupt, healthy, etc) of the last 
> displayed block's last storage location - instead of whether any of the 
> checked blocks' storages are corrupt.  Blocks with decommissioning or 
> decommissioned nodes should not be flagged as an error.
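
For illustration (a hypothetical helper, not the attached patch), the exit 
code should reflect whether any checked block is corrupt, rather than the 
state of the last block examined:
{code}
// Non-zero iff at least one of the checked blocks is corrupt.
static int exitCode(boolean[] blockIsCorrupt) {
  for (boolean corrupt : blockIsCorrupt) {
    if (corrupt) {
      return 1;
    }
  }
  return 0;
}
{code}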



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8402) Fsck exit codes are not reliable

2015-05-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544666#comment-14544666
 ] 

Hadoop QA commented on HDFS-8402:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 31s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 33s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 15s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  3s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 16s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 166m 52s | Tests failed in hadoop-hdfs. |
| | | 209m 31s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.tools.TestHdfsConfigFields |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12732951/HDFS-8402.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 15ccd96 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10987/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10987/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10987/console |


This message was automatically generated.

> Fsck exit codes are not reliable
> 
>
> Key: HDFS-8402
> URL: https://issues.apache.org/jira/browse/HDFS-8402
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-8402.patch
>
>
> HDFS-6663 added the ability to check specific blocks.  The exit code is 
> non-deterministically based on the state (corrupt, healthy, etc) of the last 
> displayed block's last storage location - instead of whether any of the 
> checked blocks' storages are corrupt.  Blocks with decommissioning or 
> decommissioned nodes should not be flagged as an error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-05-14 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544653#comment-14544653
 ] 

Aaron T. Myers commented on HDFS-6440:
--

bq. Ah, OK. Yes, that second seed will clearly not be used and is definitely 
misleading. Sorry for being dense :-/ I was just looking at the usage of the 
Random, not the seed!

No sweat. I figured we were talking past each other a bit.

bq. I'm thinking to just pull the better log message up to the static 
initialization and remove those two lines (4-5).

I agree, this seems like the right move to me. Just have a single seed for the 
whole test class. It's possible that we may at some point encounter inter-test 
dependencies, and if so it'll be nice that there's only a single seed used 
across all the tests, instead of having to manually set several seeds to 
reproduce the same sequence. The fact that we already clearly log which NN is 
becoming active should be sufficient for reproducing individual test failures 
if one wants to do that.

Thanks, Jesse.
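
A minimal sketch of the single class-wide seed being discussed (hypothetical 
test-class fields, not the committed patch):
{code}
// Pick one seed for the whole test class and log it once, so any
// failure can be reproduced by plugging the logged value back in.
private static final long SEED = new java.util.Random().nextLong();
private static final java.util.Random RANDOM = new java.util.Random(SEED);
static {
  LOG.info("Using test seed: " + SEED); // assumes an existing LOG field
}
{code}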

> Support more than 2 NameNodes
> -
>
> Key: HDFS-6440
> URL: https://issues.apache.org/jira/browse/HDFS-6440
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: auto-failover, ha, namenode
>Affects Versions: 2.4.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 3.0.0
>
> Attachments: Multiple-Standby-NameNodes_V1.pdf, 
> hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
> hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, 
> hdfs-multiple-snn-trunk-v0.patch
>
>
> Most of the work is already done to support more than 2 NameNodes (one 
> active, one standby). This would be the last bit to support running multiple 
> _standby_ NameNodes; one of the standbys should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some 
> complexity around managing the checkpointing, and updating a whole lot of 
> tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8394) Move getAdditionalBlock() and related functionalities into a separate class

2015-05-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544654#comment-14544654
 ] 

Hadoop QA commented on HDFS-8394:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m  5s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 44s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 53s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 17s | The applied patch generated  
25 new checkstyle issues (total was 476, now 485). |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  5s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 19s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 167m  8s | Tests failed in hadoop-hdfs. |
| | | 211m  5s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.server.mover.TestStorageMover |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12732919/HDFS-8394.004.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 15ccd96 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/10986/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10986/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10986/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10986/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10986/console |


This message was automatically generated.

> Move getAdditionalBlock() and related functionalities into a separate class
> ---
>
> Key: HDFS-8394
> URL: https://issues.apache.org/jira/browse/HDFS-8394
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8394.000.patch, HDFS-8394.001.patch, 
> HDFS-8394.002.patch, HDFS-8394.003.patch, HDFS-8394.004.patch
>
>
> This jira proposes to move the implementation of getAdditionalBlock() and 
> related functionalities to a separate class to open up the possibilities of 
> further refactoring and improvements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8397) Refactor the error handling code in DataStreamer

2015-05-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544648#comment-14544648
 ] 

Hadoop QA commented on HDFS-8397:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 40s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 32s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 43s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 13s | The applied patch generated  3 
new checkstyle issues (total was 93, now 84). |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  8s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 16s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 183m 44s | Tests failed in hadoop-hdfs. |
| | | 226m 47s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.tools.TestHdfsConfigFields |
| Timed out tests | org.apache.hadoop.hdfs.server.namenode.TestBackupNode |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12732948/h8397_20150514.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 15ccd96 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/10985/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10985/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10985/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10985/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10985/console |


This message was automatically generated.

> Refactor the error handling code in DataStreamer
> 
>
> Key: HDFS-8397
> URL: https://issues.apache.org/jira/browse/HDFS-8397
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h8397_20150513.patch, h8397_20150514.patch
>
>
> DataStreamer handles (1) bad datanode, (2) restarting datanode and (3) 
> datanode replacement and keeps various state and indexes.  This issue is to 
> clean up the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-05-14 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544623#comment-14544623
 ] 

Jesse Yates commented on HDFS-6440:
---

Ah, OK. Yes, that second seed will clearly not be used and is definitely 
misleading. Sorry for being dense :-/ I was just looking at the usage of the 
Random, not the seed!

I'm thinking to just pull the better log message up to the static 
initialization and remove those two lines (4-5).

I _think_ the original idea was to make it easier to reproduce an individual 
test failure, since each cluster in the methods is managed independently... 
but I don't know if it really matters at this point; it just sucks to have to 
rerun all the tests to debug a single test. Thoughts?

> Support more than 2 NameNodes
> -
>
> Key: HDFS-6440
> URL: https://issues.apache.org/jira/browse/HDFS-6440
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: auto-failover, ha, namenode
>Affects Versions: 2.4.0
>Reporter: Jesse Yates
>Assignee: Jesse Yates
> Fix For: 3.0.0
>
> Attachments: Multiple-Standby-NameNodes_V1.pdf, 
> hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
> hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, 
> hdfs-multiple-snn-trunk-v0.patch
>
>
> Most of the work is already done to support more than 2 NameNodes (one 
> active, one standby). This would be the last bit to support running multiple 
> _standby_ NameNodes; one of the standbys should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some 
> complexity around managing the checkpointing, and updating a whole lot of 
> tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8375) Add cellSize as an XAttr to ECZone

2015-05-14 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544618#comment-14544618
 ] 

Zhe Zhang commented on HDFS-8375:
-

Nice work Vinay! The patch LGTM overall. A few nits:
# Maybe let's take this chance to update {{schema}} to {{ecSchema}} in 
{{HdfsFileStatus}}?
# With this change we should probably revisit the relationship between 
{{ErasureCodingInfo}} and {{ErasureCodingZoneInfo}}. If {{ErasureCodingInfo}} 
is to represent all EC-related info for a file, then it should include cell 
size, and this structure can be used to encapsulate all required info in places 
like {{StripedBlockUtil}} (a rough sketch follows below).

I haven't yet finished reviewing the {{FSNamesystem}} changes; I'll complete 
my review later today.
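
A rough sketch of point 2 (not a committed API) where {{ErasureCodingInfo}} 
carries the cell size alongside the schema:
{code}
public class ErasureCodingInfo {
  private final String src;
  private final ECSchema schema;
  private final int cellSize; // carried together with the schema

  public ErasureCodingInfo(String src, ECSchema schema, int cellSize) {
    this.src = src;
    this.schema = schema;
    this.cellSize = cellSize;
  }
}
{code}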

> Add cellSize as an XAttr to ECZone
> --
>
> Key: HDFS-8375
> URL: https://issues.apache.org/jira/browse/HDFS-8375
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8375-HDFS-7285-01.patch
>
>
> Add {{cellSize}} as an Xattr for ECZone. as discussed 
> [here|https://issues.apache.org/jira/browse/HDFS-8347?focusedCommentId=14539108&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14539108]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8404) pending block replication can get stuck using older genstamp

2015-05-14 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544580#comment-14544580
 ] 

Kihwal Lee commented on HDFS-8404:
--

{{getBlockReplication()}} got renamed recently.

> pending block replication can get stuck using older genstamp
> 
>
> Key: HDFS-8404
> URL: https://issues.apache.org/jira/browse/HDFS-8404
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Attachments: HDFS-8404-v0.patch
>
>
> If an under-replicated block gets into the pending-replication list, but 
> later the  genstamp of that block ends up being newer than the one originally 
> submitted for replication, the block will fail replication until the NN is 
> restarted. 
> It will be safer if processPendingReplications()  gets up-to-date blockinfo 
> before resubmitting replication work.
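
A sketch of the safer behavior described above (hypothetical field and method 
names around it, not the attached patch):
{code}
// Re-resolve the timed-out block in the blocks map before re-queueing,
// so the stored copy's current generation stamp is what gets resubmitted.
BlockInfo current = blocksMap.getStoredBlock(timedOutBlock);
if (current != null) {
  neededReplications.add(current, numLive, numDecommissioned, numExpected);
}
{code}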



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8406) Lease recovery continually failed

2015-05-14 Thread Keith Turner (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544565#comment-14544565
 ] 

Keith Turner commented on HDFS-8406:


Here is the full stack trace from the Accumulo master log.   

{noformat}
2015-05-14 17:14:54,301 [recovery.HadoopLogCloser] WARN : Error recovering 
lease on 
hdfs://10.1.5.6:1/accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException):
 failed to create file 
/accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2 for 
DFSClient_NONMAPREDUCE_950713214_16 for client 10.1.5.158 because 
pendingCreates is non-null but no leases found.
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:3001)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLease(FSNamesystem.java:2955)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.recoverLease(NameNodeRpcServer.java:591)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.recoverLease(ClientNamenodeProtocolServerSideTranslatorPB.java:641)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

at org.apache.hadoop.ipc.Client.call(Client.java:1468)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy15.recoverLease(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.recoverLease(ClientNamenodeProtocolTranslatorPB.java:584)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy16.recoverLease(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.recoverLease(DFSClient.java:1238)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$2.doCall(DistributedFileSystem.java:278)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$2.doCall(DistributedFileSystem.java:274)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.recoverLease(DistributedFileSystem.java:274)
at 
org.apache.accumulo.server.master.recovery.HadoopLogCloser.close(HadoopLogCloser.java:55)
at 
org.apache.accumulo.master.recovery.RecoveryManager$LogSortTask.run(RecoveryManager.java:96)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at 
org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
at java.lang.Thread.run(Thread.java:745)
{noformat}

> Lease recovery continually failed
> -
>
> Key: HDFS-8406
> URL: https://issues.apache.org/jira/browse/HDFS-8406
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Keith Turner
>
> While testing Accumulo on a cluster and killing processes, I ran into a 
> situation where the lease on an accumulo write ahead log in HDFS could not be 
> recovered. Even restarting HDFS and Accumulo would not fix the problem.

[jira] [Commented] (HDFS-1605) Convert DFSInputStream synchronized sections to a ReadWrite lock

2015-05-14 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544555#comment-14544555
 ] 

Ravi Prakash commented on HDFS-1605:


Kartheek! Thanks. I also did not look through all possible cases of new race 
conditions. Did you?

> Convert DFSInputStream synchronized sections to a ReadWrite lock
> 
>
> Key: HDFS-1605
> URL: https://issues.apache.org/jira/browse/HDFS-1605
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: dhruba borthakur
>Assignee: kartheek muthyala
>  Labels: BB2015-05-RFC
> Attachments: DFSClientRWlock.1.txt, DFSClientRWlock.3.txt, 
> HADOOP-1605-trunk-1.patch, HADOOP-1605-trunk.patch, HDFS-1605.txt
>
>
> Hbase does concurrent preads from multiple threads to different blocks of the 
> same hdfs file. Each of these pread calls invoke 
> DFSInputStream.getFileLength() and DFSInputStream.getBlockAt(). These methods 
> are "synchronized", thus causing all the concurrent threads to serialize. It 
> would help performance to convert this to a Read/Write lock
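
A minimal sketch of the conversion (hypothetical fields, not the attached 
patches): guard the read-mostly accessors with the read lock of a 
{{ReentrantReadWriteLock}} so concurrent preads no longer serialize:
{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

long getFileLength() {
  rwLock.readLock().lock(); // many readers may hold this concurrently
  try {
    return fileLength;
  } finally {
    rwLock.readLock().unlock();
  }
}
{code}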



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8406) Lease recovery continually failed

2015-05-14 Thread Keith Turner (JIRA)
Keith Turner created HDFS-8406:
--

 Summary: Lease recovery continually failed
 Key: HDFS-8406
 URL: https://issues.apache.org/jira/browse/HDFS-8406
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Keith Turner


While testing Accumulo on a cluster and killing processes, I ran into a 
situation where the lease on an accumulo write ahead log in HDFS could not be 
recovered.   Even restarting HDFS and Accumulo would not fix the problem.

The following message was seen in an Accumulo tablet server log immediately 
before the tablet server was killed.

{noformat}
2015-05-14 17:12:37,466 [hdfs.DFSClient] WARN : DFSOutputStream 
ResponseProcessor exception  for block 
BP-802741494-10.1.5.6-1431557089849:blk_1073932823_192060
java.io.IOException: Bad response ERROR for block 
BP-802741494-10.1.5.6-1431557089849:blk_1073932823_192060 from datanode 
10.1.5.9:50010
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:897)
2015-05-14 17:12:37,466 [hdfs.DFSClient] WARN : Error Recovery for block 
BP-802741494-10.1.5.6-1431557089849:blk_1073932823_192060 in pipeline 
10.1.5.55:50010, 10.1.5.9:5
{noformat}

Before recovering data from a write ahead log, the Accumulo master attempts to 
recover the lease.   This repeatedly failed with messages like the following.

{noformat}
2015-05-14 17:14:54,301 [recovery.HadoopLogCloser] WARN : Error recovering 
lease on 
hdfs://10.1.5.6:1/accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException):
 failed to create file 
/accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2 for 
DFSClient_NONMAPREDUCE_950713214_16 for client 10.1.5.158 because 
pendingCreates is non-null but no leases found.
{noformat}

Below is some info from the NN logs for the problematic file.

{noformat}
[ec2-user@leader2 logs]$ grep 3a731759-3594-4535-8086-245 
hadoop-ec2-user-namenode-leader2.log 
2015-05-14 17:10:46,299 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
allocateBlock: 
/accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2. 
BP-802741494-10.1.5.6-1431557089849 
blk_1073932823_192060{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[[DISK]DS-ffe07d7d-0e68-45b8-b3d5-c976f1716481:NORMAL:10.1.5.55:50010|RBW],
 
ReplicaUnderConstruction[[DISK]DS-6efec702-3f1f-4ec0-a31f-de947e7e6097:NORMAL:10.1.5.9:50010|RBW],
 
ReplicaUnderConstruction[[DISK]DS-5e27df17-abf8-47df-b4bc-c38d0cd426ea:NORMAL:10.1.5.45:50010|RBW]]}
2015-05-14 17:10:46,628 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: 
/accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2 for 
DFSClient_NONMAPREDUCE_-1128465883_16
2015-05-14 17:14:49,288 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: recoverLease: [Lease.  
Holder: DFSClient_NONMAPREDUCE_-1128465883_16, pendingcreates: 1], 
src=/accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2 from 
client DFSClient_NONMAPREDUCE_-1128465883_16
2015-05-14 17:14:49,288 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
Holder: DFSClient_NONMAPREDUCE_-1128465883_16, pendingcreates: 1], 
src=/accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2
2015-05-14 17:14:49,289 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
NameSystem.internalReleaseLease: File 
/accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2 has not been 
closed. Lease recovery is in progress. RecoveryId = 192257 for block 
blk_1073932823_192060{blockUCState=UNDER_RECOVERY, primaryNodeIndex=2, 
replicas=[ReplicaUnderConstruction[[DISK]DS-ffe07d7d-0e68-45b8-b3d5-c976f1716481:NORMAL:10.1.5.55:50010|RBW],
 
ReplicaUnderConstruction[[DISK]DS-6efec702-3f1f-4ec0-a31f-de947e7e6097:NORMAL:10.1.5.9:50010|RBW],
 
ReplicaUnderConstruction[[DISK]DS-5e27df17-abf8-47df-b4bc-c38d0cd426ea:NORMAL:10.1.5.45:50010|RBW]]}
java.lang.IllegalStateException: Failed to finalize INodeFile 
3a731759-3594-4535-8086-245eed7cd4c2 since blocks[0] is non-complete, where 
blocks=[blk_1073932823_192257{blockUCState=COMMITTED, primaryNodeIndex=2, 
replicas=[ReplicaUnderConstruction[[DISK]DS-ffe07d7d-0e68-45b8-b3d5-c976f1716481:NORMAL:10.1.5.55:50010|RBW],
 
ReplicaUnderConstruction[[DISK]DS-5e27df17-abf8-47df-b4bc-c38d0cd426ea:NORMAL:10.1.5.45:50010|RBW]]}].
2015-05-14 17:14:54,292 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 
on 1, call org.apache.hadoop.hdfs.protocol.ClientProtocol.recoverLease from 
10.1.5.158:53784 Call#529 Retry#0: 
org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed to create 
file /accumulo/wal/worker11+9997/3a731759-3594-4535-8086-245eed7cd4c2 for 
DFSClient_NONMAPREDUCE_950713214_16 for client 10.1.5.158 because 
pendingCreates is non-null but no leases found.
2015-05-14

[jira] [Commented] (HDFS-7621) Erasure Coding: update the Balancer/Mover data migration logic

2015-05-14 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544545#comment-14544545
 ] 

Zhe Zhang commented on HDFS-7621:
-

Thanks for the update Walter.

Structural:
# {{convertToBlockWithLocations}} looks good to me now, so I'm +1 on the 
{{BlockManager}} changes
# One concern with the new {{DBlock}} code is that each {{DBlock}} should 
represent a single block with locations. Its superclass, 
{{MovedBlocks#Locations}}, is also clearly designed for a single block. 
Therefore {{nonCollocatedBlock}} looks strange because each striped {{DBlock}} 
actually has multiple peer blocks that it has to avoid.
# bq. Balancer handles blocks and doesn't know about files. Balancer gets 
blocks from Node.
Thanks for clarifying!

Nits:
# A few lines are too long. You might already know that we prefer each line to 
be under 80 characters.
{code}
+  updateDBlockLocations(nonCollocatedBlock, 
nonCollocatedBlockWithLocations);
+  DBlock newDBlock(Block block, List locations, DBlock 
stripedBlock) {
+  final ExtendedBlock reportedBlock = new 
ExtendedBlock(lsb.getBlock());
+  long numBytes = 
getInternalBlockLength(lsb.getBlock().getNumBytes(),
+  final List reportedBlockLocation = new  
ArrayList<>(1);
+  public static void verifyLocatedStripedBlocks(LocatedBlocks lbs, int 
groupSize) {
+  client = NameNodeProxies.createProxy(conf, 
cluster.getFileSystem(0).getUri(),
+  LocatedBlocks locatedBlocks = client.getBlockLocations(fileName, 0, 
fileLen);
+  final DBlock db = mover.newDBlock(lb.getBlock().getLocalBlock(), 
locations, null);
+  ClientProtocol client = NameNodeProxies.createProxy(conf, 
cluster.getFileSystem(0).getUri(),
+  client.setStoragePolicy(barDir, 
HdfsServerConstants.HOT_STORAGE_POLICY_NAME);
+  LocatedBlocks locatedBlocks = client.getBlockLocations(fooFile, 0, 
fileLen);
+  DFSTestUtil.verifyLocatedStripedBlocks(locatedBlocks, dataBlocks + 
parityBlocks);
+  DFSTestUtil.verifyLocatedStripedBlocks(locatedBlocks, dataBlocks + 
parityBlocks);
{code}
I usually use a simple script to check for long lines, in case you need it:
{code}
#! /usr/local/bin/python3
# Print the lines a patch adds ('+ ' prefix) that run past 80 characters;
# the threshold of 81 allows for the leading '+'.
import sys

with open(sys.argv[1], 'r') as inf:
  for line in inf:
    if line.startswith('+ ') and len(line) > 81:
      print(line)
{code}
# Looks like the indentation is wrong:
{code}
+private void updateDBlockLocations(DBlock block,
+   BlockWithLocations blockWithLocations){
{code}
# {{nonCollocatedBlock}} literally means "currently not collocated". Maybe just 
{{toAvoid}}? The name doesn't need to have "block" because the type already 
implies it.


> Erasure Coding: update the Balancer/Mover data migration logic
> --
>
> Key: HDFS-7621
> URL: https://issues.apache.org/jira/browse/HDFS-7621
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Walter Su
>  Labels: HDFS-7285
> Attachments: HDFS-7621.001.patch, HDFS-7621.002.patch, 
> HDFS-7621.003.patch, HDFS-7621.004.patch
>
>
> Currently the Balancer/Mover only considers the distribution of replicas of 
> the same block during data migration: the migration cannot decrease the 
> number of racks. With EC the Balancer and Mover should also take into account 
> the distribution of blocks belonging to the same block group.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8345) Storage policy APIs must be exposed via the FileSystem interface

2015-05-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8345:

Attachment: HDFS-8345.05.patch

v5: Rebase to trunk.

> Storage policy APIs must be exposed via the FileSystem interface
> 
>
> Key: HDFS-8345
> URL: https://issues.apache.org/jira/browse/HDFS-8345
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>  Labels: BB2015-05-TBR
> Attachments: HDFS-8345.01.patch, HDFS-8345.02.patch, 
> HDFS-8345.03.patch, HDFS-8345.04.patch, HDFS-8345.05.patch
>
>
> The storage policy APIs are not exposed via FileSystem. Since 
> DistributedFileSystem is tagged as LimitedPrivate we should expose the APIs 
> through FileSystem for use by other applications.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8405) Fix a typo in NamenodeFsck

2015-05-14 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-8405:
--
Component/s: namenode

> Fix a typo in NamenodeFsck
> --
>
> Key: HDFS-8405
> URL: https://issues.apache.org/jira/browse/HDFS-8405
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Takanobu Asanuma
>Priority: Minor
>
> DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY below should not be quoted.
> {code}
>   res.append("\n  
> ").append("DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY:\t")
>  .append(minReplication);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8404) pending block replication can get stuck using older genstamp

2015-05-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544527#comment-14544527
 ] 

Hadoop QA commented on HDFS-8404:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 47s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:red}-1{color} | javac |   1m 35s | The patch appears to cause the 
build to fail. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12732974/HDFS-8404-v0.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 09fe16f |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10988/console |


This message was automatically generated.

> pending block replication can get stuck using older genstamp
> 
>
> Key: HDFS-8404
> URL: https://issues.apache.org/jira/browse/HDFS-8404
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Attachments: HDFS-8404-v0.patch
>
>
> If an under-replicated block gets into the pending-replication list, but 
> later the  genstamp of that block ends up being newer than the one originally 
> submitted for replication, the block will fail replication until the NN is 
> restarted. 
> It will be safer if processPendingReplications()  gets up-to-date blockinfo 
> before resubmitting replication work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8345) Storage policy APIs must be exposed via the FileSystem interface

2015-05-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544525#comment-14544525
 ] 

Hadoop QA commented on HDFS-8345:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12732978/HDFS-8345.04.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 09fe16f |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10989/console |


This message was automatically generated.

> Storage policy APIs must be exposed via the FileSystem interface
> 
>
> Key: HDFS-8345
> URL: https://issues.apache.org/jira/browse/HDFS-8345
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>  Labels: BB2015-05-TBR
> Attachments: HDFS-8345.01.patch, HDFS-8345.02.patch, 
> HDFS-8345.03.patch, HDFS-8345.04.patch
>
>
> The storage policy APIs are not exposed via FileSystem. Since 
> DistributedFileSystem is tagged as LimitedPrivate we should expose the APIs 
> through FileSystem for use by other applications.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7687) Change fsck to support EC files

2015-05-14 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544513#comment-14544513
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7687:
---

BTW, this is a bug.  The following
{code}
  res.append("\n  
").append("DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY:\t")
 .append(minReplication);
{code}
should be
{code}
  res.append("\n  
").append(DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY + ":\t")
 .append(minReplication);
{code}
Let's fix it in trunk first. See if you also want to do some code refactoring 
in trunk. Filed HDFS-8405 and assigned it to you.

> Change fsck to support EC files
> ---
>
> Key: HDFS-7687
> URL: https://issues.apache.org/jira/browse/HDFS-7687
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Takanobu Asanuma
> Attachments: HDFS-7687.1.patch, HDFS-7687.2.patch, HDFS-7687.3.patch
>
>
> We need to change fsck so that it can detect "under replicated" and corrupted 
> EC files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8405) Fix a typo in NamenodeFsck

2015-05-14 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-8405:
-

 Summary: Fix a typo in NamenodeFsck
 Key: HDFS-8405
 URL: https://issues.apache.org/jira/browse/HDFS-8405
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tsz Wo Nicholas Sze
Assignee: Takanobu Asanuma
Priority: Minor


DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY below should not be quoted.
{code}
  res.append("\n  
").append("DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY:\t")
 .append(minReplication);
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8210) Ozone: Implement storage container manager

2015-05-14 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-8210:
---
Status: Patch Available  (was: Open)

Updated patch with rebased branch. Fixes the findbugs warnings.

> Ozone: Implement storage container manager 
> ---
>
> Key: HDFS-8210
> URL: https://issues.apache.org/jira/browse/HDFS-8210
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HDFS-8210-HDFS-7240.1.patch, HDFS-8210-HDFS-7240.2.patch
>
>
> The storage container manager collects datanode heartbeats, manages 
> replication and exposes API to lookup containers. This jira implements 
> storage container manager by re-using the block manager implementation in 
> namenode. This jira provides initial implementation that works with 
> datanodes. The additional protocols will be added in subsequent jiras.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8210) Ozone: Implement storage container manager

2015-05-14 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-8210:
---
Attachment: HDFS-8210-HDFS-7240.2.patch

> Ozone: Implement storage container manager 
> ---
>
> Key: HDFS-8210
> URL: https://issues.apache.org/jira/browse/HDFS-8210
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HDFS-8210-HDFS-7240.1.patch, HDFS-8210-HDFS-7240.2.patch
>
>
> The storage container manager collects datanode heartbeats, manages 
> replication and exposes API to lookup containers. This jira implements 
> storage container manager by re-using the block manager implementation in 
> namenode. This jira provides initial implementation that works with 
> datanodes. The additional protocols will be added in subsequent jiras.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7745) HDFS should have its own daemon command and not rely on the one in common

2015-05-14 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7745.

Resolution: Pending Closed

Yeah, I'm guessing Sanjay opened this not knowing about all the work happening 
in trunk to generalize all of this stuff, deprecating all of the daemon 
commands, etc, etc.

If he feels differently, he can always re-open.

> HDFS should have its own daemon command  and not rely on the one in common
> --
>
> Key: HDFS-7745
> URL: https://issues.apache.org/jira/browse/HDFS-7745
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sanjay Radia
>
> HDFS should have its own daemon command and not rely on the one in common.  
> BTW Yarn split out its own daemon command during project split. Note the 
> hdfs-command does have --daemon flag and hence the daemon script is merely a 
> wrapper. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8210) Ozone: Implement storage container manager

2015-05-14 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-8210:
---
Status: Open  (was: Patch Available)

> Ozone: Implement storage container manager 
> ---
>
> Key: HDFS-8210
> URL: https://issues.apache.org/jira/browse/HDFS-8210
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HDFS-8210-HDFS-7240.1.patch
>
>
> The storage container manager collects datanode heartbeats, manages 
> replication and exposes API to lookup containers. This jira implements 
> storage container manager by re-using the block manager implementation in 
> namenode. This jira provides initial implementation that works with 
> datanodes. The additional protocols will be added in subsequent jiras.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7316) DistCp cannot handle ":" colon in filename

2015-05-14 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-7316.
-
Resolution: Duplicate

This does look like a duplicate of HADOOP-3257.  I'm resolving.

> DistCp cannot handle ":" colon in filename
> --
>
> Key: HDFS-7316
> URL: https://issues.apache.org/jira/browse/HDFS-7316
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Paul Joslin
>
> Similar to HDFS-13.  If a source directory for DistCp contains a file with a 
> colon ":", the file will not be copied.
> Example error message:
> java.lang.Exception: java.lang.IllegalArgumentException: Pathname 
> /user/pk1/RECORDS/MasterLink-pk1.gateway2.example.com:22.10:22:30 from 
> hdfs:/access01.mgt.gateway2.example.com:8020/user/pk1/RECORDS/MasterLink-pk1.gateway2.example.com:22.10:22:30
>  is not a valid DFS filename.
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
> Caused by: java.lang.IllegalArgumentException: Pathname 
> /user/pk1/RECORDS/MasterLink-pxj29.gateway2.example.com:22.10:22:30 from 
> hdfs:/access01.mgt.gateway2.example.com:8020/user/pk1/RECORDS/MasterLink-pk1.gateway2.example.com:22.10:22:30
>  is not a valid DFS filename.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:195)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:104)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1079)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1075)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1075)
> at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
> at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
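
As a side note, the failure above is reproducible without DistCp, since the
check lives in the HDFS client's path validation; a minimal sketch (assuming
DFSUtil.isValidName keeps its current per-component rules):
{code}
// org.apache.hadoop.hdfs.DFSUtil: DistributedFileSystem.getPathName()
// delegates to DFSUtil.isValidName, which rejects any path component
// containing ':' -- hence the "is not a valid DFS filename" error above.
System.out.println(DFSUtil.isValidName("/user/pk1/RECORDS/plain-file"));  // true
System.out.println(DFSUtil.isValidName("/user/pk1/RECORDS/file:22.10"));  // false
{code}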



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7687) Change fsck to support EC files

2015-05-14 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544498#comment-14544498
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7687:
---


- Question: Do we need {{ErasureCodingResult}}s when we support multiple 
{{ECSchema}}s?

- Some suggestion on the terms:
||Replication||Erasure Coding||
| block | block group |
| replica | ec-block |
| UNDER MIN REPL'D BLOCKS | UNRECOVERABLE BLOCK GROUPS |
| DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY | MIN REQUIRED EC BLOCK# |
| Minimally replicated blocks | Minimally erasure-coded block groups |
| Over-replicated blocks | Over-erasure-coded block groups |
| Under-replicated blocks | Under-erasure-coded block groups |
| Mis-replicated blocks | Unsatisfactory placement block groups |
| Default replication factor | Default schema |
| Average block replication | Average block group size |
| Missing replicas | Missing ec-blocks |
| Decommissioned Replicas | Decommissioned ec-blocks |
| Decommissioning Replicas | Decommissioning ec-blocks |

- It is good to add two new classes ReplicationResult and ErasureCodingResult.  
Then, we can rename AbstractResult back to Result.
- minReplication should remain final.  The subclasses can initialize it via 
the super constructor, i.e.
{code}
  static abstract class Result {
...

final int minReplication;

Result(int minReplication) {
  this.minReplication = minReplication;
}

...
  }

  @VisibleForTesting
  static class ReplicationResult extends Result {
final short replication;

ReplicationResult(Configuration conf) {
  super(conf.getInt(DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY,
      DFSConfigKeys.DFS_NAMENODE_REPLICATION_MIN_DEFAULT));
  this.replication = (short)conf.getInt(DFSConfigKeys.DFS_REPLICATION_KEY,
      DFSConfigKeys.DFS_REPLICATION_DEFAULT);
}
...
  }

  @VisibleForTesting
  static class ErasureCodingResult extends Result {
final String ecSchema;

ErasureCodingResult(Configuration conf) {
  this(ErasureCodingSchemaManager.getSystemDefaultSchema());
}

ErasureCodingResult(ECSchema ecSchema) {
  super(ecSchema.getNumDataUnits());
  this.ecSchema = ecSchema.getSchemaName();
}

...
  }
{code}
- The check method can be simplified as below:
{code}
final Result r = file.getReplication() == 0? ecRes: replRes; 
collectFileSummary(path, file, r, blocks);
if (showprogress && (replRes.totalFiles + ecRes.totalFiles) % 100 == 0) {
  out.println();
  out.flush();
}
collectBlocksSummary(parent, file, r, blocks);
{code}


> Change fsck to support EC files
> ---
>
> Key: HDFS-7687
> URL: https://issues.apache.org/jira/browse/HDFS-7687
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Takanobu Asanuma
> Attachments: HDFS-7687.1.patch, HDFS-7687.2.patch, HDFS-7687.3.patch
>
>
> We need to change fsck so that it can detect "under replicated" and corrupted 
> EC files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7745) HDFS should have its own daemon command and not rely on the one in common

2015-05-14 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544490#comment-14544490
 ] 

Chris Nauroth commented on HDFS-7745:
-

[~aw], do you think we can close this one out now that we have the generalized 
daemonization support in the shell script rewrite?

> HDFS should have its own daemon command  and not rely on the one in common
> --
>
> Key: HDFS-7745
> URL: https://issues.apache.org/jira/browse/HDFS-7745
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sanjay Radia
>
> HDFS should have its own daemon command and not rely on the one in common.  
> BTW Yarn split out its own daemon command during project split. Note the 
> hdfs-command does have --daemon flag and hence the daemon script is merely a 
> wrapper. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8394) Move getAdditionalBlock() and related functionalities into a separate class

2015-05-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544474#comment-14544474
 ] 

Hadoop QA commented on HDFS-8394:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 51s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 40s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 50s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 17s | The applied patch generated  
25 new checkstyle issues (total was 476, now 485). |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  7s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 20s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 167m 33s | Tests failed in hadoop-hdfs. |
| | | 211m 13s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.TestEncryptionZones |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12732919/HDFS-8394.004.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 15ccd96 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/10983/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10983/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10983/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10983/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10983/console |


This message was automatically generated.

> Move getAdditionalBlock() and related functionalities into a separate class
> ---
>
> Key: HDFS-8394
> URL: https://issues.apache.org/jira/browse/HDFS-8394
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8394.000.patch, HDFS-8394.001.patch, 
> HDFS-8394.002.patch, HDFS-8394.003.patch, HDFS-8394.004.patch
>
>
> This jira proposes to move the implementation of getAdditionalBlock() and 
> related functionalities to a separate class to open up the possibilities of 
> further refactoring and improvements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8345) Storage policy APIs must be exposed via the FileSystem interface

2015-05-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8345:

Attachment: HDFS-8345.04.patch

Fixed failed tests. Remaining Jenkins issues look like false positives.

> Storage policy APIs must be exposed via the FileSystem interface
> 
>
> Key: HDFS-8345
> URL: https://issues.apache.org/jira/browse/HDFS-8345
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>  Labels: BB2015-05-TBR
> Attachments: HDFS-8345.01.patch, HDFS-8345.02.patch, 
> HDFS-8345.03.patch, HDFS-8345.04.patch
>
>
> The storage policy APIs are not exposed via FileSystem. Since 
> DistributedFileSystem is tagged as LimitedPrivate we should expose the APIs 
> through FileSystem for use by other applications.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7608) hdfs dfsclient newConnectedPeer has no write timeout

2015-05-14 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544464#comment-14544464
 ] 

Esteban Gutierrez commented on HDFS-7608:
-

[~cnauroth] I tried the approach using DFSClient.getDatanodeWriteTimeout() and 
there doesn't seem to be a clear way for users to reason about the timeout, 
e.g. if you pass a timeout of 1000ms then your final timeout will be 11000ms, 
which isn't straightforward. I think passing the socketTimeout is much more 
straightforward. More suggestions to address this are welcome [~cnauroth].
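
For context, the 11000ms figure follows from the per-datanode extension that
getDatanodeWriteTimeout applies; a rough sketch of that logic (the 5-second
constant is HdfsServerConstants.WRITE_TIMEOUT_EXTENSION, treated here as an
assumption about the current code):
{code}
// Sketch of DFSClient.getDatanodeWriteTimeout(): the configured write timeout
// is padded by a fixed extension per datanode in the pipeline, so a
// configured 1000 ms with 2 nodes becomes 1000 + 2 * 5000 = 11000 ms.
int getDatanodeWriteTimeout(int numNodes) {
  final int writeTimeout = 1000; // dfs.datanode.socket.write.timeout
  final int extension = 5000;    // HdfsServerConstants.WRITE_TIMEOUT_EXTENSION
  return writeTimeout > 0 ? writeTimeout + extension * numNodes : 0;
}
{code}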

> hdfs dfsclient  newConnectedPeer has no write timeout
> -
>
> Key: HDFS-7608
> URL: https://issues.apache.org/jira/browse/HDFS-7608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs, hdfs-client
>Affects Versions: 2.3.0, 2.6.0
> Environment: hdfs 2.3.0  hbase 0.98.6
>Reporter: zhangshilong
>Assignee: Xiaoyu Yao
> Attachments: HDFS-7608.0.patch, HDFS-7608.1.patch, HDFS-7608.2.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> problem:
> hbase compactSplitThread may lock forever on reading datanode blocks.
> debug found: the epollwait timeout is set to 0, so epollwait can never time 
> out.
> cause: in hdfs 2.3.0
> hbase uses DFSClient to read and write blocks.
> DFSClient creates one socket using newConnectedPeer(addr), but sets no read 
> or write timeout.
> in v2.6.0, newConnectedPeer added a readTimeout to deal with the problem, 
> but did not add a writeTimeout. Why was no write timeout added?
> I think NioInetPeer needs a default socket timeout, so applications will not 
> need to force adding timeouts themselves.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8404) pending block replication can get stuck using older genstamp

2015-05-14 Thread Nathan Roberts (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Roberts updated HDFS-8404:
-
Status: Patch Available  (was: Open)

> pending block replication can get stuck using older genstamp
> 
>
> Key: HDFS-8404
> URL: https://issues.apache.org/jira/browse/HDFS-8404
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0, 2.6.0
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Attachments: HDFS-8404-v0.patch
>
>
> If an under-replicated block gets into the pending-replication list, but 
> later the  genstamp of that block ends up being newer than the one originally 
> submitted for replication, the block will fail replication until the NN is 
> restarted. 
> It will be safer if processPendingReplications()  gets up-to-date blockinfo 
> before resubmitting replication work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8404) pending block replication can get stuck using older genstamp

2015-05-14 Thread Nathan Roberts (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Roberts updated HDFS-8404:
-
Attachment: HDFS-8404-v0.patch

> pending block replication can get stuck using older genstamp
> 
>
> Key: HDFS-8404
> URL: https://issues.apache.org/jira/browse/HDFS-8404
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Attachments: HDFS-8404-v0.patch
>
>
> If an under-replicated block gets into the pending-replication list, but 
> later the  genstamp of that block ends up being newer than the one originally 
> submitted for replication, the block will fail replication until the NN is 
> restarted. 
> It will be safer if processPendingReplications()  gets up-to-date blockinfo 
> before resubmitting replication work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8403) Eliminate retries in TestFileCreation#testOverwriteOpenForWrite

2015-05-14 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544416#comment-14544416
 ] 

Haohui Mai commented on HDFS-8403:
--

{code}
+  // Create a NN proxy without retries for testing.
+  public static final String  DFS_CLIENT_TEST_NO_PROXY_RETRIES = 
"dfs.client.test.no.proxy.retries";
+  public static final boolean DFS_CLIENT_TEST_NO_PROXY_RETRIES_DEFAULT = false;
{code}

We can either add a {{VisibleForTesting}} annotation or a comment stating that 
this is only for testing and thus carries no compatibility guarantees, or go 
for a more fundamental fix, which requires removing the retries in 
NameNodeProxies (or making the number of retries in the non-HA case 
configurable).

Either approach looks good to me. +1 once this is addressed.
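
For the first option, a minimal sketch of the suggested annotation (key names
copied from the snippet above; the annotation is Guava's
com.google.common.annotations.VisibleForTesting):
{code}
// Marking the keys makes it explicit that they carry no compatibility
// guarantees for downstream users.
@VisibleForTesting
public static final String DFS_CLIENT_TEST_NO_PROXY_RETRIES =
    "dfs.client.test.no.proxy.retries";
@VisibleForTesting
public static final boolean DFS_CLIENT_TEST_NO_PROXY_RETRIES_DEFAULT = false;
{code}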

> Eliminate retries in TestFileCreation#testOverwriteOpenForWrite
> ---
>
> Key: HDFS-8403
> URL: https://issues.apache.org/jira/browse/HDFS-8403
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8403.01.patch, HDFS-8403.02.patch
>
>
> TestFileCreation#testOverwriteOpenForWrite attempts 5 retries to verify the 
> failure case.
> The retries are not necessary to verify correct behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8404) pending block replication can get stuck using older genstamp

2015-05-14 Thread Nathan Roberts (JIRA)
Nathan Roberts created HDFS-8404:


 Summary: pending block replication can get stuck using older 
genstamp
 Key: HDFS-8404
 URL: https://issues.apache.org/jira/browse/HDFS-8404
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0, 2.6.0
Reporter: Nathan Roberts
Assignee: Nathan Roberts


If an under-replicated block gets into the pending-replication list, but later 
the  genstamp of that block ends up being newer than the one originally 
submitted for replication, the block will fail replication until the NN is 
restarted. 

It will be safer if processPendingReplications()  gets up-to-date blockinfo 
before resubmitting replication work.
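
A self-contained toy model of the failure mode (names are illustrative, not
the NN code) may make the fix direction clearer: re-resolve the block by id
right before resubmitting, so the request carries the current genstamp.
{code}
import java.util.HashMap;
import java.util.Map;

public class StaleGenstampDemo {
  static class Block {
    final long id, genstamp;
    Block(long id, long genstamp) { this.id = id; this.genstamp = genstamp; }
  }

  public static void main(String[] args) {
    Map<Long, Block> blocksMap = new HashMap<Long, Block>();
    blocksMap.put(1L, new Block(1L, 100L)); // authoritative state
    Block pending = blocksMap.get(1L);      // copy queued for replication
    blocksMap.put(1L, new Block(1L, 101L)); // pipeline recovery bumps genstamp

    // Resubmitting the stale copy keeps asking for genstamp 100 and fails:
    System.out.println("stale genstamp: " + pending.genstamp);
    // Safer: re-resolve by block id right before resubmitting:
    System.out.println("current genstamp: " + blocksMap.get(pending.id).genstamp);
  }
}
{code}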






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7559) Create unit test to automatically compare HDFS related classes and hdfs-default.xml

2015-05-14 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-7559:
--
Issue Type: Test  (was: Improvement)

> Create unit test to automatically compare HDFS related classes and 
> hdfs-default.xml
> ---
>
> Key: HDFS-7559
> URL: https://issues.apache.org/jira/browse/HDFS-7559
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-7559.001.patch, HDFS-7559.002.patch, 
> HDFS-7559.003.patch, HDFS-7559.004.patch
>
>
> Create a unit test that will automatically compare the fields in the various 
> HDFS related classes and hdfs-default.xml. It should throw an error if a 
> property is missing in either the class or the file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7559) Create unit test to automatically compare HDFS related classes and hdfs-default.xml

2015-05-14 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544368#comment-14544368
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7559:
---

Okay, the compilation problem is already fixed by HDFS-8362.  But why is the 
new test failing in trunk?

> Create unit test to automatically compare HDFS related classes and 
> hdfs-default.xml
> ---
>
> Key: HDFS-7559
> URL: https://issues.apache.org/jira/browse/HDFS-7559
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-7559.001.patch, HDFS-7559.002.patch, 
> HDFS-7559.003.patch, HDFS-7559.004.patch
>
>
> Create a unit test that will automatically compare the fields in the various 
> HDFS related classes and hdfs-default.xml. It should throw an error if a 
> property is missing in either the class or the file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8157) Writes to RAM DISK reserve locked memory for block files

2015-05-14 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544369#comment-14544369
 ] 

Arpit Agarwal commented on HDFS-8157:
-

A couple of the failed tests need fixing. Will post a new patch later today.

> Writes to RAM DISK reserve locked memory for block files
> 
>
> Key: HDFS-8157
> URL: https://issues.apache.org/jira/browse/HDFS-8157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8157.01.patch, HDFS-8157.02.patch, 
> HDFS-8157.03.patch
>
>
> Per discussion on HDFS-6919, the first step is that writes to RAM disk will 
> reserve locked memory via the FsDatasetCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8320) Erasure coding: consolidate striping-related terminologies

2015-05-14 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8320:

Attachment: HDFS-8320-HDFS-7285.00.patch

Since HDFS-7678 has already defined many striping-related terms, this patch 
includes the following main changes:
# Eliminated {{ReadPortion}} and merged the logic into {{VerticalRange}} and 
{{StripingChunk}}
# Extensive testing of {{StripedBlockUtil}}.
# Javadoc improvements

> Erasure coding: consolidate striping-related terminologies
> --
>
> Key: HDFS-8320
> URL: https://issues.apache.org/jira/browse/HDFS-8320
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8320-HDFS-7285.00.patch
>
>
> Right now we are doing striping-based I/O in a number of places:
> # Client output stream (HDFS-7889)
> # Client input stream
> #* pread (HDFS-7782, HDFS-7678)
> #* stateful read (HDFS-8033, HDFS-8281, HDFS-8319)
> # DN reconstruction (HDFS-7348)
> In each place we use one or multiple of the following terminologies:
> # Cell
> # Stripe
> # Block group
> # Internal block
> # Chunk
> This JIRA aims to systematically define these terminologies in relation with 
> each other and in the context of the containing file. For example, a cell 
> belonging to stripe _i_ and internal block _j_ can be indexed as {{(i, j)}} and 
> its logical index _k_ in the file can be calculated.
> With the above consolidation, hopefully we can further consolidate the striping 
> I/O code.
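
As a concrete instance of the indexing mentioned above, under a round-robin
cell layout (an assumption matching the model in StripedBlockUtil), the
mapping is a simple linearization:
{code}
// With dataBlkNum data blocks per stripe, the cell at stripe i and internal
// block j is the k-th cell of the file, at a well-defined byte offset.
static long cellIndex(long i, int j, int dataBlkNum) {
  return i * dataBlkNum + j; // k = i * dataBlkNum + j
}
static long cellOffset(long i, int j, int dataBlkNum, int cellSize) {
  return cellIndex(i, j, dataBlkNum) * cellSize;
}
{code}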



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7608) hdfs dfsclient newConnectedPeer has no write timeout

2015-05-14 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544376#comment-14544376
 ] 

Esteban Gutierrez commented on HDFS-7608:
-

Thanks [~cnauroth]. I think just changing newConnectedPeer() to use the 
timeout provided by DFSClient.getDatanodeWriteTimeout() is good enough. In 
the DataStreamer we already use it, but with the number of nodes set to 2. 
That should be fine, and it at least gives the flexibility to tune it down if 
required. Also, I see that if we don't set a write timeout we can run into 
the issue that was mentioned in this JIRA, and after adding the timeout in 
the peer I no longer experience this issue. I've noticed other issues, like 
in Client() where we set up the connection and then the timeout, but that can 
be addressed in another JIRA.
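
To make the failure mode concrete: a plain java.net.Socket only bounds
connects and reads, never writes, which is why an explicit write timeout on
the peer matters. A self-contained sketch (host and buffer size are
placeholders):
{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class WriteTimeoutDemo {
  public static void main(String[] args) throws IOException {
    Socket sock = new Socket();
    sock.connect(new InetSocketAddress("example.com", 80), 3000); // connect timeout
    sock.setSoTimeout(3000); // SO_TIMEOUT bounds each read() only
    // Writes are NOT covered: if the peer stops draining and the TCP send
    // window fills, this call can block indefinitely -- the hang described
    // in this JIRA, and the reason for setting a write timeout on the peer.
    sock.getOutputStream().write(new byte[8192]);
    sock.close();
  }
}
{code}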

> hdfs dfsclient  newConnectedPeer has no write timeout
> -
>
> Key: HDFS-7608
> URL: https://issues.apache.org/jira/browse/HDFS-7608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs, hdfs-client
>Affects Versions: 2.3.0, 2.6.0
> Environment: hdfs 2.3.0  hbase 0.98.6
>Reporter: zhangshilong
>Assignee: Xiaoyu Yao
> Attachments: HDFS-7608.0.patch, HDFS-7608.1.patch, HDFS-7608.2.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> problem:
> hbase compactSplitThread may lock forever on reading datanode blocks.
> debug found: the epollwait timeout is set to 0, so epollwait can never time 
> out.
> cause: in hdfs 2.3.0
> hbase uses DFSClient to read and write blocks.
> DFSClient creates one socket using newConnectedPeer(addr), but sets no read 
> or write timeout.
> in v2.6.0, newConnectedPeer added a readTimeout to deal with the problem, 
> but did not add a writeTimeout. Why was no write timeout added?
> I think NioInetPeer needs a default socket timeout, so applications will not 
> need to force adding timeouts themselves.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8403) Eliminate retries in TestFileCreation#testOverwriteOpenForWrite

2015-05-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8403:

Attachment: HDFS-8403.03.patch

Thanks for the review. The comment does say they are for testing. The v3 patch 
also adds annotations as suggested.

> Eliminate retries in TestFileCreation#testOverwriteOpenForWrite
> ---
>
> Key: HDFS-8403
> URL: https://issues.apache.org/jira/browse/HDFS-8403
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8403.01.patch, HDFS-8403.02.patch, 
> HDFS-8403.03.patch
>
>
> TestFileCreation#testOverwriteOpenForWrite attempts 5 retries to verify the 
> failure case.
> The retries are not necessary to verify correct behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8157) Writes to RAM DISK reserve locked memory for block files

2015-05-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8157:

Attachment: HDFS-8157.04.patch

The failed tests need additional configuration.

Also fixed one checkstyle issue flagged by Jenkins; the rest are false 
positives.

> Writes to RAM DISK reserve locked memory for block files
> 
>
> Key: HDFS-8157
> URL: https://issues.apache.org/jira/browse/HDFS-8157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8157.01.patch, HDFS-8157.02.patch, 
> HDFS-8157.03.patch, HDFS-8157.04.patch
>
>
> Per discussion on HDFS-6919, the first step is that writes to RAM disk will 
> reserve locked memory via the FsDatasetCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7559) Create unit test to automatically compare HDFS related classes and hdfs-default.xml

2015-05-14 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544398#comment-14544398
 ] 

Ray Chiang commented on HDFS-7559:
--

HDFS-8371. The problem with these checks is chasing down every case where 
someone adds a new property in one place but not in the corresponding other 
place *before* the check gets committed. In this case, it looks like it all 
happened around the same time.

> Create unit test to automatically compare HDFS related classes and 
> hdfs-default.xml
> ---
>
> Key: HDFS-7559
> URL: https://issues.apache.org/jira/browse/HDFS-7559
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-7559.001.patch, HDFS-7559.002.patch, 
> HDFS-7559.003.patch, HDFS-7559.004.patch
>
>
> Create a unit test that will automatically compare the fields in the various 
> HDFS related classes and hdfs-default.xml. It should throw an error if a 
> property is missing in either the class or the file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8320) Erasure coding: consolidate striping-related terminologies

2015-05-14 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8320:

Target Version/s: HDFS-7285
  Status: Patch Available  (was: In Progress)

> Erasure coding: consolidate striping-related terminologies
> --
>
> Key: HDFS-8320
> URL: https://issues.apache.org/jira/browse/HDFS-8320
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>
> Right now we are doing striping-based I/O in a number of places:
> # Client output stream (HDFS-7889)
> # Client input stream
> #* pread (HDFS-7782, HDFS-7678)
> #* stateful read (HDFS-8033, HDFS-8281, HDFS-8319)
> # DN reconstruction (HDFS-7348)
> In each place we use one or multiple of the following terminologies:
> # Cell
> # Stripe
> # Block group
> # Internal block
> # Chunk
> This JIRA aims to systematically define these terminologies in relation with 
> each other and in the context of the containing file. For example, a cell 
> belonging to stripe _i_ and internal block _j_ can be indexed as {{(i, j)}} and 
> its logical index _k_ in the file can be calculated.
> With the above consolidation, hopefully we can further consolidate the striping 
> I/O code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7559) Create unit test to automatically compare HDFS related classes and hdfs-default.xml

2015-05-14 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544351#comment-14544351
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7559:
---

Hi, have you tried to run the new test yourself?
{code}
Running org.apache.hadoop.hdfs.tools.TestHdfsConfigFields
Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.673 sec <<< 
FAILURE! - in org.apache.hadoop.hdfs.tools.TestHdfsConfigFields
testCompareXmlAgainstConfigurationClass(org.apache.hadoop.hdfs.tools.TestHdfsConfigFields)
  Time elapsed: 0.526 sec  <<< FAILURE!
java.lang.AssertionError: hdfs-default.xml has 2 properties missing in  class 
org.apache.hadoop.hdfs.DFSConfigKeys
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:468)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.098 sec <<< 
FAILURE! - in org.apache.hadoop.tools.TestHdfsConfigFields
testCompareXmlAgainstConfigurationClass(org.apache.hadoop.tools.TestHdfsConfigFields)
  Time elapsed: 0.009 sec  <<< ERROR!
java.lang.Error: Unresolved compilation problem: 
The declared package "org.apache.hadoop.hdfs.tools" does not match the 
expected package "org.apache.hadoop.tools"

at 
org.apache.hadoop.tools.TestHdfsConfigFields.(TestHdfsConfigFields.java:19)

testCompareConfigurationClassAgainstXml(org.apache.hadoop.tools.TestHdfsConfigFields)
  Time elapsed: 0 sec  <<< ERROR!
java.lang.Error: Unresolved compilation problem: 
The declared package "org.apache.hadoop.hdfs.tools" does not match the 
expected package "org.apache.hadoop.tools"

at 
org.apache.hadoop.tools.TestHdfsConfigFields.(TestHdfsConfigFields.java:19)
{code}

> Create unit test to automatically compare HDFS related classes and 
> hdfs-default.xml
> ---
>
> Key: HDFS-7559
> URL: https://issues.apache.org/jira/browse/HDFS-7559
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-7559.001.patch, HDFS-7559.002.patch, 
> HDFS-7559.003.patch, HDFS-7559.004.patch
>
>
> Create a unit test that will automatically compare the fields in the various 
> HDFS related classes and hdfs-default.xml. It should throw an error if a 
> property is missing in either the class or the file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8403) Eliminate retries in TestFileCreation#testOverwriteOpenForWrite

2015-05-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8403:

Attachment: HDFS-8403.02.patch

v2 patch to remove some unintentional whitespace-only changes.

> Eliminate retries in TestFileCreation#testOverwriteOpenForWrite
> ---
>
> Key: HDFS-8403
> URL: https://issues.apache.org/jira/browse/HDFS-8403
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8403.01.patch, HDFS-8403.02.patch
>
>
> TestFileCreation#testOverwriteOpenForWrite attempts 5 retries to verify the 
> failure case.
> The retries are not necessary to verify correct behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8381) Reduce time taken for complete HDFS unit test run

2015-05-14 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544341#comment-14544341
 ] 

Arpit Agarwal commented on HDFS-8381:
-

Thanks, HDFS-8403 fixes {{TestFileCreation#testOverwriteOpenForWrite}}.

> Reduce time taken for complete HDFS unit test run
> -
>
> Key: HDFS-8381
> URL: https://issues.apache.org/jira/browse/HDFS-8381
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Arpit Agarwal
>
> HDFS unit tests take a long time to run. Our unit tests are more like 
> system/integration tests since we spin up a MiniDFSCluster for individual 
> test cases. A number of tests have sleeps which further adds to the run time.
> A better option is to use more fine-grained unit tests specific to individual 
> classes. I did not find any existing Jiras for this so filing one to track 
> this work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-05-14 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544340#comment-14544340
 ] 

Jakob Homan commented on HDFS-8180:
---

Hey Santhosh-
   Thanks for the update.  A couple minor changes and I'll be ready to commit 
this:

* The two tests have quite a lot (say 70%) of duplicated code.  Can we have 
TestSWebHdfsFileContextMainOperations extend 
TestWebHdfsFileContextMainOperations?  This is particularly true since there 
are comments about the need to add more testing/verification on WebHDFS versus 
HDFS behavior in both tests.  Not sure if you'll run into problems with the 
JUnit {{@BeforeClass}} annotations.  If so, just go ahead and split out the 
common code to another, non-test class.
* The comments about WebHDFS/HDFS behavior are formatted incorrectly and run 
into their function definitions:
{noformat}  @Test
  /** Test FileContext APIs when symlinks are not supported
   * TODO: Open separate JIRA for full support of the Symlink in webhdfs
   * */ public void testUnsupportedSymlink() throws IOException {{noformat}
* Typo: clusterSetupAtBegining > clusterSetupAtBeginning
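
On the first suggestion, a minimal sketch of the inheritance (class names from
the patch; the setup body is an assumption):
{code}
import org.junit.BeforeClass;

public class TestSWebHdfsFileContextMainOperations
    extends TestWebHdfsFileContextMainOperations {

  // JUnit 4 runs superclass @BeforeClass methods before subclass ones, so the
  // swebhdfs variant can layer its SSL/cluster specifics on top of the shared
  // setup. Caveat: a subclass method with the SAME name shadows the parent's
  // and stops it from running -- one of the @BeforeClass pitfalls alluded to
  // above, in which case a plain non-test helper class is the way out.
  @BeforeClass
  public static void secureClusterSetupAtBeginning() throws Exception {
    // swebhdfs-specific MiniDFSCluster + SSL configuration would go here
  }
}
{code}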



> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8403) Eliminate retries in TestFileCreation#testOverwriteOpenForWrite

2015-05-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8403:

Attachment: HDFS-8403.01.patch

Add a test-only config to skip create retries in the NN client proxy.

It's a little ugly, so I am open to better suggestions. It cuts the test 
runtime from ~300 seconds to 3 seconds for me.

I didn't look at the other test cases in the same class; they may benefit 
from a similar fix.

> Eliminate retries in TestFileCreation#testOverwriteOpenForWrite
> ---
>
> Key: HDFS-8403
> URL: https://issues.apache.org/jira/browse/HDFS-8403
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8403.01.patch
>
>
> TestFileCreation#testOverwriteOpenForWrite attempts 5 retries to verify the 
> failure case.
> The retries are not necessary to verify correct behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8403) Eliminate retries in TestFileCreation#testOverwriteOpenForWrite

2015-05-14 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-8403:
---

 Summary: Eliminate retries in 
TestFileCreation#testOverwriteOpenForWrite
 Key: HDFS-8403
 URL: https://issues.apache.org/jira/browse/HDFS-8403
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


TestFileCreation#testOverwriteOpenForWrite attempts 5 retries to verify the 
failure case.

The retries are not necessary to verify correct behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8157) Writes to RAM DISK reserve locked memory for block files

2015-05-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544331#comment-14544331
 ] 

Hadoop QA commented on HDFS-8157:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 59s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 8 new or modified test files. |
| {color:green}+1{color} | javac |   7m 33s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 39s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 12s | The applied patch generated  3 
new checkstyle issues (total was 276, now 275). |
| {color:red}-1{color} | whitespace |   0m  6s | The patch has 2  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  4s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 13s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 167m 26s | Tests failed in hadoop-hdfs. |
| | | 210m 48s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12732899/HDFS-8157.03.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 05ff54c |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/10982/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10982/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10982/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10982/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/10982/console |


This message was automatically generated.

> Writes to RAM DISK reserve locked memory for block files
> 
>
> Key: HDFS-8157
> URL: https://issues.apache.org/jira/browse/HDFS-8157
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8157.01.patch, HDFS-8157.02.patch, 
> HDFS-8157.03.patch
>
>
> Per discussion on HDFS-6919, the first step is that writes to RAM disk will 
> reserve locked memory via the FsDatasetCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8246) Get HDFS file name based on block pool id and block id

2015-05-14 Thread feng xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544325#comment-14544325
 ] 

feng xu commented on HDFS-8246:
---

I don't see an in-memory structure that maps a block to snapshots directly, 
and it may take multiple round trips to get the snapshots for a given block, 
so I'd rather not return snapshots to a performance-critical application that 
does not need them. How about mapping /.reserved/.blockIdToFiles/poolid_blockid 
to the current hdfs file, and /.reserved/.blockIdToFiles/poolid_blockid/all to 
the current hdfs file plus snapshots?

> Get HDFS file name based on block pool id and block id
> --
>
> Key: HDFS-8246
> URL: https://issues.apache.org/jira/browse/HDFS-8246
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: HDFS, hdfs-client, namenode
>Reporter: feng xu
>Assignee: feng xu
>  Labels: BB2015-05-TBR
> Attachments: HDFS-8246.0.patch
>
>
> This feature provides HDFS shell command and C/Java API to retrieve HDFS file 
> name based on block pool id and block id.
> 1. The Java API in class DistributedFileSystem
> public String getFileName(String poolId, long blockId) throws IOException
> 2. The C API in hdfs.c
> char* hdfsGetFileName(hdfsFS fs, const char* poolId, int64_t blockId)
> 3. The HDFS shell command 
>  hdfs dfs [generic options] -fn  
> This feature is useful if you have an HDFS block file name in the local file 
> system and want to find out the related HDFS file name in the HDFS name space 
> (http://stackoverflow.com/questions/10881449/how-to-find-file-from-blockname-in-hdfs-hadoop).
> Each HDFS block file name in the local file system contains both the block 
> pool id and the block id; for the sample HDFS block file name 
> /hdfs/1/hadoop/hdfs/data/current/BP-97622798-10.3.11.84-1428081035160/current/finalized/subdir0/subdir0/blk_1073741825,
> the block pool id is BP-97622798-10.3.11.84-1428081035160 and the block id 
> is 1073741825. The block pool id is uniquely related to an HDFS name 
> node/name space, and the block id is uniquely related to an HDFS file within 
> an HDFS name node/name space, so the combination of a block pool id and a 
> block id uniquely identifies an HDFS file name.
> The shell command and C/Java API do not map the block pool id to a name node, 
> so it is the user's responsibility to talk to the correct name node in a 
> federated environment that has multiple name nodes. The block pool id is used 
> by the name node to check whether the user is talking to the correct name node.
> The implementation is straightforward. The client request to get HDFS file 
> name reaches the new method String getFileName(String poolId, long blockId) 
> in FSNamesystem in name node through RPC,  and the new method does the 
> followings,
> (1)   Validate the block pool id.
> (2)   Create Block  based on the block id.
> (3)   Get BlockInfoContiguous from Block.
> (4)   Get BlockCollection from BlockInfoContiguous.
> (5)   Get file name from BlockCollection.
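
A hedged sketch of steps (1)-(5) above (class names follow the 2.7-era NN
internals, but this is illustrative rather than the attached patch):
{code}
// Inside FSNamesystem: resolve a (poolId, blockId) pair to the owning file.
String getFileName(String poolId, long blockId) throws IOException {
  if (!getBlockPoolId().equals(poolId)) {          // (1) validate pool id
    throw new IOException("Unknown block pool: " + poolId);
  }
  Block b = new Block(blockId);                    // (2) build the block key
  BlockInfoContiguous info =
      getBlockManager().getStoredBlock(b);         // (3) look up block info
  if (info == null) {
    return null;                                   //     block unknown here
  }
  BlockCollection bc = info.getBlockCollection();  // (4) owning file
  return bc.getName();                             // (5) full path
}
{code}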



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8402) Fsck exit codes are not reliable

2015-05-14 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-8402:
--
Status: Patch Available  (was: Open)

> Fsck exit codes are not reliable
> 
>
> Key: HDFS-8402
> URL: https://issues.apache.org/jira/browse/HDFS-8402
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-8402.patch
>
>
> HDFS-6663 added the ability to check specific blocks.  The exit code is 
> non-deterministically based on the state (corrupt, healthy, etc) of the last 
> displayed block's last storage location - instead of whether any of the 
> checked blocks' storages are corrupt.  Blocks with decommissioning or 
> decommissioned nodes should not be flagged as an error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8402) Fsck exit codes are not reliable

2015-05-14 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-8402:
--
Attachment: HDFS-8402.patch

Prior to blockId checking (i.e. with path-based scans), fsck determined the 
exit code based on whether the last line contained HEALTHY or CORRUPT. This 
doesn't make sense when displaying multiple blocks with multiple storages. 
Modified the NN's blockId scans to return a final line similar to path-based 
scans. Removed the recent logic that flagged blocks with decommissioning or 
decommissioned nodes as an error.

The real motivation for this patch is to use {{bm.getStorages(block)}} instead 
of directly accessing the storages. This altered the order of the storages, 
which broke the tests. The tests were specifically coded (grumble) to ensure 
the last displayed storage was in the state expected by the test.
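
A toy illustration of the fragility being fixed (the strings are made up, not
real fsck output):
{code}
// Exit code derived from the last printed line only: a corrupt first block
// is masked by a healthy last one.
String[] lines = { "blk_1 ... CORRUPT", "blk_2 ... HEALTHY" };
String last = lines[lines.length - 1];
int exitCode = last.contains("CORRUPT") ? 1 : 0; // 0 despite blk_1 corrupt
{code}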

> Fsck exit codes are not reliable
> 
>
> Key: HDFS-8402
> URL: https://issues.apache.org/jira/browse/HDFS-8402
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-8402.patch
>
>
> HDFS-6663 added the ability to check specific blocks.  The exit code is 
> non-deterministically based on the state (corrupt, healthy, etc) of the last 
> displayed block's last storage location - instead of whether any of the 
> checked blocks' storages are corrupt.  Blocks with decommissioning or 
> decommissioned nodes should not be flagged as an error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8397) Refactor the error handling code in DataStreamer

2015-05-14 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-8397:
--
Attachment: h8397_20150514.patch

h8397_20150514.patch: fixes some bugs.

> Refactor the error handling code in DataStreamer
> 
>
> Key: HDFS-8397
> URL: https://issues.apache.org/jira/browse/HDFS-8397
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h8397_20150513.patch, h8397_20150514.patch
>
>
> DataStreamer handles (1) bad datanode, (2) restarting datanode and (3) 
> datanode replacement and keeps various state and indexes.  This issue is to 
> clean up the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

