[jira] [Commented] (HDFS-7162) Wrong path when deleting through fuse-dfs a file which already exists in trash

2014-10-02 Thread Chengbing Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156180#comment-14156180
 ] 

Chengbing Liu commented on HDFS-7162:
-

I think there are some misunderstandings, probably because the title is not 
quite clear, so let me clarify what the patch actually does.

Two problems are fixed in HDFS-7162.2.patch:
- Say we want to delete the file {{/path/to/file}}, and somehow the file 
{{/user/yourname/.Trash/Current/path/to/file}} already exists. We expect the file 
to be moved to {{/user/yourname/.Trash/Current/path/to/file.1}}. What the code 
actually did was move the file to {{/user/yourname/.Trash/Current/path/tofile.1}}, 
where a slash is missing.
- When judging whether the file to be deleted ({{abs_path}}) is already in the 
trash, we compare {{trash_base}} with {{abs_path}}. The problem is exactly as 
Colin has pointed out. But I don't think we can simply add a slash to the end of 
{{trash_base}}, since the given {{abs_path}} can be 
{{/user/yourname/.Trash/Current}} itself, with no slash at the end. In that case, 
adding a slash to the end of {{trash_base}} would make it impossible to delete the 
whole {{/user/yourname/.Trash/Current}} directory. (A rough sketch of the intended 
path handling follows.)
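To make the intended path handling concrete, here is a rough sketch. The real 
fuse-dfs code is C, and the names {{trashBase}}/{{absPath}} are only illustrative; 
this just shows joining path components with exactly one slash and treating the 
trash root itself (no trailing slash) as "in the trash":
{code:java}
// Illustrative sketch only; the actual fuse-dfs implementation is C.
public final class TrashPathUtil {
  private TrashPathUtil() {}

  /** Join two path components, inserting exactly one '/' between them. */
  static String join(String base, String rel) {
    String b = base.endsWith("/") ? base.substring(0, base.length() - 1) : base;
    String r = rel.startsWith("/") ? rel.substring(1) : rel;
    return b + "/" + r;                       // e.g. ".../Current" + "path/to/file.1"
  }

  /** True if absPath is the trash root itself or anything underneath it. */
  static boolean isInTrash(String trashBase, String absPath) {
    // Exact match covers deleting /user/name/.Trash/Current itself (no trailing slash);
    // prefix-plus-slash avoids false positives such as /user/name/.Trash/CurrentFoo.
    return absPath.equals(trashBase) || absPath.startsWith(trashBase + "/");
  }
}
{code}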

 Wrong path when deleting through fuse-dfs a file which already exists in trash
 --

 Key: HDFS-7162
 URL: https://issues.apache.org/jira/browse/HDFS-7162
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fuse-dfs
Affects Versions: 3.0.0, 2.5.1
Reporter: Chengbing Liu
Assignee: Chengbing Liu
 Attachments: HDFS-7162.2.patch, HDFS-7162.patch


 HDFS-4913 lacks a slash when renaming an existing trash file. Very small fix for 
 this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7175) Client-side SocketTimeoutException during Fsck

2014-10-02 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-7175:

Attachment: HDFS-7175.2.patch

bq. Doing this every 100 is way too frequent.
I agree that doing this every 100 causes a slowdown. I updated the patch to do 
this every 10K instead. [~aw], is every 10K good for you?
Since fsck should not fail with a SocketTimeoutException in any environment and 
the timeout is not configurable (hard-coded to 60 sec), I don't think every 100K 
or more is a good idea. (A rough sketch of the periodic flush is below.)
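For illustration, a minimal sketch of the approach being discussed: flush the fsck 
HTTP response every N items so the client's 60-second read timeout never fires. 
This is not the attached patch; the class and method names here are hypothetical, 
and it only assumes a servlet-style PrintWriter for the response.
{code:java}
import java.io.PrintWriter;

class FsckKeepAlive {
  private static final int FLUSH_INTERVAL = 10_000;  // every 10K items, per the comment above
  private final PrintWriter out;
  private long itemsChecked = 0;

  FsckKeepAlive(PrintWriter out) {
    this.out = out;
  }

  /** Call once per file/block checked by fsck. */
  void tick() {
    if (++itemsChecked % FLUSH_INTERVAL == 0) {
      // Flushing even an empty buffer pushes the HTTP response headers (and any
      // buffered bytes) onto the wire, so the client sees activity and its
      // hard-coded read timeout does not expire while the scan is still running.
      out.flush();
    }
  }
}
{code}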

 Client-side SocketTimeoutException during Fsck
 --

 Key: HDFS-7175
 URL: https://issues.apache.org/jira/browse/HDFS-7175
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Carl Steinbach
Assignee: Akira AJISAKA
 Attachments: HDFS-7175.2.patch, HDFS-7175.patch, HDFS-7175.patch


 HDFS-2538 disabled status reporting for the fsck command (it can optionally 
 be enabled with the -showprogress option). We have observed that without 
 status reporting the client will abort with read timeout:
 {noformat}
 [hdfs@lva1-hcl0030 ~]$ hdfs fsck / 
 Connecting to namenode via http://lva1-tarocknn01.grid.linkedin.com:50070
 14/09/30 06:03:41 WARN security.UserGroupInformation: 
 PriviledgedActionException as:h...@grid.linkedin.com (auth:KERBEROS) 
 cause:java.net.SocketTimeoutException: Read timed out
 Exception in thread main java.net.SocketTimeoutException: Read timed out
   at java.net.SocketInputStream.socketRead0(Native Method)
   at java.net.SocketInputStream.read(SocketInputStream.java:152)
   at java.net.SocketInputStream.read(SocketInputStream.java:122)
   at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
   at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
   at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
   at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:687)
   at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
   at 
 sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
   at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:312)
   at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
   at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:149)
   at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:146)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
   at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:145)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:346)
 {noformat}
 Since there's nothing for the client to read it will abort if the time 
 required to complete the fsck operation is longer than the client's read 
 timeout setting.
 I can think of a couple ways to fix this:
 # Set an infinite read timeout on the client side (not a good idea!).
 # Have the server-side write (and flush) zeros to the wire and instruct the 
 client to ignore these characters instead of echoing them.
 # It's possible that flushing an empty buffer on the server-side will trigger 
 an HTTP response with a zero length payload. This may be enough to keep the 
 client from hanging up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7175) Client-side SocketTimeoutException during Fsck

2014-10-02 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-7175:

Status: Patch Available  (was: Open)

 Client-side SocketTimeoutException during Fsck
 --

 Key: HDFS-7175
 URL: https://issues.apache.org/jira/browse/HDFS-7175
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Carl Steinbach
Assignee: Akira AJISAKA
 Attachments: HDFS-7175.2.patch, HDFS-7175.patch, HDFS-7175.patch


 HDFS-2538 disabled status reporting for the fsck command (it can optionally 
 be enabled with the -showprogress option). We have observed that without 
 status reporting the client will abort with read timeout:
 {noformat}
 [hdfs@lva1-hcl0030 ~]$ hdfs fsck / 
 Connecting to namenode via http://lva1-tarocknn01.grid.linkedin.com:50070
 14/09/30 06:03:41 WARN security.UserGroupInformation: 
 PriviledgedActionException as:h...@grid.linkedin.com (auth:KERBEROS) 
 cause:java.net.SocketTimeoutException: Read timed out
 Exception in thread main java.net.SocketTimeoutException: Read timed out
   at java.net.SocketInputStream.socketRead0(Native Method)
   at java.net.SocketInputStream.read(SocketInputStream.java:152)
   at java.net.SocketInputStream.read(SocketInputStream.java:122)
   at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
   at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
   at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
   at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:687)
   at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
   at 
 sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
   at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:312)
   at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
   at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:149)
   at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:146)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
   at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:145)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:346)
 {noformat}
 Since there's nothing for the client to read it will abort if the time 
 required to complete the fsck operation is longer than the client's read 
 timeout setting.
 I can think of a couple ways to fix this:
 # Set an infinite read timeout on the client side (not a good idea!).
 # Have the server-side write (and flush) zeros to the wire and instruct the 
 client to ignore these characters instead of echoing them.
 # It's possible that flushing an empty buffer on the server-side will trigger 
 an HTTP response with a zero length payload. This may be enough to keep the 
 client from hanging up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7175) Client-side SocketTimeoutException during Fsck

2014-10-02 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156224#comment-14156224
 ] 

Akira AJISAKA commented on HDFS-7175:
-

bq. I'll test the patch in my environment.
I've tested it on my VM and confirmed that flushing an empty buffer prevents the 
SocketTimeoutException.

 Client-side SocketTimeoutException during Fsck
 --

 Key: HDFS-7175
 URL: https://issues.apache.org/jira/browse/HDFS-7175
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Carl Steinbach
Assignee: Akira AJISAKA
 Attachments: HDFS-7175.2.patch, HDFS-7175.patch, HDFS-7175.patch


 HDFS-2538 disabled status reporting for the fsck command (it can optionally 
 be enabled with the -showprogress option). We have observed that without 
 status reporting the client will abort with read timeout:
 {noformat}
 [hdfs@lva1-hcl0030 ~]$ hdfs fsck / 
 Connecting to namenode via http://lva1-tarocknn01.grid.linkedin.com:50070
 14/09/30 06:03:41 WARN security.UserGroupInformation: 
 PriviledgedActionException as:h...@grid.linkedin.com (auth:KERBEROS) 
 cause:java.net.SocketTimeoutException: Read timed out
 Exception in thread main java.net.SocketTimeoutException: Read timed out
   at java.net.SocketInputStream.socketRead0(Native Method)
   at java.net.SocketInputStream.read(SocketInputStream.java:152)
   at java.net.SocketInputStream.read(SocketInputStream.java:122)
   at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
   at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
   at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
   at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:687)
   at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
   at 
 sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
   at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:312)
   at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
   at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:149)
   at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:146)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
   at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:145)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:346)
 {noformat}
 Since there's nothing for the client to read it will abort if the time 
 required to complete the fsck operation is longer than the client's read 
 timeout setting.
 I can think of a couple ways to fix this:
 # Set an infinite read timeout on the client side (not a good idea!).
 # Have the server-side write (and flush) zeros to the wire and instruct the 
 client to ignore these characters instead of echoing them.
 # It's possible that flushing an empty buffer on the server-side will trigger 
 an HTTP response with a zero length payload. This may be enough to keep the 
 client from hanging up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7175) Client-side SocketTimeoutException during Fsck

2014-10-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156227#comment-14156227
 ] 

Hadoop QA commented on HDFS-7175:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12672511/HDFS-7175.2.patch
  against trunk revision 9e40de6.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8297//console

This message is automatically generated.

 Client-side SocketTimeoutException during Fsck
 --

 Key: HDFS-7175
 URL: https://issues.apache.org/jira/browse/HDFS-7175
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Carl Steinbach
Assignee: Akira AJISAKA
 Attachments: HDFS-7175.2.patch, HDFS-7175.patch, HDFS-7175.patch


 HDFS-2538 disabled status reporting for the fsck command (it can optionally 
 be enabled with the -showprogress option). We have observed that without 
 status reporting the client will abort with read timeout:
 {noformat}
 [hdfs@lva1-hcl0030 ~]$ hdfs fsck / 
 Connecting to namenode via http://lva1-tarocknn01.grid.linkedin.com:50070
 14/09/30 06:03:41 WARN security.UserGroupInformation: 
 PriviledgedActionException as:h...@grid.linkedin.com (auth:KERBEROS) 
 cause:java.net.SocketTimeoutException: Read timed out
 Exception in thread main java.net.SocketTimeoutException: Read timed out
   at java.net.SocketInputStream.socketRead0(Native Method)
   at java.net.SocketInputStream.read(SocketInputStream.java:152)
   at java.net.SocketInputStream.read(SocketInputStream.java:122)
   at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
   at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
   at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
   at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:687)
   at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
   at 
 sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
   at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:312)
   at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
   at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:149)
   at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:146)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
   at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:145)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:346)
 {noformat}
 Since there's nothing for the client to read it will abort if the time 
 required to complete the fsck operation is longer than the client's read 
 timeout setting.
 I can think of a couple ways to fix this:
 # Set an infinite read timeout on the client side (not a good idea!).
 # Have the server-side write (and flush) zeros to the wire and instruct the 
 client to ignore these characters instead of echoing them.
 # It's possible that flushing an empty buffer on the server-side will trigger 
 an HTTP response with a zero length payload. This may be enough to keep the 
 client from hanging up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3107) HDFS truncate

2014-10-02 Thread dhruba borthakur (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156265#comment-14156265
 ] 

dhruba borthakur commented on HDFS-3107:


Hi KonstantinS, I have been following this jira, mostly as a passive observer.

Can you please explain the use-case for truncate? You might have already 
explained this earlier, but I would appreciate it a lot if you could elaborate 
again on why you need truncate.

From your comments, it sounds like you have a database layer on top of HDFS and 
the database is using an HDFS file as its transaction log, but I am not able to 
follow the rest of the story.

 HDFS truncate
 -

 Key: HDFS-3107
 URL: https://issues.apache.org/jira/browse/HDFS-3107
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Lei Chang
Assignee: Plamen Jeliazkov
 Attachments: HDFS-3107.patch, HDFS-3107.patch, HDFS-3107.patch, 
 HDFS-3107.patch, HDFS-3107.patch, HDFS_truncate.pdf, HDFS_truncate.pdf, 
 HDFS_truncate_semantics_Mar15.pdf, HDFS_truncate_semantics_Mar21.pdf, 
 editsStored

   Original Estimate: 1,344h
  Remaining Estimate: 1,344h

 Systems with transaction support often need to undo changes made to the 
 underlying storage when a transaction is aborted. Currently HDFS does not 
 support truncate (a standard POSIX operation), which is the reverse of append, 
 so upper-layer applications must use ugly workarounds (such as keeping track of 
 the discarded byte range per file in a separate metadata store, and periodically 
 running a vacuum process to rewrite compacted files) to overcome this limitation 
 of HDFS.
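To make the transaction-log use-case concrete, here is a rough sketch of a writer 
that rolls the log back to its last committed offset, assuming a 
{{truncate(Path, long)}} call of the kind proposed in this jira (not existing API 
at the time of this discussion); the class and field names are hypothetical:
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class TxnLog {
  private final FileSystem fs;
  private final Path logPath;
  private long committedLength;   // end offset of the last committed transaction

  TxnLog(FileSystem fs, Path logPath) throws IOException {
    this.fs = fs;
    this.logPath = logPath;
    this.committedLength = fs.getFileStatus(logPath).getLen();
  }

  void append(byte[] record) throws IOException {
    try (FSDataOutputStream out = fs.append(logPath)) {
      out.write(record);
    }
  }

  void commit() throws IOException {
    committedLength = fs.getFileStatus(logPath).getLen();
  }

  /** Undo everything appended since the last commit by truncating the file. */
  void abort() throws IOException {
    // Without truncate, the discarded byte range would have to be tracked in a
    // separate metadata store and cleaned up by a periodic rewrite/vacuum pass.
    fs.truncate(logPath, committedLength);
  }
}
{code}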



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6927) Add unit tests

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156333#comment-14156333
 ] 

Hudson commented on HDFS-6927:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-6927. Initial unit tests for Lazy Persist files. (Arpit Agarwal) 
(aagarwal: rev 3f64c4aaf00d92659ae992bfe7fe8403b4013ae6)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt


 Add unit tests
 --

 Key: HDFS-6927
 URL: https://issues.apache.org/jira/browse/HDFS-6927
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6927.01.patch


 Add a bunch of unit tests to cover flag persistence, propagation to DN, 
 ability to write replicas to RAM disk, lazy writes to disk and eviction from 
 RAM disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6930) Improve replica eviction from RAM disk

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156315#comment-14156315
 ] 

Hudson commented on HDFS-6930:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-6930. Improve replica eviction from RAM disk. (Arpit Agarwal) (arp: rev 
cb9b485075ce773f2d6189aa2f54bbc69aad4dab)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyWriteReplicaTracker.java


 Improve replica eviction from RAM disk
 --

 Key: HDFS-6930
 URL: https://issues.apache.org/jira/browse/HDFS-6930
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6930.01.patch, HDFS-6930.02.patch


 The current replica eviction scheme is inefficient since it performs multiple 
 file operations in the context of block allocation.
 A better implementation would be asynchronous eviction when free space on RAM 
 disk falls below a low watermark to make block allocation faster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6921) Add LazyPersist flag to FileStatus

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156326#comment-14156326
 ] 

Hudson commented on HDFS-6921:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-6921. Add LazyPersist flag to FileStatus. (Arpit Agarwal) (aagarwal: rev 
a7bcc9535860214380e235641d1d5d2dd15aee58)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CreateFlag.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java


 Add LazyPersist flag to FileStatus
 --

 Key: HDFS-6921
 URL: https://issues.apache.org/jira/browse/HDFS-6921
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6921.01.patch, HDFS-6921.02.patch


 A new flag will be added to FileStatus to indicate that a file can be lazily 
 persisted to disk i.e. trading reduced durability for better write 
 performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6931) Move lazily persisted replicas to finalized directory on DN startup

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156314#comment-14156314
 ] 

Hudson commented on HDFS-6931:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-6931. Move lazily persisted replicas to finalized directory on DN startup. 
(Arpit Agarwal) (arp: rev c92837aeab5188f6171d4016f91b3b4936a66beb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt


 Move lazily persisted replicas to finalized directory on DN startup
 ---

 Key: HDFS-6931
 URL: https://issues.apache.org/jira/browse/HDFS-6931
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6931.01.patch


 On restart the DN should move replicas from the {{current/lazyPersist/}} 
 directory to {{current/finalized}}. Duplicate replicas of the same block 
 should be deleted from RAM disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7108) Fix unit test failures in SimulatedFsDataset

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156338#comment-14156338
 ] 

Hudson commented on HDFS-7108:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-7108. Fix unit test failures in SimulatedFsDataset. (Arpit Agarwal) (arp: 
rev 50b321068d32d404cc9b5d392f0e20d48cabbf2b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt


 Fix unit test failures in SimulatedFsDataset
 

 Key: HDFS-7108
 URL: https://issues.apache.org/jira/browse/HDFS-7108
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-7108.01.patch


 HDFS-7100 introduced a few unit test failures due to 
 UnsupportedOperationException in {{SimulatedFsDataset.getVolume}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6923) Propagate LazyPersist flag to DNs via DataTransferProtocol

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156328#comment-14156328
 ] 

Hudson commented on HDFS-6923:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-6923. Propagate LazyPersist flag to DNs via DataTransferProtocol. (Arpit 
Agarwal) (aagarwal: rev c2354a7f81ff5a48a5b65d25e1036d3e0ba86420)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Sender.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Receiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


 Propagate LazyPersist flag to DNs via DataTransferProtocol
 --

 Key: HDFS-6923
 URL: https://issues.apache.org/jira/browse/HDFS-6923
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6923.01.patch, HDFS-6923.02.patch


 If the LazyPersist flag is set in the file properties, the DFSClient will 
 propagate it to the DataNode via DataTransferProtocol.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7144) Fix findbugs warnings in RamDiskReplicaTracker

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156323#comment-14156323
 ] 

Hudson commented on HDFS-7144:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-7144. Fix findbugs warnings in RamDiskReplicaTracker. (Contributed by Tsz 
Wo Nicholas Sze) (arp: rev 364e60b1691a4d7b2f745b8ebf78177f254a4287)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaTracker.java


 Fix findbugs warnings in RamDiskReplicaTracker
 --

 Key: HDFS-7144
 URL: https://issues.apache.org/jira/browse/HDFS-7144
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Fix For: 3.0.0

 Attachments: h7144_20140925.patch


 Two more findbugs warnings:
 - Bad practice warnings:
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaTracker$RamDiskReplica.deleteSavedFiles()
 ignores the exceptional return value of java.io.File.delete().
 Bug type: RV_RETURN_VALUE_IGNORED_BAD_PRACTICE
 In class 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaTracker$RamDiskReplica
 In method 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaTracker$RamDiskReplica.deleteSavedFiles()
 Called method: java.io.File.delete()
 At RamDiskReplicaTracker.java:[line 122]; another occurrence at 
 RamDiskReplicaTracker.java:[line 127]
 - Dodgy warnings:
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaLruTracker$RamDiskReplicaLru
 doesn't override RamDiskReplicaTracker$RamDiskReplica.equals(Object).
 Bug type: EQ_DOESNT_OVERRIDE_EQUALS
 In class 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaLruTracker$RamDiskReplicaLru
 Did you intend to override 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaTracker$RamDiskReplica.equals(Object)?
 At RamDiskReplicaLruTracker.java:[lines 37-42]
 (A sketch of possible fixes for both warnings follows.)
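For illustration only (not necessarily what the attached h7144_20140925.patch 
does), the usual fixes for these two findbugs patterns are to check the boolean 
returned by {{File.delete()}} and to override {{equals}}/{{hashCode}} consistently. 
A simplified stand-in class showing both:
{code:java}
import java.io.File;
import java.util.Objects;

class RamDiskReplicaSketch {
  private final String blockPoolId;
  private final long blockId;
  File savedBlockFile;
  File savedMetaFile;

  RamDiskReplicaSketch(String blockPoolId, long blockId) {
    this.blockPoolId = blockPoolId;
    this.blockId = blockId;
  }

  // RV_RETURN_VALUE_IGNORED_BAD_PRACTICE: check (or at least log) delete()'s result.
  void deleteSavedFiles() {
    if (savedBlockFile != null && !savedBlockFile.delete()) {
      System.err.println("Failed to delete " + savedBlockFile);
    }
    if (savedMetaFile != null && !savedMetaFile.delete()) {
      System.err.println("Failed to delete " + savedMetaFile);
    }
  }

  // EQ_DOESNT_OVERRIDE_EQUALS: define equality explicitly (and keep hashCode in sync)
  // so subclasses do not silently inherit an inconsistent notion of equality.
  @Override
  public boolean equals(Object other) {
    if (this == other) return true;
    if (other == null || getClass() != other.getClass()) return false;
    RamDiskReplicaSketch that = (RamDiskReplicaSketch) other;
    return blockId == that.blockId && Objects.equals(blockPoolId, that.blockPoolId);
  }

  @Override
  public int hashCode() {
    return Objects.hash(blockPoolId, blockId);
  }
}
{code}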



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6581) Write to single replica in memory

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156319#comment-14156319
 ] 

Hudson commented on HDFS-6581:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-6950. Add Additional unit tests for HDFS-6581. (Contributed by Xiaoyu Yao) 
(arp: rev 762b04e9943d6a05e1130fc81ada5b5dc8baab2c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
HDFS-7064. Fix unit test failures in HDFS-6581 branch. (Contributed by Xiaoyu 
Yao) (arp: rev 4603e4481f0486afcce6b106d4a92a6e90e5b6d9)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataDirs.java
HDFS-7079. Few more unit test fixes for HDFS-6581. (Arpit Agarwal) (arp: rev 
dcbc46730131a1bdf8416efeb4571794e5c8e369)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
HDFS-7143. Fix findbugs warnings in HDFS-6581 branch. (Contributed by Tsz Wo 
Nicholas Sze) (arp: rev feda4733a8279485fc0ff1271f9c22bc44f333f6)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
HDFS-7171. Fix Jenkins failures in HDFS-6581 branch. (Arpit Agarwal) (arp: rev 
a45ad330facc56f06ed42eb71304c49ef56dc549)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestStorageMover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
HDFS-6581. Update CHANGES.txt in preparation for trunk merge (arp: rev 
04b08431a3446300f4715cf135f0e60f85e5bf5a)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt


 Write to single replica in memory
 -

 Key: HDFS-6581
 URL: https://issues.apache.org/jira/browse/HDFS-6581
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, namenode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6581.merge.01.patch, HDFS-6581.merge.02.patch, 
 HDFS-6581.merge.03.patch, HDFS-6581.merge.04.patch, HDFS-6581.merge.05.patch, 
 HDFS-6581.merge.06.patch, HDFS-6581.merge.07.patch, HDFS-6581.merge.08.patch, 
 HDFS-6581.merge.09.patch, HDFS-6581.merge.10.patch, HDFS-6581.merge.11.patch, 
 HDFS-6581.merge.12.patch, HDFS-6581.merge.14.patch, HDFS-6581.merge.15.patch, 
 HDFSWriteableReplicasInMemory.pdf, 
 Test-Plan-for-HDFS-6581-Memory-Storage.pdf, 
 Test-Plan-for-HDFS-6581-Memory-Storage.pdf


 Per discussion with the community on HDFS-5851, we will implement writing to 
 a single replica in DN memory via DataTransferProtocol.
 This avoids some of the issues with short-circuit writes, which we can 
 revisit at a later time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7143) Fix findbugs warnings in HDFS-6581 branch

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156321#comment-14156321
 ] 

Hudson commented on HDFS-7143:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-7143. Fix findbugs warnings in HDFS-6581 branch. (Contributed by Tsz Wo 
Nicholas Sze) (arp: rev feda4733a8279485fc0ff1271f9c22bc44f333f6)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java


 Fix findbugs warnings in HDFS-6581 branch
 -

 Key: HDFS-7143
 URL: https://issues.apache.org/jira/browse/HDFS-7143
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: 3.0.0

 Attachments: h7143_20140925.patch


 There are 4 findbugs warnings reported by Jenkins.
 https://builds.apache.org/job/PreCommit-HDFS-Build/8064/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6978) Directory scanner should correctly reconcile blocks on RAM disk

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156317#comment-14156317
 ] 

Hudson commented on HDFS-6978:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-6978. Directory scanner should correctly reconcile blocks on RAM disk. 
(Arpit Agarwal) (arp: rev 9f22fb8c9a10952225e15c7b67b5f77fa44b155d)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyWriteReplicaTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt


 Directory scanner should correctly reconcile blocks on RAM disk
 ---

 Key: HDFS-6978
 URL: https://issues.apache.org/jira/browse/HDFS-6978
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6978.01.patch, HDFS-6978.02.patch


 It used to be very unlikely that the directory scanner encountered two 
 replicas of the same block on different volumes.
 With memory storage, it is very likely to hit this with the following 
 sequence of events:
 # Block is written to RAM disk
 # Lazy writer saves a copy on persistent volume
 # DN attempts to evict the original replica from RAM disk, file deletion 
 fails as the replica is in use.
 # Directory scanner finds a replica on both RAM disk and persistent storage.
 The directory scanner should never delete the block on persistent storage.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6924) Add new RAM_DISK storage type

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156327#comment-14156327
 ] 

Hudson commented on HDFS-6924:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-6924. Add new RAM_DISK storage type. (Arpit Agarwal) (aagarwal: rev 
7f49537ba18f830dff172f5f9c4a387fe7ab410f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataDirs.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/StorageType.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java


 Add new RAM_DISK storage type
 -

 Key: HDFS-6924
 URL: https://issues.apache.org/jira/browse/HDFS-6924
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6924.01.patch


 Add a new RAM_DISK storage type which could be backed by tmpfs/ramfs on Linux 
 or alternative RAM disk on other platforms.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6482) Use block ID-based block layout on datanodes

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156320#comment-14156320
 ] 

Hudson commented on HDFS-6482:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-6482. Fix CHANGES.txt in trunk (arp: rev 
be30c86cc9f71894dc649ed22983e5c42e9b6951)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Use block ID-based block layout on datanodes
 

 Key: HDFS-6482
 URL: https://issues.apache.org/jira/browse/HDFS-6482
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.0.0
Reporter: James Thomas
Assignee: James Thomas
 Fix For: 2.6.0

 Attachments: 6482-design.doc, HDFS-6482.1.patch, HDFS-6482.2.patch, 
 HDFS-6482.3.patch, HDFS-6482.4.patch, HDFS-6482.5.patch, HDFS-6482.6.patch, 
 HDFS-6482.7.patch, HDFS-6482.8.patch, HDFS-6482.9.patch, HDFS-6482.patch, 
 hadoop-24-datanode-dir.tgz


 Right now blocks are placed into directories that are split into many 
 subdirectories when capacity is reached. Instead we can use a block's ID to 
 determine the path it should go in. This eliminates the need for the LDir 
 data structure that facilitates the splitting of directories when they reach 
 capacity, as well as the fields in ReplicaInfo that keep track of a replica's 
 location.
 This is an extension of the work in HDFS-3290.
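A minimal sketch of the idea: derive a fixed two-level directory from the block ID 
instead of filling directories and splitting them. The exact bit ranges below are 
assumptions for illustration; the committed patch defines the real mapping.
{code:java}
import java.io.File;

final class BlockIdLayout {
  private BlockIdLayout() {}

  /** Map a block ID to a stable subdir path such as "subdir17/subdir3". */
  static File idToBlockDir(File finalizedDir, long blockId) {
    int d1 = (int) ((blockId >> 16) & 0x1F);  // 32 first-level subdirs (assumed width)
    int d2 = (int) ((blockId >> 8)  & 0x1F);  // 32 second-level subdirs (assumed width)
    return new File(finalizedDir, "subdir" + d1 + File.separator + "subdir" + d2);
  }
}
{code}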



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6922) Add LazyPersist flag to INodeFile, save it in FsImage and edit logs

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156324#comment-14156324
 ] 

Hudson commented on HDFS-6922:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-6922. Add LazyPersist flag to INodeFile, save it in FsImage and edit logs. 
(Arpit Agarwal) (aagarwal: rev 042b33f20b01aadb5cd03da731ae7a3d94026aac)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/CreateEditsLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileAttributes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/fsimage.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java


 Add LazyPersist flag to INodeFile, save it in FsImage and edit logs
 ---

 Key: HDFS-6922
 URL: https://issues.apache.org/jira/browse/HDFS-6922
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6922.01.patch, HDFS-6922.02.patch


 Support for saving the LazyPersist flag in the FsImage and edit logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6926) DN support for saving replicas to persistent storage and evicting in-memory replicas

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156329#comment-14156329
 ] 

Hudson commented on HDFS-6926:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-6926. DN support for saving replicas to persistent storage and evicting 
in-memory replicas. (Arpit Agarwal) (aagarwal: rev 
eb448e14399e17f11b9e523e4050de245b9b0408)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockRetry.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/AvailableSpaceVolumeChoosingPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyWriteReplicaTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java


 DN support for saving replicas to persistent storage and evicting in-memory 
 replicas
 

 Key: HDFS-6926
 URL: https://issues.apache.org/jira/browse/HDFS-6926
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6926.01.patch


 Add the following:
 # A lazy writer on the DN to move replicas from RAM disk to persistent 
 storage.
 # 'Evict' persisted replicas from RAM disk to make space for new blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6929) NN periodically unlinks lazy persist files with missing replicas from namespace

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156335#comment-14156335
 ] 

Hudson commented on HDFS-6929:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-6929. NN periodically unlinks lazy persist files with missing replicas 
from namespace. (Arpit Agarwal) (aagarwal: rev 
2e987148e02d0087fc70ce5b1ce571d3324bf1dd)
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java


 NN periodically unlinks lazy persist files with missing replicas from 
 namespace
 ---

 Key: HDFS-6929
 URL: https://issues.apache.org/jira/browse/HDFS-6929
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6929.01.patch, HDFS-6929.02.patch


 Occasional data loss is expected when using the lazy persist flag, due to node 
 restarts. The NN will optionally unlink lazy persist files from the namespace 
 to prevent them from showing up as corrupt files.
 This behavior can be turned off with a global option. In the future this may 
 be made a per-file option controllable by the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6977) Delete all copies when a block is deleted from the block space

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156322#comment-14156322
 ] 

Hudson commented on HDFS-6977:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-6977. Delete all copies when a block is deleted from the block space. 
(Arpit Agarwal) (arp: rev ccdf0054a354fc110124b83de742c2ee6076449e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyWriteReplicaTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt


 Delete all copies when a block is deleted from the block space
 --

 Key: HDFS-6977
 URL: https://issues.apache.org/jira/browse/HDFS-6977
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Nathan Yao
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6977.01.patch, HDFS-6977.02.patch, 
 HDFS-6977.03.patch


 When a block is deleted from RAM disk we should also delete the copies 
 written to lazyPersist/.
 Reported by [~xyao]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6925) DataNode should attempt to place replicas on transient storage first if lazyPersist flag is received

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156331#comment-14156331
 ] 

Hudson commented on HDFS-6925:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-6925. DataNode should attempt to place replicas on transient storage first 
if lazyPersist flag is received. (Arpit Agarwal) (aagarwal: rev 
a317bd7b02c37bd57743bfad59593ec12f53f4ed)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/AvailableSpaceVolumeChoosingPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/Replica.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaUnderRecovery.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/BlockReportTestBase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsTransientVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImplAllocator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/RoundRobinVolumeChoosingPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestWriteToReplica.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteBlockGetsBlockLengthHint.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java


 DataNode should attempt to place replicas on transient storage first if 
 lazyPersist flag is received
 

 Key: HDFS-6925
 URL: https://issues.apache.org/jira/browse/HDFS-6925
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
 Environment: If the LazyPersist flag is received via 
 DataTransferProtocol then DN should attempt to place the files on RAM disk 
 first, and failing that on regular disk.
 Support for lazily moving replicas from RAM disk to persistent storage will 
 be added later.
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6925.01.patch, HDFS-6925.02.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6932) Balancer and Mover tools should ignore replicas on RAM_DISK

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156313#comment-14156313
 ] 

Hudson commented on HDFS-6932:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-6932. Balancer and Mover tools should ignore replicas on RAM_DISK. 
(Contributed by Xiaoyu Yao) (arp: rev e8e7fbe81abc64a9ae3d2f3f62c088426073b2bf)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestStorageMover.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/StorageType.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java


 Balancer and Mover tools should ignore replicas on RAM_DISK
 ---

 Key: HDFS-6932
 URL: https://issues.apache.org/jira/browse/HDFS-6932
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Xiaoyu Yao
 Fix For: 3.0.0

 Attachments: HDFS-6932.0.patch, HDFS-6932.1.patch, HDFS-6932.2.patch, 
 HDFS-6932.3.patch


 Per title, balancer and mover should just ignore replicas on RAM disk instead 
 of attempting to move them to other nodes.
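
 As a rough illustration of "ignore rather than move", the tools only need to filter transient replicas out of their candidate sets; the names below are hypothetical and not the actual Balancer/Mover code.
 {code}
import java.util.ArrayList;
import java.util.List;

/** Hedged sketch: drop RAM_DISK replicas before scheduling any moves. */
class IgnoreRamDiskSketch {
  enum StorageType { RAM_DISK, DISK, SSD, ARCHIVE }

  static class Replica {
    final long blockId;
    final StorageType storageType;
    Replica(long blockId, StorageType storageType) {
      this.blockId = blockId;
      this.storageType = storageType;
    }
  }

  static List<Replica> movableReplicas(List<Replica> reported) {
    List<Replica> movable = new ArrayList<Replica>();
    for (Replica r : reported) {
      if (r.storageType != StorageType.RAM_DISK) {  // in-memory replicas are never balanced or moved
        movable.add(r);
      }
    }
    return movable;
  }
}
 {code}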



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6960) Bugfix in LazyWriter, fix test case and some refactoring

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156336#comment-14156336
 ] 

Hudson commented on HDFS-6960:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-6960. Bugfix in LazyWriter, fix test case and some refactoring. (Arpit 
Agarwal) (arp: rev 4cf9afacbe3d0814fb616d238aa9b16b1ae68386)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyWriteReplicaTracker.java


 Bugfix in LazyWriter, fix test case and some refactoring
 

 Key: HDFS-6960
 URL: https://issues.apache.org/jira/browse/HDFS-6960
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, test
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6960.01.patch, HDFS-6960.02.patch


 LazyWriter has a bug. While saving the replica to disk we would save it under 
 {{current/lazyPersist/}}. Instead it should be saved under the appropriate 
 subdirectory e.g. {{current/lazyPersist/subdir1/subdir0/}}.
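
 For context, datanodes map a block ID to a two-level subdirectory under the storage directory. The sketch below shows that style of computation; the shift/mask constants are assumptions for illustration, not a quote of the real DatanodeUtil code.
 {code}
import java.io.File;

/** Hedged sketch of deriving subdir1/subdir0-style paths from a block ID. */
class BlockSubdirSketch {
  static File blockDirFor(File root, long blockId) {
    int d1 = (int) ((blockId >> 16) & 0x1F);   // assumed 32-way fan-out, first level
    int d2 = (int) ((blockId >> 8) & 0x1F);    // assumed 32-way fan-out, second level
    return new File(root, "subdir" + d1 + File.separator + "subdir" + d2);
  }

  public static void main(String[] args) {
    // e.g. current/lazyPersist/subdirX/subdirY rather than dropping the file at the top level
    System.out.println(blockDirFor(new File("current/lazyPersist"), 1073742025L));
  }
}
 {code}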



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7100) Make eviction scheme pluggable

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156312#comment-14156312
 ] 

Hudson commented on HDFS-7100:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-7100. Make eviction scheme pluggable. (Arpit Agarwal) (arp: rev 
b2d5ed36bcb80e2581191dcdc3976e825c959142)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyWriteReplicaTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java


 Make eviction scheme pluggable
 --

 Key: HDFS-7100
 URL: https://issues.apache.org/jira/browse/HDFS-7100
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-7100.01.patch


 We can make the eviction scheme pluggable to help evaluate multiple schemes.
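
 A common shape for this kind of pluggability is an abstract tracker chosen from configuration via reflection; the sketch below is illustrative only and the class names are invented, not the RamDiskReplicaTracker API.
 {code}
import java.util.ArrayDeque;
import java.util.Deque;

/** Hedged sketch: an eviction scheme as a small abstract class with a swappable implementation. */
abstract class EvictionSchemeSketch {
  abstract void onBlockWritten(long blockId);
  abstract void onBlockRead(long blockId);
  abstract Long nextEvictionCandidate();

  /** A trivial FIFO scheme standing in for the default (an LRU-style tracker in the real code). */
  static class FifoScheme extends EvictionSchemeSketch {
    private final Deque<Long> order = new ArrayDeque<Long>();
    @Override void onBlockWritten(long blockId) { order.addLast(blockId); }
    @Override void onBlockRead(long blockId) { /* FIFO ignores read recency */ }
    @Override Long nextEvictionCandidate() { return order.pollFirst(); }
  }

  /** The DN could instantiate whichever class a config key names (key and class names hypothetical). */
  static EvictionSchemeSketch fromConfiguredClass(String implClassName) throws Exception {
    return (EvictionSchemeSketch) Class.forName(implClassName).newInstance();
  }
}
 {code}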



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6928) 'hdfs put' command should accept lazyPersist flag for testing

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156332#comment-14156332
 ] 

Hudson commented on HDFS-6928:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-6928. 'hdfs put' command should accept lazyPersist flag for testing. 
(Arpit Agarwal) (arp: rev bbaa7dc28db75d9b3700c6ff95222d8e1de29c15)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Stat.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* hadoop-common-project/hadoop-common/src/test/resources/testConf.xml


 'hdfs put' command should accept lazyPersist flag for testing
 -

 Key: HDFS-6928
 URL: https://issues.apache.org/jira/browse/HDFS-6928
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Tassapol Athiapinya
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6928.01.patch, HDFS-6928.02.patch, 
 HDFS-6928.03.patch


 Add a '-l' flag to 'hdfs put' which creates the file with the LAZY_PERSIST 
 option.
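
 The same behaviour is also reachable programmatically through the public FileSystem API by passing the LAZY_PERSIST create flag; a hedged usage sketch (the path, buffer size and replication below are just examples):
 {code}
import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class LazyPersistPutExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path dst = new Path("/tmp/lazy-persist-example");          // example destination
    FSDataOutputStream out = fs.create(dst,
        FsPermission.getFileDefault(),
        EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE, CreateFlag.LAZY_PERSIST),
        4096,                        // buffer size
        (short) 1,                   // lazy persist replicas are single-copy
        fs.getDefaultBlockSize(dst),
        null);                       // no progress reporting
    try {
      out.writeBytes("hello, ram disk");
    } finally {
      out.close();
    }
  }
}
 {code}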



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7171) Fix Jenkins failures in HDFS-6581 branch

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156359#comment-14156359
 ] 

Hudson commented on HDFS-7171:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-7171. Fix Jenkins failures in HDFS-6581 branch. (Arpit Agarwal) (arp: rev 
a45ad330facc56f06ed42eb71304c49ef56dc549)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestStorageMover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


 Fix Jenkins failures in HDFS-6581 branch
 

 Key: HDFS-7171
 URL: https://issues.apache.org/jira/browse/HDFS-7171
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-7171.01.patch


 Jenkins flagged a few failures with the latest merge patch.
 {quote}
 Test results: 
 https://builds.apache.org/job/PreCommit-HDFS-Build/8269//testReport/
 Findbugs warnings: 
 https://builds.apache.org/job/PreCommit-HDFS-Build/8269//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6134) Transparent data at rest encryption

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156361#comment-14156361
 ] 

Hudson commented on HDFS-6134:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs following 
merge to branch-2 (arp: rev 2ca93d1fbf0fdcd6b4b5a151261052ac106ac9e1)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-mapreduce-project/CHANGES.txt


 Transparent data at rest encryption
 ---

 Key: HDFS-6134
 URL: https://issues.apache.org/jira/browse/HDFS-6134
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 3.0.0, 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Charles Lamb
 Fix For: 2.6.0

 Attachments: HDFS-6134.001.patch, HDFS-6134.002.patch, 
 HDFS-6134_test_plan.pdf, HDFSDataatRestEncryption.pdf, 
 HDFSDataatRestEncryptionProposal_obsolete.pdf, 
 HDFSEncryptionConceptualDesignProposal-2014-06-20.pdf, 
 fs-encryption.2014-08-18.patch, fs-encryption.2014-08-19.patch


 Because of privacy and security regulations, for many industries, sensitive 
 data at rest must be in encrypted form. For example: the healthcare industry 
 (HIPAA regulations), the card payment industry (PCI DSS regulations) or the 
 US government (FISMA regulations).
 This JIRA aims to provide a mechanism to encrypt HDFS data at rest that can 
 be used transparently by any application accessing HDFS via Hadoop Filesystem 
 Java API, Hadoop libhdfs C library, or WebHDFS REST API.
 The resulting implementation should be able to be used in compliance with 
 different regulation requirements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6894) Add XDR parser method for each NFS response

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156347#comment-14156347
 ] 

Hudson commented on HDFS-6894:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-6894. Add XDR parser method for each NFS response. Contributed by Brandon 
Li. (wheat9: rev 875aa797caee96572162ff59bc50cf97d1195348)
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/ACCESS3Response.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/SETATTR3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/READDIR3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/NFS3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/CREATE3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/LOOKUP3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/RMDIR3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/READDIRPLUS3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/SYMLINK3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/REMOVE3Response.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/GETATTR3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/MKNOD3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/WccData.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtx.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/LINK3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/COMMIT3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/WRITE3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/MKDIR3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/FSSTAT3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/READ3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/WccAttr.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/READLINK3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/RENAME3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/FSINFO3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/PATHCONF3Response.java


 Add XDR parser method for each NFS response
 ---

 Key: HDFS-6894
 URL: https://issues.apache.org/jira/browse/HDFS-6894
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.6.0

 Attachments: HDFS-6894.001.patch, HDFS-6894.001.patch, 
 HDFS-6894.002.patch


 This can be an abstract method in NFS3Response to force the subclasses to 
 implement.
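
 Roughly, the idea is a parsing hook on the base class that every concrete response must provide; the sketch below simplifies the types (a ByteBuffer stands in for the XDR buffer) and is not the real NFS3Response code.
 {code}
import java.nio.ByteBuffer;

/** Hedged sketch: force every response subclass to implement its own deserialization. */
abstract class Nfs3ResponseSketch {
  protected int status;

  /** Abstract hook each response type must implement to read back its own fields. */
  protected abstract void deserializeBody(ByteBuffer xdr);

  final void deserialize(ByteBuffer xdr) {
    status = xdr.getInt();    // common header: NFS status code
    deserializeBody(xdr);     // type-specific payload
  }

  /** Example subclass, loosely modelled on a READ response. */
  static class ReadResponseSketch extends Nfs3ResponseSketch {
    int count;
    boolean eof;

    @Override
    protected void deserializeBody(ByteBuffer xdr) {
      count = xdr.getInt();
      eof = xdr.getInt() != 0;
    }
  }
}
 {code}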



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7084) FsDatasetImpl#copyBlockFiles debug log can be improved

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156355#comment-14156355
 ] 

Hudson commented on HDFS-7084:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-7084. FsDatasetImpl#copyBlockFiles debug log can be improved. (Contributed 
by Xiaoyu Yao) (arp: rev 5e4627d0fb2a2d608c0e67fd6ad835523ed3259d)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt


 FsDatasetImpl#copyBlockFiles debug log can be improved
 --

 Key: HDFS-7084
 URL: https://issues.apache.org/jira/browse/HDFS-7084
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-7084.0.patch


 "addBlock: Moved" should be replaced with "Copied" or "lazyPersistReplica: 
 Copied" to avoid confusion.
 {code}
   static File[] copyBlockFiles(long blockId, long genStamp,
                                File srcMeta, File srcFile, File destRoot)
   {
     ...
     if (LOG.isDebugEnabled()) {
       LOG.debug("addBlock: Moved " + srcMeta + " to " + dstMeta);
       LOG.debug("addBlock: Moved " + srcFile + " to " + dstFile);
     }
   }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7155) Bugfix in createLocatedFileStatus caused by bad merge

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156337#comment-14156337
 ] 

Hudson commented on HDFS-7155:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-7155. Bugfix in createLocatedFileStatus caused by bad merge. (Arpit 
Agarwal) (arp: rev a2d4edacea943c98fe4430e295627cd7535948fc)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java


 Bugfix in createLocatedFileStatus caused by bad merge
 -

 Key: HDFS-7155
 URL: https://issues.apache.org/jira/browse/HDFS-7155
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-7155.01.patch


 FSDirectory.createLocatedFileStatus fails to initialize the blockSize.
 Likely caused by a bad merge.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7158) Reduce the memory usage of WebImageViewer

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156345#comment-14156345
 ] 

Hudson commented on HDFS-7158:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-7158. Reduce the memory usage of WebImageViewer. Contributed by Haohui 
Mai. (wheat9: rev 1f5b42ac4881b734c799bfb527884c0d117929bd)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java


 Reduce the memory usage of WebImageViewer
 -

 Key: HDFS-7158
 URL: https://issues.apache.org/jira/browse/HDFS-7158
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.6.0

 Attachments: HDFS-7158.000.patch, HDFS-7158.001.patch, 
 HDFS-7158.002.patch


 Currently the webimageviewer can take up as much memory as the NN uses in 
 order to serve the WebHDFS requests from the client.
 This jira proposes to optimize the memory usage of webimageviewer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7172) Test data files may be checked out of git with incorrect line endings, causing test failures in TestHDFSCLI.

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156360#comment-14156360
 ] 

Hudson commented on HDFS-7172:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-7172. Test data files may be checked out of git with incorrect line 
endings, causing test failures in TestHDFSCLI. Contributed by Chris Nauroth. 
(wheat9: rev 737f280ddeed58a2b1cd42c29533a01e7c6c3426)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/.gitattributes
Addendum patch for HDFS-7172. Contributed by Chris Nauroth. (wheat9: rev 
8dfe54f6d3e2f14584846de29ee06ed280bc0f0e)
* hadoop-hdfs-project/hadoop-hdfs/pom.xml


 Test data files may be checked out of git with incorrect line endings, 
 causing test failures in TestHDFSCLI.
 

 Key: HDFS-7172
 URL: https://issues.apache.org/jira/browse/HDFS-7172
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Trivial
 Fix For: 2.6.0

 Attachments: HDFS-7172.1.patch, HDFS-7172.rat.patch


 {{TestHDFSCLI}} uses several files at src/test/resources/data* as test input 
 files.  Some of the tests expect a specific length for these files.  If they 
 get checked out of git with CRLF line endings by mistake, then the test 
 assertions will fail.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7079) Few more unit test fixes for HDFS-6581

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156344#comment-14156344
 ] 

Hudson commented on HDFS-7079:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-7079. Few more unit test fixes for HDFS-6581. (Arpit Agarwal) (arp: rev 
dcbc46730131a1bdf8416efeb4571794e5c8e369)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt


 Few more unit test fixes for HDFS-6581
 --

 Key: HDFS-7079
 URL: https://issues.apache.org/jira/browse/HDFS-7079
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-7079.03.patch, HDFS-7079.04.patch


 Fix a few more test cases flagged by Jenkins:
 # TestFsShellCopy
 # TestCopy



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7153) Add storagePolicy to NN edit log during file creation

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156339#comment-14156339
 ] 

Hudson commented on HDFS-7153:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-7153. Add storagePolicy to NN edit log during file creation. (Arpit 
Agarwal) (arp: rev d45e7c7e856c7103752888c0395fa94985cd7670)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java


 Add storagePolicy to NN edit log during file creation
 -

 Key: HDFS-7153
 URL: https://issues.apache.org/jira/browse/HDFS-7153
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.6.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 2.6.0

 Attachments: HDFS-7153.01.patch, HDFS-7153.02.patch, 
 HDFS-7153.03.patch, HDFS-7153.merge.04.patch, HDFS-7153.merge.05.patch, 
 editsStored


 Storage Policy ID is currently not logged in the NN edit log during file 
 creation as part of {{AddOp}}. This is okay for now since we don't have an 
 API to set storage policy during file creation.
 However now that we have storage policies, for HDFS-6581 we are looking into 
 using the feature instead of adding a new field to the INodeFile header. It 
 would be useful to have the ability to save policy on file create.
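
 Conceptually the change amounts to one extra field written with the AddOp and read back on replay; the sketch below shows that read/write symmetry in simplified form and is not the actual FSEditLogOp serialization.
 {code}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

/** Hedged sketch: log a storage policy ID with a file-creation record and recover it on replay. */
class AddOpStoragePolicySketch {
  static byte[] writeAddOp(String path, byte storagePolicyId) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bytes);
    out.writeUTF(path);
    out.writeByte(storagePolicyId);   // the new field persisted at create time
    out.flush();
    return bytes.toByteArray();
  }

  static byte readStoragePolicy(byte[] record) throws IOException {
    DataInputStream in = new DataInputStream(new ByteArrayInputStream(record));
    in.readUTF();                     // skip the path
    return in.readByte();             // replay restores the policy ID
  }
}
 {code}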



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7091) Add forwarding constructor for INodeFile for existing callers

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156341#comment-14156341
 ] 

Hudson commented on HDFS-7091:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-7091. Add forwarding constructor for INodeFile for existing callers. 
(Arpit Agarwal) (arp: rev e79c98c11fa8b4ddd8c63b613698d2d508135e83)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/CreateEditsLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java


 Add forwarding constructor for INodeFile for existing callers
 -

 Key: HDFS-7091
 URL: https://issues.apache.org/jira/browse/HDFS-7091
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode, test
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-7091.01.patch


 Since HDFS-6584 is in trunk we are hitting quite a few merge conflicts.
 Many of the conflicts can be avoided by some minor updates to the branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6991) Notify NN of evicted block before deleting it from RAM disk

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156353#comment-14156353
 ] 

Hudson commented on HDFS-6991:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-6991. Notify NN of evicted block before deleting it from RAM disk. (Arpit 
Agarwal) (arp: rev a18caf7753623a94a7cdb1c607cda79586de08dc)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java


 Notify NN of evicted block before deleting it from RAM disk
 ---

 Key: HDFS-6991
 URL: https://issues.apache.org/jira/browse/HDFS-6991
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6991.01.patch, HDFS-6991.02.patch, 
 HDFS-6991.03.patch


 Couple of bug fixes required around eviction:
 # When evicting a block from RAM disk to persistent storage, the DN should 
 schedule an incremental block report for a 'received' replica on persistent 
 storage.
 # {{BlockManager.processReportedBlock}} needs a fix to correctly update the 
 storage ID to reflect the block moving from RAM_DISK to DISK.
 Found by [~xyao] via HDFS-6950.
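
 The key point is ordering: the NN has to hear about the on-disk replica before the RAM disk copy goes away, otherwise the block can momentarily look missing. A hedged sketch of that ordering with invented interfaces, not the DataNode/BPOfferService API:
 {code}
/** Hedged sketch: report the persisted replica first, delete the transient copy second. */
class EvictionOrderingSketch {
  interface NameNodeNotifier {
    void reportReceivedReplica(long blockId, String storageUuid);
  }

  interface RamDiskStore {
    void deleteReplica(long blockId);
  }

  static void evict(long blockId, String diskStorageUuid,
                    NameNodeNotifier nn, RamDiskStore ramDisk) {
    // 1. Incremental block report: the replica now lives on persistent storage.
    nn.reportReceivedReplica(blockId, diskStorageUuid);
    // 2. Only then drop the copy from RAM disk.
    ramDisk.deleteReplica(blockId);
  }
}
 {code}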



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7064) Fix unit test failures in HDFS-6581 branch

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156354#comment-14156354
 ] 

Hudson commented on HDFS-7064:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-7064. Fix unit test failures in HDFS-6581 branch. (Contributed by Xiaoyu 
Yao) (arp: rev 4603e4481f0486afcce6b106d4a92a6e90e5b6d9)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataDirs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java


 Fix unit test failures in HDFS-6581 branch
 --

 Key: HDFS-7064
 URL: https://issues.apache.org/jira/browse/HDFS-7064
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Xiaoyu Yao
 Fix For: 3.0.0

 Attachments: HDFS-7064.0.patch, HDFS-7064.1.patch, HDFS-7064.2.patch


 Fix test failures in the HDFS-6581 feature branch.
 Jenkins flagged the following failures.
 https://builds.apache.org/job/PreCommit-HDFS-Build/8025//testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7159) Use block storage policy to set lazy persist preference

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156346#comment-14156346
 ] 

Hudson commented on HDFS-7159:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-7159. Use block storage policy to set lazy persist preference. (Arpit 
Agarwal) (arp: rev bb84f1fccb18c6c7373851e05d2451d55e908242)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/fsimage.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Stat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestScrLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/StorageType.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileAttributes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImplAllocator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsTransientVolumeImpl.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 

[jira] [Commented] (HDFS-7176) The namenode usage message doesn't include -rollingupgrade started

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156358#comment-14156358
 ] 

Hudson commented on HDFS-7176:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-7176. The namenode usage message doesn't include -rollingupgrade started 
(cmccabe) (cmccabe: rev dd1b8f2ed8a86871517c730a9f370aca4b763514)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 The namenode usage message doesn't include -rollingupgrade started
 

 Key: HDFS-7176
 URL: https://issues.apache.org/jira/browse/HDFS-7176
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.6.0

 Attachments: HDFS-7176.001.patch


 HDFS-6031 re-added the -rollingUpgrade started option for the NameNode, but 
 the help text still doesn't include it:
 {code}
 Usage: java NameNode [-backup] | 
 [-checkpoint] | 
 [-format [-clusterid cid ] [-force] [-nonInteractive] ] | 
 [-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] | 
 [-rollback] | 
 [-rollingUpgrade <downgrade|rollback> ] | 
 [-finalize] | 
 ...
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6754) TestNamenodeCapacityReport.testXceiverCount may sometimes fail due to lack of retry

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156340#comment-14156340
 ] 

Hudson commented on HDFS-6754:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-6754. TestNamenodeCapacityReport.testXceiverCount may sometimes fail due 
to lack of retry. Contributed by Mit Desai. (kihwal: rev 
3f25d916d5539917092e2f52a8c2df2cfd647c3c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeCapacityReport.java


 TestNamenodeCapacityReport.testXceiverCount may sometimes fail due to lack of 
 retry
 ---

 Key: HDFS-6754
 URL: https://issues.apache.org/jira/browse/HDFS-6754
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Mit Desai
Assignee: Mit Desai
 Fix For: 2.6.0

 Attachments: HDFS-6754.patch, HDFS-6754.patch


 I have seen TestNamenodeCapacityReport.testXceiverCount fail intermittently 
 in our nightly builds with the following error:
 {noformat}
 java.io.IOException: Unable to close file because the last block does not 
 have enough number of replicas.
   at 
 org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2151)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2119)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport.testXceiverCount(TestNamenodeCapacityReport.java:281)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6950) Add Additional unit tests for HDFS-6581

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156348#comment-14156348
 ] 

Hudson commented on HDFS-6950:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-6950. Add Additional unit tests for HDFS-6581. (Contributed by Xiaoyu Yao) 
(arp: rev 762b04e9943d6a05e1130fc81ada5b5dc8baab2c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java


 Add Additional unit tests for HDFS-6581
 ---

 Key: HDFS-6950
 URL: https://issues.apache.org/jira/browse/HDFS-6950
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Fix For: 3.0.0

 Attachments: HDFS-6950.0.patch, HDFS-6950.1.patch, HDFS-6950.2.patch


 Create additional unit tests for HDFS-6581 in addition to existing ones in 
 HDFS-6927.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7080) Fix finalize and upgrade unit test failures

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156351#comment-14156351
 ] 

Hudson commented on HDFS-7080:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-7080. Fix finalize and upgrade unit test failures. (Arpit Agarwal) (arp: 
rev 4eab083b1b7faf4485274d1d30256cde08e11915)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgrade.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSFinalize.java


 Fix finalize and upgrade unit test failures
 ---

 Key: HDFS-7080
 URL: https://issues.apache.org/jira/browse/HDFS-7080
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-7080.01.patch, HDFS-7080.02.patch


 Fix following test failures in the branch:
 # TestDFSFinalize
 # TestDFSUpgrade



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7071) Updated editsStored and editsStored.xml to bump layout version and add LazyPersist flag

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156342#comment-14156342
 ] 

Hudson commented on HDFS-7071:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-7071. Updated editsStored and editsStored.xml to bump layout version and 
add LazyPersist flag. (Contributed by Xiaoyu Yao and Arpit Agarwal) (arp: rev 
486a76a39ba236072c2bb22af509a1ae8081093e)
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
HDFS-7071. Undo accidental commit of binary file editsStored. (arp: rev 
8c9860f7c96322908f344d25ef31939739e7df9d)
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored


 Updated editsStored and editsStored.xml to bump layout version and add 
 LazyPersist flag
 ---

 Key: HDFS-7071
 URL: https://issues.apache.org/jira/browse/HDFS-7071
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS-6581
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Fix For: 3.0.0

 Attachments: HDFS-7071.0.patch, HDFS-7071.02.patch, editsStored, 
 editsStored


 TestOfflineEditsViewer needs updating for Lazy_Persist, along with the two 
 reference versions of editsStored (binary) and editsStored.xml in 
 hadoop-hdfs/src/test/resources.
 The fix is to add 
 {code}
  <LAZY_PERSIST>false</LAZY_PERSIST>
 {code}
 to editsStored.xml for the Add/Close OPs and then use the following command to 
 generate the binary file editsStored.
 {code}
 hdfs oev -p binary -i editsStored.xml -o editsStored
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7129) Metrics to track usage of memory for writes

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156350#comment-14156350
 ] 

Hudson commented on HDFS-7129:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/698/])
HDFS-7129. Metrics to track usage of memory for writes. (Contributed by Xiaoyu 
Yao) (arp: rev 5e8b6973527e5f714652641ed95e8a4509e18cfa)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/JMXGet.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


 Metrics to track usage of memory for writes
 ---

 Key: HDFS-7129
 URL: https://issues.apache.org/jira/browse/HDFS-7129
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Xiaoyu Yao
 Fix For: 3.0.0

 Attachments: HDFS-7129.0.patch, HDFS-7129.1.patch, HDFS-7129.2.patch, 
 HDFS-7129.3.patch


 A few metrics to evaluate feature usage and suggest improvements. Thanks to 
 [~sureshms] for some of these suggestions.
 # Number of times a block in memory was read (before being ejected)
 # Average block size for data written to memory tier
 # Time the block was in memory before being ejected
 # Number of blocks written to memory
 # Number of memory writes requested but not satisfied (failed-over to disk)
 # Number of blocks evicted without ever being read from memory
 # Average delay between memory write and disk write (window where a node 
 restart could cause data loss).
 # Replicas written to disk by lazy writer
 # Bytes written to disk by lazy writer
 # Replicas deleted by application before being persisted to disk



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7100) Make eviction scheme pluggable

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156390#comment-14156390
 ] 

Hudson commented on HDFS-7100:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-7100. Make eviction scheme pluggable. (Arpit Agarwal) (arp: rev 
b2d5ed36bcb80e2581191dcdc3976e825c959142)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyWriteReplicaTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java


 Make eviction scheme pluggable
 --

 Key: HDFS-7100
 URL: https://issues.apache.org/jira/browse/HDFS-7100
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-7100.01.patch


 We can make the eviction scheme pluggable to help evaluate multiple schemes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6931) Move lazily persisted replicas to finalized directory on DN startup

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156392#comment-14156392
 ] 

Hudson commented on HDFS-6931:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6931. Move lazily persisted replicas to finalized directory on DN startup. 
(Arpit Agarwal) (arp: rev c92837aeab5188f6171d4016f91b3b4936a66beb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt


 Move lazily persisted replicas to finalized directory on DN startup
 ---

 Key: HDFS-6931
 URL: https://issues.apache.org/jira/browse/HDFS-6931
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6931.01.patch


 On restart the DN should move replicas from the {{current/lazyPersist/}} 
 directory to {{current/finalized}}. Duplicate replicas of the same block 
 should be deleted from RAM disk.
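
 A hedged sketch of what that startup pass could look like; it is deliberately simplified (flat directory, no metadata files) and uses invented names rather than the real BlockPoolSlice code.
 {code}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

/** Hedged sketch: promote lazily persisted replicas to finalized, dropping duplicates. */
class LazyPersistStartupSketch {
  static void promote(File lazyPersistDir, File finalizedDir) throws IOException {
    File[] replicas = lazyPersistDir.listFiles();
    if (replicas == null) {
      return;
    }
    for (File replica : replicas) {
      File target = new File(finalizedDir, replica.getName());
      if (target.exists()) {
        // A copy is already finalized; the duplicate is redundant and can be removed.
        Files.delete(replica.toPath());
      } else {
        Files.move(replica.toPath(), target.toPath(), StandardCopyOption.ATOMIC_MOVE);
      }
    }
  }
}
 {code}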



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6929) NN periodically unlinks lazy persist files with missing replicas from namespace

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156413#comment-14156413
 ] 

Hudson commented on HDFS-6929:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6929. NN periodically unlinks lazy persist files with missing replicas 
from namespace. (Arpit Agarwal) (aagarwal: rev 
2e987148e02d0087fc70ce5b1ce571d3324bf1dd)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 NN periodically unlinks lazy persist files with missing replicas from 
 namespace
 ---

 Key: HDFS-6929
 URL: https://issues.apache.org/jira/browse/HDFS-6929
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6929.01.patch, HDFS-6929.02.patch


 Occasional data loss is expected when using the lazy persist flag due to node 
 restarts. The NN will optionally unlink lazy persist files from the namespace 
 to prevent them from showing up as corrupt files.
 This behavior can be turned off with a global option. In the future this may 
 be made a per-file option controllable by the client.
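
 A hedged sketch of the periodic sweep, with invented interfaces standing in for FSNamesystem; the global config switch is only modelled as a boolean here.
 {code}
import java.util.Iterator;
import java.util.Set;

/** Hedged sketch: periodically unlink lazy-persist files whose replicas are gone, if enabled. */
class LazyPersistScrubberSketch {
  interface Namespace {
    boolean isLazyPersist(String path);
    boolean hasMissingReplicas(String path);
    void unlink(String path);
  }

  static void scrub(Set<String> lazyPersistPaths, Namespace ns, boolean scrubEnabled) {
    if (!scrubEnabled) {              // the global off switch described above
      return;
    }
    for (Iterator<String> it = lazyPersistPaths.iterator(); it.hasNext(); ) {
      String path = it.next();
      if (ns.isLazyPersist(path) && ns.hasMissingReplicas(path)) {
        ns.unlink(path);              // keep it from surfacing as a corrupt file
        it.remove();
      }
    }
  }
}
 {code}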



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6991) Notify NN of evicted block before deleting it from RAM disk

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156431#comment-14156431
 ] 

Hudson commented on HDFS-6991:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6991. Notify NN of evicted block before deleting it from RAM disk. (Arpit 
Agarwal) (arp: rev a18caf7753623a94a7cdb1c607cda79586de08dc)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


 Notify NN of evicted block before deleting it from RAM disk
 ---

 Key: HDFS-6991
 URL: https://issues.apache.org/jira/browse/HDFS-6991
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6991.01.patch, HDFS-6991.02.patch, 
 HDFS-6991.03.patch


 Couple of bug fixes required around eviction:
 # When evicting a block from RAM disk to persistent storage, the DN should 
 schedule an incremental block report for a 'received' replica on persistent 
 storage.
 # {{BlockManager.processReportedBlock}} needs a fix to correctly update the 
 storage ID to reflect the block moving from RAM_DISK to DISK.
 Found by [~xyao] via HDFS-6950.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7153) Add storagePolicy to NN edit log during file creation

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156417#comment-14156417
 ] 

Hudson commented on HDFS-7153:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-7153. Add storagePolicy to NN edit log during file creation. (Arpit 
Agarwal) (arp: rev d45e7c7e856c7103752888c0395fa94985cd7670)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java


 Add storagePolicy to NN edit log during file creation
 -

 Key: HDFS-7153
 URL: https://issues.apache.org/jira/browse/HDFS-7153
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.6.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 2.6.0

 Attachments: HDFS-7153.01.patch, HDFS-7153.02.patch, 
 HDFS-7153.03.patch, HDFS-7153.merge.04.patch, HDFS-7153.merge.05.patch, 
 editsStored


 Storage Policy ID is currently not logged in the NN edit log during file 
 creation as part of {{AddOp}}. This is okay for now since we don't have an 
 API to set storage policy during file creation.
 However now that we have storage policies, for HDFS-6581 we are looking into 
 using the feature instead of adding a new field to the INodeFile header. It 
 would be useful to have the ability to save policy on file create.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6978) Directory scanner should correctly reconcile blocks on RAM disk

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156395#comment-14156395
 ] 

Hudson commented on HDFS-6978:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6978. Directory scanner should correctly reconcile blocks on RAM disk. 
(Arpit Agarwal) (arp: rev 9f22fb8c9a10952225e15c7b67b5f77fa44b155d)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyWriteReplicaTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt


 Directory scanner should correctly reconcile blocks on RAM disk
 ---

 Key: HDFS-6978
 URL: https://issues.apache.org/jira/browse/HDFS-6978
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6978.01.patch, HDFS-6978.02.patch


 It used to be very unlikely that the directory scanner encountered two 
 replicas of the same block on different volumes.
 With memory storage, it is very likely to hit this with the following 
 sequence of events:
 # Block is written to RAM disk
 # Lazy writer saves a copy on persistent volume
 # DN attempts to evict the original replica from RAM disk, file deletion 
 fails as the replica is in use.
 # Directory scanner finds a replica on both RAM disk and persistent storage.
 The directory scanner should never delete the block on persistent storage.
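
 When the scanner does see both copies, the resolution rule is simple: never pick the persistent replica for deletion. A hedged sketch of that tie-break with invented types, not the DirectoryScanner code:
 {code}
/** Hedged sketch: given duplicate replicas, the transient one is the only deletion candidate. */
class DuplicateReplicaResolutionSketch {
  enum StorageType { RAM_DISK, DISK }

  static class ReplicaInfo {
    final long blockId;
    final StorageType storageType;
    ReplicaInfo(long blockId, StorageType storageType) {
      this.blockId = blockId;
      this.storageType = storageType;
    }
  }

  /** Returns the replica that may be deleted, or null if neither is clearly transient. */
  static ReplicaInfo chooseReplicaToDelete(ReplicaInfo a, ReplicaInfo b) {
    if (a.storageType == StorageType.RAM_DISK && b.storageType == StorageType.DISK) {
      return a;
    }
    if (b.storageType == StorageType.RAM_DISK && a.storageType == StorageType.DISK) {
      return b;
    }
    return null;   // same storage type: defer to the scanner's existing tie-break
  }
}
 {code}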



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6927) Add unit tests

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156411#comment-14156411
 ] 

Hudson commented on HDFS-6927:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6927. Initial unit tests for Lazy Persist files. (Arpit Agarwal) 
(aagarwal: rev 3f64c4aaf00d92659ae992bfe7fe8403b4013ae6)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt


 Add unit tests
 --

 Key: HDFS-6927
 URL: https://issues.apache.org/jira/browse/HDFS-6927
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6927.01.patch


 Add a bunch of unit tests to cover flag persistence, propagation to DN, 
 ability to write replicas to RAM disk, lazy writes to disk and eviction from 
 RAM disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7143) Fix findbugs warnings in HDFS-6581 branch

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156399#comment-14156399
 ] 

Hudson commented on HDFS-7143:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-7143. Fix findbugs warnings in HDFS-6581 branch. (Contributed by Tsz Wo 
Nicholas Sze) (arp: rev feda4733a8279485fc0ff1271f9c22bc44f333f6)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java


 Fix findbugs warnings in HDFS-6581 branch
 -

 Key: HDFS-7143
 URL: https://issues.apache.org/jira/browse/HDFS-7143
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: 3.0.0

 Attachments: h7143_20140925.patch


 There are 4 findbugs warnings reported by Jenkins.
 https://builds.apache.org/job/PreCommit-HDFS-Build/8064/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7091) Add forwarding constructor for INodeFile for existing callers

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156419#comment-14156419
 ] 

Hudson commented on HDFS-7091:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-7091. Add forwarding constructor for INodeFile for existing callers. 
(Arpit Agarwal) (arp: rev e79c98c11fa8b4ddd8c63b613698d2d508135e83)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/CreateEditsLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java


 Add forwarding constructor for INodeFile for existing callers
 -

 Key: HDFS-7091
 URL: https://issues.apache.org/jira/browse/HDFS-7091
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode, test
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-7091.01.patch


 Since HDFS-6584 is in trunk we are hitting quite a few merge conflicts.
 Many of the conflicts can be avoided by some minor updates to the branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6925) DataNode should attempt to place replicas on transient storage first if lazyPersist flag is received

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156409#comment-14156409
 ] 

Hudson commented on HDFS-6925:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6925. DataNode should attempt to place replicas on transient storage first 
if lazyPersist flag is received. (Arpit Agarwal) (aagarwal: rev 
a317bd7b02c37bd57743bfad59593ec12f53f4ed)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/RoundRobinVolumeChoosingPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/AvailableSpaceVolumeChoosingPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/Replica.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestWriteToReplica.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImplAllocator.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/BlockReportTestBase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteBlockGetsBlockLengthHint.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaUnderRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsTransientVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java


 DataNode should attempt to place replicas on transient storage first if 
 lazyPersist flag is received
 

 Key: HDFS-6925
 URL: https://issues.apache.org/jira/browse/HDFS-6925
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
 Environment: If the LazyPersist flag is received via 
 DataTransferProtocol then DN should attempt to place the files on RAM disk 
 first, and failing that on regular disk.
 Support for lazily moving replicas from RAM disk to persistent storage will 
 be added later.
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6925.01.patch, HDFS-6925.02.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7158) Reduce the memory usage of WebImageViewer

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156423#comment-14156423
 ] 

Hudson commented on HDFS-7158:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-7158. Reduce the memory usage of WebImageViewer. Contributed by Haohui 
Mai. (wheat9: rev 1f5b42ac4881b734c799bfb527884c0d117929bd)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java


 Reduce the memory usage of WebImageViewer
 -

 Key: HDFS-7158
 URL: https://issues.apache.org/jira/browse/HDFS-7158
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.6.0

 Attachments: HDFS-7158.000.patch, HDFS-7158.001.patch, 
 HDFS-7158.002.patch


 Currently the webimageviewer can take up as much memory as the NN uses in 
 order to serve the WebHDFS requests from the client.
 This jira proposes to optimize the memory usage of webimageviewer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7144) Fix findbugs warnings in RamDiskReplicaTracker

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156401#comment-14156401
 ] 

Hudson commented on HDFS-7144:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-7144. Fix findbugs warnings in RamDiskReplicaTracker. (Contributed by Tsz 
Wo Nicholas Sze) (arp: rev 364e60b1691a4d7b2f745b8ebf78177f254a4287)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaTracker.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt


 Fix findbugs warnings in RamDiskReplicaTracker
 --

 Key: HDFS-7144
 URL: https://issues.apache.org/jira/browse/HDFS-7144
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Fix For: 3.0.0

 Attachments: h7144_20140925.patch


 Two more findbugs warnings:
 - Bad practice Warnings
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaTracker$RamDiskReplica.deleteSavedFiles()
  ignores exceptional return value of java.io.File.delete()
 Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE (click for details)
 In class 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaTracker$RamDiskReplica
 In method 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaTracker$RamDiskReplica.deleteSavedFiles()
 Called method java.io.File.delete()
 At RamDiskReplicaTracker.java:[line 122]
 Another occurrence at RamDiskReplicaTracker.java:[line 127]
 - Dodgy Warnings
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaLruTracker$RamDiskReplicaLru
  doesn't override RamDiskReplicaTracker$RamDiskReplica.equals(Object)
 Bug type EQ_DOESNT_OVERRIDE_EQUALS (click for details)
 In class 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaLruTracker$RamDiskReplicaLru
 Did you intend to override 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaTracker$RamDiskReplica.equals(Object)
 At RamDiskReplicaLruTracker.java:[lines 37-42]
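 As a hedged sketch of how these two findbugs categories are typically resolved (the class below is a stand-in, not the change in h7144_20140925.patch): act on the boolean returned by File.delete(), and give the subclass an explicit equals()/hashCode() pair.
 {code}
 import java.io.File;

 class RamDiskReplicaSketch {
   private final long blockId;
   private File savedBlockFile;
   private File savedMetaFile;

   RamDiskReplicaSketch(long blockId) {
     this.blockId = blockId;
   }

   // RV_RETURN_VALUE_IGNORED_BAD_PRACTICE: check the boolean that
   // java.io.File.delete() returns instead of discarding it.
   void deleteSavedFiles() {
     if (savedBlockFile != null && !savedBlockFile.delete()) {
       System.err.println("Failed to delete " + savedBlockFile);
     }
     if (savedMetaFile != null && !savedMetaFile.delete()) {
       System.err.println("Failed to delete " + savedMetaFile);
     }
   }

   // EQ_DOESNT_OVERRIDE_EQUALS: define equals()/hashCode() explicitly in the
   // subclass (or document that inheriting the parent's definition is intended).
   @Override
   public boolean equals(Object other) {
     if (this == other) {
       return true;
     }
     if (!(other instanceof RamDiskReplicaSketch)) {
       return false;
     }
     return blockId == ((RamDiskReplicaSketch) other).blockId;
   }

   @Override
   public int hashCode() {
     return Long.hashCode(blockId);
   }
 }
 {code}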



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6581) Write to single replica in memory

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156397#comment-14156397
 ] 

Hudson commented on HDFS-6581:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6950. Add Additional unit tests for HDFS-6581. (Contributed by Xiaoyu Yao) 
(arp: rev 762b04e9943d6a05e1130fc81ada5b5dc8baab2c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
HDFS-7064. Fix unit test failures in HDFS-6581 branch. (Contributed by Xiaoyu 
Yao) (arp: rev 4603e4481f0486afcce6b106d4a92a6e90e5b6d9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataDirs.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
HDFS-7079. Few more unit test fixes for HDFS-6581. (Arpit Agarwal) (arp: rev 
dcbc46730131a1bdf8416efeb4571794e5c8e369)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
HDFS-7143. Fix findbugs warnings in HDFS-6581 branch. (Contributed by Tsz Wo 
Nicholas Sze) (arp: rev feda4733a8279485fc0ff1271f9c22bc44f333f6)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
HDFS-7171. Fix Jenkins failures in HDFS-6581 branch. (Arpit Agarwal) (arp: rev 
a45ad330facc56f06ed42eb71304c49ef56dc549)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestStorageMover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
HDFS-6581. Update CHANGES.txt in preparation for trunk merge (arp: rev 
04b08431a3446300f4715cf135f0e60f85e5bf5a)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Write to single replica in memory
 -

 Key: HDFS-6581
 URL: https://issues.apache.org/jira/browse/HDFS-6581
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, namenode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6581.merge.01.patch, HDFS-6581.merge.02.patch, 
 HDFS-6581.merge.03.patch, HDFS-6581.merge.04.patch, HDFS-6581.merge.05.patch, 
 HDFS-6581.merge.06.patch, HDFS-6581.merge.07.patch, HDFS-6581.merge.08.patch, 
 HDFS-6581.merge.09.patch, HDFS-6581.merge.10.patch, HDFS-6581.merge.11.patch, 
 HDFS-6581.merge.12.patch, HDFS-6581.merge.14.patch, HDFS-6581.merge.15.patch, 
 HDFSWriteableReplicasInMemory.pdf, 
 Test-Plan-for-HDFS-6581-Memory-Storage.pdf, 
 Test-Plan-for-HDFS-6581-Memory-Storage.pdf


 Per discussion with the community on HDFS-5851, we will implement writing to 
 a single replica in DN memory via DataTransferProtocol.
 This avoids some of the issues with short-circuit writes, which we can 
 revisit at a later time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7171) Fix Jenkins failures in HDFS-6581 branch

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156437#comment-14156437
 ] 

Hudson commented on HDFS-7171:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-7171. Fix Jenkins failures in HDFS-6581 branch. (Arpit Agarwal) (arp: rev 
a45ad330facc56f06ed42eb71304c49ef56dc549)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestStorageMover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt


 Fix Jenkins failures in HDFS-6581 branch
 

 Key: HDFS-7171
 URL: https://issues.apache.org/jira/browse/HDFS-7171
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-7171.01.patch


 Jenkins flagged a few failures with the latest merge patch.
 {quote}
 Test results: 
 https://builds.apache.org/job/PreCommit-HDFS-Build/8269//testReport/
 Findbugs warnings: 
 https://builds.apache.org/job/PreCommit-HDFS-Build/8269//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6924) Add new RAM_DISK storage type

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156405#comment-14156405
 ] 

Hudson commented on HDFS-6924:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6924. Add new RAM_DISK storage type. (Arpit Agarwal) (aagarwal: rev 
7f49537ba18f830dff172f5f9c4a387fe7ab410f)
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/StorageType.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataDirs.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java


 Add new RAM_DISK storage type
 -

 Key: HDFS-6924
 URL: https://issues.apache.org/jira/browse/HDFS-6924
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6924.01.patch


 Add a new RAM_DISK storage type which could be backed by tmpfs/ramfs on Linux 
 or alternative RAM disk on other platforms.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7155) Bugfix in createLocatedFileStatus caused by bad merge

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156415#comment-14156415
 ] 

Hudson commented on HDFS-7155:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-7155. Bugfix in createLocatedFileStatus caused by bad merge. (Arpit 
Agarwal) (arp: rev a2d4edacea943c98fe4430e295627cd7535948fc)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java


 Bugfix in createLocatedFileStatus caused by bad merge
 -

 Key: HDFS-7155
 URL: https://issues.apache.org/jira/browse/HDFS-7155
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-7155.01.patch


 FSDirectory.createLocatedFileStatus fails to initialize the blockSize.
 Likely caused by a bad merge.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7084) FsDatasetImpl#copyBlockFiles debug log can be improved

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156433#comment-14156433
 ] 

Hudson commented on HDFS-7084:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-7084. FsDatasetImpl#copyBlockFiles debug log can be improved. (Contributed 
by Xiaoyu Yao) (arp: rev 5e4627d0fb2a2d608c0e67fd6ad835523ed3259d)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt


 FsDatasetImpl#copyBlockFiles debug log can be improved
 --

 Key: HDFS-7084
 URL: https://issues.apache.org/jira/browse/HDFS-7084
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-7084.0.patch


 "addBlock: Moved" should be replaced with "Copied" or "lazyPersistReplica: Copied" 
 to avoid confusion.
 {code}
 static File[] copyBlockFiles(long blockId, long genStamp,
     File srcMeta, File srcFile, File destRoot)
 {
   ...
   if (LOG.isDebugEnabled()) {
     LOG.debug("addBlock: Moved " + srcMeta + " to " + dstMeta);
     LOG.debug("addBlock: Moved " + srcFile + " to " + dstFile);
   }
 }
 {code}
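 A possible rewording is sketched below; it assumes the same srcMeta/dstMeta/srcFile/dstFile variables as the snippet above and Hadoop's commons-logging Log, and is only an illustration of the suggested message, not the committed change.
 {code}
 import java.io.File;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;

 class CopyBlockFilesLoggingSketch {
   private static final Log LOG = LogFactory.getLog(CopyBlockFilesLoggingSketch.class);

   static void logCopied(File srcMeta, File dstMeta, File srcFile, File dstFile) {
     if (LOG.isDebugEnabled()) {
       // "Copied" matches what the method actually does; the source replica
       // stays in place on the RAM disk.
       LOG.debug("lazyPersistReplica: Copied " + srcMeta + " to " + dstMeta);
       LOG.debug("lazyPersistReplica: Copied " + srcFile + " to " + dstFile);
     }
   }
 }
 {code}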



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7176) The namenode usage message doesn't include -rollingupgrade started

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156436#comment-14156436
 ] 

Hudson commented on HDFS-7176:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-7176. The namenode usage message doesn't include -rollingupgrade started 
(cmccabe) (cmccabe: rev dd1b8f2ed8a86871517c730a9f370aca4b763514)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java


 The namenode usage message doesn't include -rollingupgrade started
 

 Key: HDFS-7176
 URL: https://issues.apache.org/jira/browse/HDFS-7176
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.6.0

 Attachments: HDFS-7176.001.patch


 HDFS-6031 re-added the -rollingUpgrade started option for the NameNode, but 
 the help text still doesn't include it:
 {code}
 Usage: java NameNode [-backup] | 
 [-checkpoint] | 
 [-format [-clusterid cid ] [-force] [-nonInteractive] ] | 
 [-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] | 
 [-rollback] | 
 [-rollingUpgrade <downgrade|rollback> ] | 
 [-finalize] | 
 ...
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7080) Fix finalize and upgrade unit test failures

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156429#comment-14156429
 ] 

Hudson commented on HDFS-7080:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-7080. Fix finalize and upgrade unit test failures. (Arpit Agarwal) (arp: 
rev 4eab083b1b7faf4485274d1d30256cde08e11915)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgrade.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSFinalize.java


 Fix finalize and upgrade unit test failures
 ---

 Key: HDFS-7080
 URL: https://issues.apache.org/jira/browse/HDFS-7080
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-7080.01.patch, HDFS-7080.02.patch


 Fix following test failures in the branch:
 # TestDFSFinalize
 # TestDFSUpgrade



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6932) Balancer and Mover tools should ignore replicas on RAM_DISK

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156391#comment-14156391
 ] 

Hudson commented on HDFS-6932:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6932. Balancer and Mover tools should ignore replicas on RAM_DISK. 
(Contributed by Xiaoyu Yao) (arp: rev e8e7fbe81abc64a9ae3d2f3f62c088426073b2bf)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/StorageType.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestStorageMover.java


 Balancer and Mover tools should ignore replicas on RAM_DISK
 ---

 Key: HDFS-6932
 URL: https://issues.apache.org/jira/browse/HDFS-6932
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Xiaoyu Yao
 Fix For: 3.0.0

 Attachments: HDFS-6932.0.patch, HDFS-6932.1.patch, HDFS-6932.2.patch, 
 HDFS-6932.3.patch


 Per title, balancer and mover should just ignore replicas on RAM disk instead 
 of attempting to move them to other nodes.
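 A minimal sketch of that filtering idea follows; the enum and method names are stand-ins for illustration, not the real StorageType or the Balancer/Mover code.
 {code}
 import java.util.ArrayList;
 import java.util.List;

 // Illustrative sketch of "ignore replicas on RAM_DISK": drop transient
 // replicas before scheduling any moves.
 class MoveCandidateFilterSketch {
   enum StorageKind { DISK, SSD, ARCHIVE, RAM_DISK }

   static List<StorageKind> movableReplicas(List<StorageKind> replicaStorages) {
     List<StorageKind> candidates = new ArrayList<>();
     for (StorageKind storage : replicaStorages) {
       if (storage == StorageKind.RAM_DISK) {
         continue;  // transient replicas are never balanced or moved
       }
       candidates.add(storage);
     }
     return candidates;
   }
 }
 {code}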



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6894) Add XDR parser method for each NFS response

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156425#comment-14156425
 ] 

Hudson commented on HDFS-6894:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6894. Add XDR parser method for each NFS response. Contributed by Brandon 
Li. (wheat9: rev 875aa797caee96572162ff59bc50cf97d1195348)
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/READDIRPLUS3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/LINK3Response.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtx.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/CREATE3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/FSINFO3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/WccAttr.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/REMOVE3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/SYMLINK3Response.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/RENAME3Response.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/COMMIT3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/PATHCONF3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/LOOKUP3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/WRITE3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/READLINK3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/MKDIR3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/NFS3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/WccData.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/SETATTR3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/READDIR3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/GETATTR3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/READ3Response.java
* 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/RMDIR3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/ACCESS3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/FSSTAT3Response.java
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/nfs3/response/MKNOD3Response.java


 Add XDR parser method for each NFS response
 ---

 Key: HDFS-6894
 URL: https://issues.apache.org/jira/browse/HDFS-6894
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.6.0

 Attachments: HDFS-6894.001.patch, HDFS-6894.001.patch, 
 HDFS-6894.002.patch


 This can be an abstract method in NFS3Response to force the subclasses to 
 implement it.
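 A hedged sketch of that pattern is shown below; the names are stand-ins, not the classes touched by the patch.
 {code}
 // The base response type declares an abstract hook, so every concrete NFS3
 // response must supply its own XDR handling or it will not compile.
 abstract class Nfs3ResponseSketch {
   protected final int status;

   protected Nfs3ResponseSketch(int status) {
     this.status = status;
   }

   abstract void writeXdr(StringBuilder out);
 }

 class GetAttrResponseSketch extends Nfs3ResponseSketch {
   GetAttrResponseSketch(int status) {
     super(status);
   }

   @Override
   void writeXdr(StringBuilder out) {
     out.append("status=").append(status);
   }
 }
 {code}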



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6930) Improve replica eviction from RAM disk

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156393#comment-14156393
 ] 

Hudson commented on HDFS-6930:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6930. Improve replica eviction from RAM disk. (Arpit Agarwal) (arp: rev 
cb9b485075ce773f2d6189aa2f54bbc69aad4dab)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyWriteReplicaTracker.java


 Improve replica eviction from RAM disk
 --

 Key: HDFS-6930
 URL: https://issues.apache.org/jira/browse/HDFS-6930
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6930.01.patch, HDFS-6930.02.patch


 The current replica eviction scheme is inefficient since it performs multiple 
 file operations in the context of block allocation.
 A better implementation would be asynchronous eviction when free space on RAM 
 disk falls below a low watermark to make block allocation faster.
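 A minimal sketch of the watermark-driven approach is below; every name and threshold in it is an assumption for illustration, not the FsDatasetImpl implementation.
 {code}
 import java.util.concurrent.atomic.AtomicLong;

 // Illustrative sketch: an asynchronous worker evicts persisted replicas from
 // RAM disk whenever free space drops below a low watermark, so block
 // allocation itself never performs the file operations.
 class RamDiskEvictorSketch implements Runnable {
   private final AtomicLong freeBytes;
   private final long lowWatermarkBytes;
   private final long blockSizeBytes;
   private volatile boolean running = true;

   RamDiskEvictorSketch(AtomicLong freeBytes, long lowWatermarkBytes, long blockSizeBytes) {
     this.freeBytes = freeBytes;
     this.lowWatermarkBytes = lowWatermarkBytes;
     this.blockSizeBytes = blockSizeBytes;
   }

   @Override
   public void run() {
     while (running) {
       // Evict until free space is back above the watermark; writers never block on this.
       while (freeBytes.get() < lowWatermarkBytes) {
         freeBytes.addAndGet(blockSizeBytes);  // stand-in for removing one persisted replica
       }
       try {
         Thread.sleep(100);  // poll interval; a real implementation might use notifications
       } catch (InterruptedException e) {
         Thread.currentThread().interrupt();
         running = false;
       }
     }
   }
 }
 {code}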



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6926) DN support for saving replicas to persistent storage and evicting in-memory replicas

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156407#comment-14156407
 ] 

Hudson commented on HDFS-6926:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6926. DN support for saving replicas to persistent storage and evicting 
in-memory replicas. (Arpit Agarwal) (aagarwal: rev 
eb448e14399e17f11b9e523e4050de245b9b0408)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyWriteReplicaTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockRetry.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/AvailableSpaceVolumeChoosingPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java


 DN support for saving replicas to persistent storage and evicting in-memory 
 replicas
 

 Key: HDFS-6926
 URL: https://issues.apache.org/jira/browse/HDFS-6926
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6926.01.patch


 Add the following:
 # A lazy writer on the DN to move replicas from RAM disk to persistent 
 storage.
 # 'Evict' persisted replicas from RAM disk to make space for new blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7172) Test data files may be checked out of git with incorrect line endings, causing test failures in TestHDFSCLI.

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156438#comment-14156438
 ] 

Hudson commented on HDFS-7172:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-7172. Test data files may be checked out of git with incorrect line 
endings, causing test failures in TestHDFSCLI. Contributed by Chris Nauroth. 
(wheat9: rev 737f280ddeed58a2b1cd42c29533a01e7c6c3426)
* hadoop-hdfs-project/hadoop-hdfs/.gitattributes
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
Addendum patch for HDFS-7172. Contributed by Chris Nauroth. (wheat9: rev 
8dfe54f6d3e2f14584846de29ee06ed280bc0f0e)
* hadoop-hdfs-project/hadoop-hdfs/pom.xml


 Test data files may be checked out of git with incorrect line endings, 
 causing test failures in TestHDFSCLI.
 

 Key: HDFS-7172
 URL: https://issues.apache.org/jira/browse/HDFS-7172
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Trivial
 Fix For: 2.6.0

 Attachments: HDFS-7172.1.patch, HDFS-7172.rat.patch


 {{TestHDFSCLI}} uses several files at src/test/resource/data* as test input 
 files.  Some of the tests expect a specific length for these files.  If they 
 get checked out of git with CRLF line endings by mistake, then the test 
 assertions will fail.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6990) Add unit test for evict/delete RAM_DISK block with open handle

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156430#comment-14156430
 ] 

Hudson commented on HDFS-6990:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6990. Add unit test for evict/delete RAM_DISK block with open handle. 
(Contributed by Xiaoyu Yao) (arp: rev 8b139b0800b2724178d5c155842588e9593a939f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestScrLazyPersistFiles.java


 Add unit test for evict/delete RAM_DISK block with open handle
 --

 Key: HDFS-6990
 URL: https://issues.apache.org/jira/browse/HDFS-6990
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Fix For: 3.0.0

 Attachments: HDFS-6990.0.patch, HDFS-6990.1.patch, HDFS-6990.2.patch, 
 HDFS-6990.3.patch


 This is to verify:
 * Evict RAM_DISK block with open handle should fall back to DISK.
 * Delete RAM_DISK block (persisted) with open handle should mark the block to 
 be deleted upon handle close. 
 Simply opening a handle to the file in the DFS namespace won't work as expected. We need 
 a local FS file handle to the block file. The only meaningful case is 
 Short Circuit Read. This JIRA is to validate/enable the two cases with an 
 SCR-enabled MiniDFSCluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6950) Add Additional unit tests for HDFS-6581

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156426#comment-14156426
 ] 

Hudson commented on HDFS-6950:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6950. Add Additional unit tests for HDFS-6581. (Contributed by Xiaoyu Yao) 
(arp: rev 762b04e9943d6a05e1130fc81ada5b5dc8baab2c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java


 Add Additional unit tests for HDFS-6581
 ---

 Key: HDFS-6950
 URL: https://issues.apache.org/jira/browse/HDFS-6950
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Fix For: 3.0.0

 Attachments: HDFS-6950.0.patch, HDFS-6950.1.patch, HDFS-6950.2.patch


 Create additional unit tests for HDFS-6581 in addition to existing ones in 
 HDFS-6927.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7066) LazyWriter#evictBlocks misses a null check for replicaState

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156434#comment-14156434
 ] 

Hudson commented on HDFS-7066:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-7066. LazyWriter#evictBlocks misses a null check for replicaState. 
(Contributed by Xiaoyu Yao) (arp: rev a4dcbaa33255cd1dd8d6c54763f55486c9e4317c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt


 LazyWriter#evictBlocks misses a null check for replicaState
 ---

 Key: HDFS-7066
 URL: https://issues.apache.org/jira/browse/HDFS-7066
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-7066.0.patch


 LazyWriter#evictBlocks (added for HDFS-6581) misses a null check for 
 replicaState. As a result, there are many NPEs in the debug log under certain 
 conditions. 
 {code}
 2014-09-15 14:27:10,820 DEBUG impl.FsDatasetImpl 
 (FsDatasetImpl.java:evictBlocks(2335)) - Evicting block null
 2014-09-15 14:27:10,821 WARN  impl.FsDatasetImpl 
 (FsDatasetImpl.java:run(2409)) - Ignoring exception in LazyWriter:
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.evictBlocks(FsDatasetImpl.java:2343)
   at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl$LazyWriter.run(FsDatasetImpl.java:2396)
   at java.lang.Thread.run(Thread.java:745)
 {code}
 The proposed fix is to break if there is no candidate available to evict.
 {code}
   while (iterations++ < MAX_BLOCK_EVICTIONS_PER_ITERATION &&
          transientFreeSpaceBelowThreshold()) {
     LazyWriteReplicaTracker.ReplicaState replicaState =
         lazyWriteReplicaTracker.getNextCandidateForEviction();

     if (replicaState == null) {
       break;
     }

     if (LOG.isDebugEnabled()) {
       LOG.debug("Evicting block " + replicaState);
     }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6921) Add LazyPersist flag to FileStatus

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156404#comment-14156404
 ] 

Hudson commented on HDFS-6921:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6921. Add LazyPersist flag to FileStatus. (Arpit Agarwal) (aagarwal: rev 
a7bcc9535860214380e235641d1d5d2dd15aee58)
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtil.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CreateFlag.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java


 Add LazyPersist flag to FileStatus
 --

 Key: HDFS-6921
 URL: https://issues.apache.org/jira/browse/HDFS-6921
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6921.01.patch, HDFS-6921.02.patch


 A new flag will be added to FileStatus to indicate that a file can be lazily 
 persisted to disk i.e. trading reduced durability for better write 
 performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6928) 'hdfs put' command should accept lazyPersist flag for testing

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156410#comment-14156410
 ] 

Hudson commented on HDFS-6928:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6928. 'hdfs put' command should accept lazyPersist flag for testing. 
(Arpit Agarwal) (arp: rev bbaa7dc28db75d9b3700c6ff95222d8e1de29c15)
* hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Stat.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml


 'hdfs put' command should accept lazyPersist flag for testing
 -

 Key: HDFS-6928
 URL: https://issues.apache.org/jira/browse/HDFS-6928
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Tassapol Athiapinya
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6928.01.patch, HDFS-6928.02.patch, 
 HDFS-6928.03.patch


 Add a '-l' flag to 'hdfs put' which creates the file with the LAZY_PERSIST 
 option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6922) Add LazyPersist flag to INodeFile, save it in FsImage and edit logs

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156402#comment-14156402
 ] 

Hudson commented on HDFS-6922:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6922. Add LazyPersist flag to INodeFile, save it in FsImage and edit logs. 
(Arpit Agarwal) (aagarwal: rev 042b33f20b01aadb5cd03da731ae7a3d94026aac)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/fsimage.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/CreateEditsLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileAttributes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java


 Add LazyPersist flag to INodeFile, save it in FsImage and edit logs
 ---

 Key: HDFS-6922
 URL: https://issues.apache.org/jira/browse/HDFS-6922
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6922.01.patch, HDFS-6922.02.patch


 Support for saving the LazyPersist flag in the FsImage and edit logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7064) Fix unit test failures in HDFS-6581 branch

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156432#comment-14156432
 ] 

Hudson commented on HDFS-7064:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-7064. Fix unit test failures in HDFS-6581 branch. (Contributed by Xiaoyu 
Yao) (arp: rev 4603e4481f0486afcce6b106d4a92a6e90e5b6d9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataDirs.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java


 Fix unit test failures in HDFS-6581 branch
 --

 Key: HDFS-7064
 URL: https://issues.apache.org/jira/browse/HDFS-7064
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Xiaoyu Yao
 Fix For: 3.0.0

 Attachments: HDFS-7064.0.patch, HDFS-7064.1.patch, HDFS-7064.2.patch


 Fix test failures in the HDFS-6581 feature branch.
 Jenkins flagged the following failures.
 https://builds.apache.org/job/PreCommit-HDFS-Build/8025//testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6923) Propagate LazyPersist flag to DNs via DataTransferProtocol

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156406#comment-14156406
 ] 

Hudson commented on HDFS-6923:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6923. Propagate LazyPersist flag to DNs via DataTransferProtocol. (Arpit 
Agarwal) (aagarwal: rev c2354a7f81ff5a48a5b65d25e1036d3e0ba86420)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Sender.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Receiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java


 Propagate LazyPersist flag to DNs via DataTransferProtocol
 --

 Key: HDFS-6923
 URL: https://issues.apache.org/jira/browse/HDFS-6923
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6923.01.patch, HDFS-6923.02.patch


 If the LazyPersist flag is set in the file properties, the DFSClient will 
 propagate it to the DataNode via DataTransferProtocol.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6977) Delete all copies when a block is deleted from the block space

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156400#comment-14156400
 ] 

Hudson commented on HDFS-6977:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6977. Delete all copies when a block is deleted from the block space. 
(Arpit Agarwal) (arp: rev ccdf0054a354fc110124b83de742c2ee6076449e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyWriteReplicaTracker.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


 Delete all copies when a block is deleted from the block space
 --

 Key: HDFS-6977
 URL: https://issues.apache.org/jira/browse/HDFS-6977
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Nathan Yao
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6977.01.patch, HDFS-6977.02.patch, 
 HDFS-6977.03.patch


 When a block is deleted from RAM disk we should also delete the copies 
 written to lazyPersist/.
 Reported by [~xyao]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6754) TestNamenodeCapacityReport.testXceiverCount may sometimes fail due to lack of retry

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156418#comment-14156418
 ] 

Hudson commented on HDFS-6754:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6754. TestNamenodeCapacityReport.testXceiverCount may sometimes fail due 
to lack of retry. Contributed by Mit Desai. (kihwal: rev 
3f25d916d5539917092e2f52a8c2df2cfd647c3c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeCapacityReport.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 TestNamenodeCapacityReport.testXceiverCount may sometimes fail due to lack of 
 retry
 ---

 Key: HDFS-6754
 URL: https://issues.apache.org/jira/browse/HDFS-6754
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Mit Desai
Assignee: Mit Desai
 Fix For: 2.6.0

 Attachments: HDFS-6754.patch, HDFS-6754.patch


 I have seen TestNamenodeCapacityReport.testXceiverCount fail intermittently 
 in our nightly builds with the following error:
 {noformat}
 java.io.IOException: Unable to close file because the last block does not 
 have enough number of replicas.
   at 
 org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2151)
   at 
 org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2119)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport.testXceiverCount(TestNamenodeCapacityReport.java:281)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7079) Few more unit test fixes for HDFS-6581

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156422#comment-14156422
 ] 

Hudson commented on HDFS-7079:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-7079. Few more unit test fixes for HDFS-6581. (Arpit Agarwal) (arp: rev 
dcbc46730131a1bdf8416efeb4571794e5c8e369)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java


 Few more unit test fixes for HDFS-6581
 --

 Key: HDFS-7079
 URL: https://issues.apache.org/jira/browse/HDFS-7079
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-7079.03.patch, HDFS-7079.04.patch


 Fix a few more test cases flagged by Jenkins:
 # TestFsShellCopy
 # TestCopy



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7159) Use block storage policy to set lazy persist preference

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156424#comment-14156424
 ] 

Hudson commented on HDFS-7159:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-7159. Use block storage policy to set lazy persist preference. (Arpit 
Agarwal) (arp: rev bb84f1fccb18c6c7373851e05d2451d55e908242)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/PBImageXmlWriter.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/BlockStoragePolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImplAllocator.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Stat.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsTransientVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/fsimage.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/StorageType.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileAttributes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestScrLazyPersistFiles.java
* hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java

[jira] [Commented] (HDFS-7108) Fix unit test failures in SimulatedFsDataset

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156416#comment-14156416
 ] 

Hudson commented on HDFS-7108:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-7108. Fix unit test failures in SimulatedFsDataset. (Arpit Agarwal) (arp: 
rev 50b321068d32d404cc9b5d392f0e20d48cabbf2b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt


 Fix unit test failures in SimulatedFsDataset
 

 Key: HDFS-7108
 URL: https://issues.apache.org/jira/browse/HDFS-7108
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-7108.01.patch


 HDFS-7100 introduced a few unit test failures due to 
 UnsupportedOperationException in {{SimulatedFsDataset.getVolume}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6134) Transparent data at rest encryption

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156439#comment-14156439
 ] 

Hudson commented on HDFS-6134:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs following 
merge to branch-2 (arp: rev 2ca93d1fbf0fdcd6b4b5a151261052ac106ac9e1)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-mapreduce-project/CHANGES.txt


 Transparent data at rest encryption
 ---

 Key: HDFS-6134
 URL: https://issues.apache.org/jira/browse/HDFS-6134
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 3.0.0, 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Charles Lamb
 Fix For: 2.6.0

 Attachments: HDFS-6134.001.patch, HDFS-6134.002.patch, 
 HDFS-6134_test_plan.pdf, HDFSDataatRestEncryption.pdf, 
 HDFSDataatRestEncryptionProposal_obsolete.pdf, 
 HDFSEncryptionConceptualDesignProposal-2014-06-20.pdf, 
 fs-encryption.2014-08-18.patch, fs-encryption.2014-08-19.patch


 Because of privacy and security regulations, for many industries, sensitive 
 data at rest must be in encrypted form. For example: the healthcare industry 
 (HIPAA regulations), the card payment industry (PCI DSS regulations) or the 
 US government (FISMA regulations).
 This JIRA aims to provide a mechanism to encrypt HDFS data at rest that can 
 be used transparently by any application accessing HDFS via Hadoop Filesystem 
 Java API, Hadoop libhdfs C library, or WebHDFS REST API.
 The resulting implementation should be able to be used in compliance with 
 different regulation requirements.
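 To illustrate the transparency claim, a minimal sketch under assumptions: a KMS is configured, a key (here called {{testKey}}, a made-up name) was created with {{hadoop key create}}, and {{/secure}} was made an encryption zone with {{hdfs crypto -createZone}}. The application below uses only the ordinary FileSystem API and never touches encryption explicitly.
{code}
// Sketch: reads and writes inside an encryption zone use the plain FileSystem
// API; encryption and decryption happen transparently in the HDFS client.
// Assumed (hypothetical) one-time admin setup:
//   hadoop key create testKey
//   hdfs dfs -mkdir /secure
//   hdfs crypto -createZone -keyName testKey -path /secure
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EncryptionZoneClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/secure/report.txt");

    try (FSDataOutputStream out = fs.create(file)) {
      out.writeUTF("sensitive data");   // encrypted on the way to the DataNodes
    }
    try (FSDataInputStream in = fs.open(file)) {
      System.out.println(in.readUTF()); // decrypted on the way back
    }
  }
}
{code}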



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6482) Use block ID-based block layout on datanodes

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156398#comment-14156398
 ] 

Hudson commented on HDFS-6482:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6482. Fix CHANGES.txt in trunk (arp: rev 
be30c86cc9f71894dc649ed22983e5c42e9b6951)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Use block ID-based block layout on datanodes
 

 Key: HDFS-6482
 URL: https://issues.apache.org/jira/browse/HDFS-6482
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.0.0
Reporter: James Thomas
Assignee: James Thomas
 Fix For: 2.6.0

 Attachments: 6482-design.doc, HDFS-6482.1.patch, HDFS-6482.2.patch, 
 HDFS-6482.3.patch, HDFS-6482.4.patch, HDFS-6482.5.patch, HDFS-6482.6.patch, 
 HDFS-6482.7.patch, HDFS-6482.8.patch, HDFS-6482.9.patch, HDFS-6482.patch, 
 hadoop-24-datanode-dir.tgz


 Right now blocks are placed into directories that are split into many 
 subdirectories when capacity is reached. Instead we can use a block's ID to 
 determine the path it should go in. This eliminates the need for the LDir 
 data structure that facilitates the splitting of directories when they reach 
 capacity as well as fields in ReplicaInfo that keep track of a replica's 
 location.
 An extension of the work in HDFS-3290.
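 As a rough illustration of the idea (the bit masks and the "subdir" prefix below are assumptions for the sketch, not necessarily what the patch commits), the block ID itself can be hashed into a fixed two-level directory structure, so a replica's path is computable without any LDir bookkeeping or per-replica location fields.
{code}
// Sketch only: derive a deterministic two-level subdirectory from the block ID.
import java.io.File;

class BlockIdLayoutSketch {
  static File idToBlockDir(File finalizedDir, long blockId) {
    int d1 = (int) ((blockId >> 16) & 0x1F);  // first level, 32 buckets (illustrative)
    int d2 = (int) ((blockId >> 8)  & 0x1F);  // second level, 32 buckets (illustrative)
    return new File(finalizedDir, "subdir" + d1 + File.separator + "subdir" + d2);
  }

  public static void main(String[] args) {
    System.out.println(idToBlockDir(new File("/data/current/finalized"), 1073741825L));
  }
}
{code}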



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7129) Metrics to track usage of memory for writes

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156428#comment-14156428
 ] 

Hudson commented on HDFS-7129:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-7129. Metrics to track usage of memory for writes. (Contributed by Xiaoyu 
Yao) (arp: rev 5e8b6973527e5f714652641ed95e8a4509e18cfa)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/JMXGet.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java


 Metrics to track usage of memory for writes
 ---

 Key: HDFS-7129
 URL: https://issues.apache.org/jira/browse/HDFS-7129
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Xiaoyu Yao
 Fix For: 3.0.0

 Attachments: HDFS-7129.0.patch, HDFS-7129.1.patch, HDFS-7129.2.patch, 
 HDFS-7129.3.patch


 A few metrics to evaluate feature usage and suggest improvements. Thanks to 
 [~sureshms] for some of these suggestions.
 # Number of times a block in memory was read (before being ejected)
 # Average block size for data written to memory tier
 # Time the block was in memory before being ejected
 # Number of blocks written to memory
 # Number of memory writes requested but not satisfied (failed-over to disk)
 # Number of blocks evicted without ever being read from memory
 # Average delay between memory write and disk write (window where a node 
 restart could cause data loss).
 # Replicas written to disk by lazy writer
 # Bytes written to disk by lazy writer
 # Replicas deleted by application before being persisted to disk
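 For context, a hedged sketch of how a few of the counters above could be exposed through the Hadoop metrics2 library; the metric and record names here are made up for illustration, not the ones the patch registers in DataNodeMetrics.
{code}
// Sketch: hypothetical metrics2 source covering a subset of the counters listed above.
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

@Metrics(name = "RamDiskWriteSketch", about = "Illustrative RAM disk write metrics",
         context = "dfs")
class RamDiskMetricsSketch {
  @Metric("Blocks written to memory") MutableCounterLong ramDiskBlocksWrite;
  @Metric("Memory writes that fell back to disk") MutableCounterLong ramDiskBlocksWriteFallback;
  @Metric("Blocks evicted without ever being read") MutableCounterLong ramDiskBlocksEvictedWithoutRead;

  static RamDiskMetricsSketch create() {
    // Registration lets the metrics system populate the annotated fields.
    return DefaultMetricsSystem.instance().register(
        "RamDiskWriteSketch", "Illustrative RAM disk write metrics",
        new RamDiskMetricsSketch());
  }

  void onBlockWrittenToRamDisk()     { ramDiskBlocksWrite.incr(); }
  void onMemoryWriteFellBackToDisk() { ramDiskBlocksWriteFallback.incr(); }
  void onEvictedWithoutRead()        { ramDiskBlocksEvictedWithoutRead.incr(); }
}
{code}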



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7071) Updated editsStored and editsStored.xml to bump layout version and add LazyPersist flag

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156420#comment-14156420
 ] 

Hudson commented on HDFS-7071:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-7071. Updated editsStored and editsStored.xml to bump layout version and 
add LazyPersist flag. (Contributed by Xiaoyu Yao and Arpit Agarwal) (arp: rev 
486a76a39ba236072c2bb22af509a1ae8081093e)
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
HDFS-7071. Undo accidental commit of binary file editsStored. (arp: rev 
8c9860f7c96322908f344d25ef31939739e7df9d)
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored


 Updated editsStored and editsStored.xml to bump layout version and add 
 LazyPersist flag
 ---

 Key: HDFS-7071
 URL: https://issues.apache.org/jira/browse/HDFS-7071
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS-6581
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Fix For: 3.0.0

 Attachments: HDFS-7071.0.patch, HDFS-7071.02.patch, editsStored, 
 editsStored


 TestOfflineEditsViewer fails for Lazy_Persist, and the two reference versions 
 of editsStored (binary) and editsStored.xml in hadoop-hdfs/src/test/resources 
 also need to be updated.
 The fix is to add 
 {code}
 <LAZY_PERSIST>false</LAZY_PERSIST>
 {code}
 to editsStored.xml for the Add/Close OPs and then use the following command to 
 generate the binary file editsStored:
 {code}
 hdfs oev -p binary -i editsStored.xml -o editsStored
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6960) Bugfix in LazyWriter, fix test case and some refactoring

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156414#comment-14156414
 ] 

Hudson commented on HDFS-6960:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/])
HDFS-6960. Bugfix in LazyWriter, fix test case and some refactoring. (Arpit 
Agarwal) (arp: rev 4cf9afacbe3d0814fb616d238aa9b16b1ae68386)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyWriteReplicaTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java


 Bugfix in LazyWriter, fix test case and some refactoring
 

 Key: HDFS-6960
 URL: https://issues.apache.org/jira/browse/HDFS-6960
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, test
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6960.01.patch, HDFS-6960.02.patch


 LazyWriter has a bug. While saving the replica to disk we would save it under 
 {{current/lazyPersist/}}. Instead it should be saved under the appropriate 
 subdirectory e.g. {{current/lazyPersist/subdir1/subdir0/}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7100) Make eviction scheme pluggable

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156506#comment-14156506
 ] 

Hudson commented on HDFS-7100:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1914 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1914/])
HDFS-7100. Make eviction scheme pluggable. (Arpit Agarwal) (arp: rev 
b2d5ed36bcb80e2581191dcdc3976e825c959142)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyWriteReplicaTracker.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt


 Make eviction scheme pluggable
 --

 Key: HDFS-7100
 URL: https://issues.apache.org/jira/browse/HDFS-7100
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-7100.01.patch


 We can make the eviction scheme pluggable to help evaluate multiple schemes.
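 One plausible shape for the pluggability, sketched under assumptions (the interface, the config key and the class names below are hypothetical, not the committed RamDiskReplicaTracker API): the eviction policy is chosen from the configuration and instantiated reflectively.
{code}
// Sketch: a config-selected eviction policy, assuming a made-up key
// "dfs.datanode.ram.disk.eviction.policy".
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

interface EvictionPolicySketch {
  /** Pick the next replica (by block id) to evict from RAM disk, or -1 if none. */
  long chooseReplicaToEvict();
}

class LruEvictionPolicySketch implements EvictionPolicySketch {
  @Override public long chooseReplicaToEvict() { return -1; /* LRU bookkeeping omitted */ }
}

class EvictionPolicyFactorySketch {
  static final String POLICY_KEY = "dfs.datanode.ram.disk.eviction.policy"; // hypothetical

  static EvictionPolicySketch create(Configuration conf) {
    Class<? extends EvictionPolicySketch> clazz = conf.getClass(
        POLICY_KEY, LruEvictionPolicySketch.class, EvictionPolicySketch.class);
    return ReflectionUtils.newInstance(clazz, conf);
  }
}
{code}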



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6978) Directory scanner should correctly reconcile blocks on RAM disk

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156511#comment-14156511
 ] 

Hudson commented on HDFS-6978:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1914 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1914/])
HDFS-6978. Directory scanner should correctly reconcile blocks on RAM disk. 
(Arpit Agarwal) (arp: rev 9f22fb8c9a10952225e15c7b67b5f77fa44b155d)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyWriteReplicaTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


 Directory scanner should correctly reconcile blocks on RAM disk
 ---

 Key: HDFS-6978
 URL: https://issues.apache.org/jira/browse/HDFS-6978
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6978.01.patch, HDFS-6978.02.patch


 It used to be very unlikely that the directory scanner encountered two 
 replicas of the same block on different volumes.
 With memory storage, it is very likely to hit this with the following 
 sequence of events:
 # Block is written to RAM disk
 # Lazy writer saves a copy on persistent volume
 # DN attempts to evict the original replica from RAM disk, file deletion 
 fails as the replica is in use.
 # Directory scanner finds a replica on both RAM disk and persistent storage.
 The directory scanner should never delete the block on persistent storage.
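 A minimal sketch of that reconciliation rule (hypothetical, simplified types): when the same block shows up on a transient and a persistent volume, only the transient copy is ever a candidate for removal.
{code}
// Sketch only: prefer the persistent replica when duplicates are found.
import java.io.File;

class ScannerReconcileSketch {
  static final class FoundReplica {
    final File blockFile;
    final boolean onTransientStorage;
    FoundReplica(File blockFile, boolean onTransientStorage) {
      this.blockFile = blockFile;
      this.onTransientStorage = onTransientStorage;
    }
  }

  /** Returns the replica that may be removed, or null if neither should be touched. */
  static FoundReplica reconcile(FoundReplica a, FoundReplica b) {
    if (a.onTransientStorage && !b.onTransientStorage) return a;
    if (b.onTransientStorage && !a.onTransientStorage) return b;
    return null; // both persistent (or both transient): leave for normal handling
  }
}
{code}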



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6932) Balancer and Mover tools should ignore replicas on RAM_DISK

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156507#comment-14156507
 ] 

Hudson commented on HDFS-6932:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1914 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1914/])
HDFS-6932. Balancer and Mover tools should ignore replicas on RAM_DISK. 
(Contributed by Xiaoyu Yao) (arp: rev e8e7fbe81abc64a9ae3d2f3f62c088426073b2bf)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestStorageMover.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/StorageType.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java


 Balancer and Mover tools should ignore replicas on RAM_DISK
 ---

 Key: HDFS-6932
 URL: https://issues.apache.org/jira/browse/HDFS-6932
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Xiaoyu Yao
 Fix For: 3.0.0

 Attachments: HDFS-6932.0.patch, HDFS-6932.1.patch, HDFS-6932.2.patch, 
 HDFS-6932.3.patch


 Per title, balancer and mover should just ignore replicas on RAM disk instead 
 of attempting to move them to other nodes.
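 The gist of the change can be sketched as a simple filter (simplified, hypothetical method; the real patch touches Balancer.java and Mover.java): replicas whose storage type is RAM_DISK are skipped when candidate moves are collected.
{code}
// Sketch: skip RAM_DISK replicas when selecting sources for balancing/moving.
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hdfs.StorageType;

class SkipRamDiskSketch {
  /** Keep only the storage types a balancer/mover is allowed to touch. */
  static List<StorageType> movableTypes(List<StorageType> reported) {
    List<StorageType> result = new ArrayList<StorageType>();
    for (StorageType t : reported) {
      if (t == StorageType.RAM_DISK) {
        continue; // memory replicas are transient; never schedule them for movement
      }
      result.add(t);
    }
    return result;
  }
}
{code}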



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6482) Use block ID-based block layout on datanodes

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156514#comment-14156514
 ] 

Hudson commented on HDFS-6482:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1914 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1914/])
HDFS-6482. Fix CHANGES.txt in trunk (arp: rev 
be30c86cc9f71894dc649ed22983e5c42e9b6951)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Use block ID-based block layout on datanodes
 

 Key: HDFS-6482
 URL: https://issues.apache.org/jira/browse/HDFS-6482
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.0.0
Reporter: James Thomas
Assignee: James Thomas
 Fix For: 2.6.0

 Attachments: 6482-design.doc, HDFS-6482.1.patch, HDFS-6482.2.patch, 
 HDFS-6482.3.patch, HDFS-6482.4.patch, HDFS-6482.5.patch, HDFS-6482.6.patch, 
 HDFS-6482.7.patch, HDFS-6482.8.patch, HDFS-6482.9.patch, HDFS-6482.patch, 
 hadoop-24-datanode-dir.tgz


 Right now blocks are placed into directories that are split into many 
 subdirectories when capacity is reached. Instead we can use a block's ID to 
 determine the path it should go in. This eliminates the need for the LDir 
 data structure that facilitates the splitting of directories when they reach 
 capacity as well as fields in ReplicaInfo that keep track of a replica's 
 location.
 An extension of the work in HDFS-3290.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6930) Improve replica eviction from RAM disk

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156509#comment-14156509
 ] 

Hudson commented on HDFS-6930:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1914 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1914/])
HDFS-6930. Improve replica eviction from RAM disk. (Arpit Agarwal) (arp: rev 
cb9b485075ce773f2d6189aa2f54bbc69aad4dab)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyWriteReplicaTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


 Improve replica eviction from RAM disk
 --

 Key: HDFS-6930
 URL: https://issues.apache.org/jira/browse/HDFS-6930
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6930.01.patch, HDFS-6930.02.patch


 The current replica eviction scheme is inefficient since it performs multiple 
 file operations in the context of block allocation.
 A better implementation would be asynchronous eviction when free space on RAM 
 disk falls below a low watermark to make block allocation faster.
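 A rough sketch of the asynchronous variant, with made-up names and stubbed volume accounting (not the committed FsDatasetImpl code): a background task evicts replicas whenever free RAM-disk space drops below the low watermark, so the write path no longer does file operations itself.
{code}
// Sketch: watermark-driven background eviction instead of evicting inside
// block allocation. Free-space accounting and actual eviction are stubbed out.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class AsyncEvictorSketch {
  private final long lowWatermarkBytes;   // e.g. a few block sizes of headroom
  private final ScheduledExecutorService executor =
      Executors.newSingleThreadScheduledExecutor();

  AsyncEvictorSketch(long lowWatermarkBytes) {
    this.lowWatermarkBytes = lowWatermarkBytes;
  }

  void start() {
    executor.scheduleWithFixedDelay(this::evictIfNeeded, 0, 5, TimeUnit.SECONDS);
  }

  private void evictIfNeeded() {
    while (freeSpaceOnRamDisk() < lowWatermarkBytes && evictOneReplica()) {
      // keep evicting until we are back above the watermark
    }
  }

  // Stubs standing in for FsVolumeImpl / replica-tracker interactions.
  private long freeSpaceOnRamDisk() { return Long.MAX_VALUE; }
  private boolean evictOneReplica() { return false; }
}
{code}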



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6977) Delete all copies when a block is deleted from the block space

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156516#comment-14156516
 ] 

Hudson commented on HDFS-6977:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1914 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1914/])
HDFS-6977. Delete all copies when a block is deleted from the block space. 
(Arpit Agarwal) (arp: rev ccdf0054a354fc110124b83de742c2ee6076449e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/LazyWriteReplicaTracker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java


 Delete all copies when a block is deleted from the block space
 --

 Key: HDFS-6977
 URL: https://issues.apache.org/jira/browse/HDFS-6977
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Nathan Yao
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6977.01.patch, HDFS-6977.02.patch, 
 HDFS-6977.03.patch


 When a block is deleted from RAM disk we should also delete the copies 
 written to lazyPersist/.
 Reported by [~xyao]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7143) Fix findbugs warnings in HDFS-6581 branch

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156515#comment-14156515
 ] 

Hudson commented on HDFS-7143:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1914 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1914/])
HDFS-7143. Fix findbugs warnings in HDFS-6581 branch. (Contributed by Tsz Wo 
Nicholas Sze) (arp: rev feda4733a8279485fc0ff1271f9c22bc44f333f6)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java


 Fix findbugs warnings in HDFS-6581 branch
 -

 Key: HDFS-7143
 URL: https://issues.apache.org/jira/browse/HDFS-7143
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: 3.0.0

 Attachments: h7143_20140925.patch


 There are 4 findbugs warnings reported by Jenkins.
 https://builds.apache.org/job/PreCommit-HDFS-Build/8064/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6931) Move lazily persisted replicas to finalized directory on DN startup

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156508#comment-14156508
 ] 

Hudson commented on HDFS-6931:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1914 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1914/])
HDFS-6931. Move lazily persisted replicas to finalized directory on DN startup. 
(Arpit Agarwal) (arp: rev c92837aeab5188f6171d4016f91b3b4936a66beb)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java


 Move lazily persisted replicas to finalized directory on DN startup
 ---

 Key: HDFS-6931
 URL: https://issues.apache.org/jira/browse/HDFS-6931
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6931.01.patch


 On restart the DN should move replicas from the {{current/lazyPersist/}} 
 directory to {{current/finalized}}. Duplicate replicas of the same block 
 should be deleted from RAM disk.
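 For illustration, a sketch of the startup pass under simplifying assumptions (flat directories, hypothetical helper; the real work happens in BlockPoolSlice and FsVolumeImpl): block and meta files found under {{current/lazyPersist/}} are moved into {{current/finalized}} before the volume is used.
{code}
// Sketch only: promote lazily persisted replicas to finalized on startup.
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

class LazyPersistStartupSketch {
  static void promoteLazyPersistReplicas(File lazyPersistDir, File finalizedDir)
      throws IOException {
    File[] files = lazyPersistDir.listFiles();
    if (files == null) {
      return; // nothing lazily persisted before the restart
    }
    for (File f : files) {
      // Move both block files and their .meta companions into finalized/.
      Files.move(f.toPath(), new File(finalizedDir, f.getName()).toPath(),
                 StandardCopyOption.REPLACE_EXISTING);
    }
  }
}
{code}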



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6581) Write to single replica in memory

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156513#comment-14156513
 ] 

Hudson commented on HDFS-6581:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1914 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1914/])
HDFS-6950. Add Additional unit tests for HDFS-6581. (Contributed by Xiaoyu Yao) 
(arp: rev 762b04e9943d6a05e1130fc81ada5b5dc8baab2c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
HDFS-7064. Fix unit test failures in HDFS-6581 branch. (Contributed by Xiaoyu 
Yao) (arp: rev 4603e4481f0486afcce6b106d4a92a6e90e5b6d9)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataDirs.java
HDFS-7079. Few more unit test fixes for HDFS-6581. (Arpit Agarwal) (arp: rev 
dcbc46730131a1bdf8416efeb4571794e5c8e369)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
HDFS-7143. Fix findbugs warnings in HDFS-6581 branch. (Contributed by Tsz Wo 
Nicholas Sze) (arp: rev feda4733a8279485fc0ff1271f9c22bc44f333f6)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
HDFS-7171. Fix Jenkins failures in HDFS-6581 branch. (Arpit Agarwal) (arp: rev 
a45ad330facc56f06ed42eb71304c49ef56dc549)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/mover/TestStorageMover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
HDFS-6581. Update CHANGES.txt in preparation for trunk merge (arp: rev 
04b08431a3446300f4715cf135f0e60f85e5bf5a)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Write to single replica in memory
 -

 Key: HDFS-6581
 URL: https://issues.apache.org/jira/browse/HDFS-6581
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, namenode
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6581.merge.01.patch, HDFS-6581.merge.02.patch, 
 HDFS-6581.merge.03.patch, HDFS-6581.merge.04.patch, HDFS-6581.merge.05.patch, 
 HDFS-6581.merge.06.patch, HDFS-6581.merge.07.patch, HDFS-6581.merge.08.patch, 
 HDFS-6581.merge.09.patch, HDFS-6581.merge.10.patch, HDFS-6581.merge.11.patch, 
 HDFS-6581.merge.12.patch, HDFS-6581.merge.14.patch, HDFS-6581.merge.15.patch, 
 HDFSWriteableReplicasInMemory.pdf, 
 Test-Plan-for-HDFS-6581-Memory-Storage.pdf, 
 Test-Plan-for-HDFS-6581-Memory-Storage.pdf


 Per discussion with the community on HDFS-5851, we will implement writing to 
 a single replica in DN memory via DataTransferProtocol.
 This avoids some of the issues with short-circuit writes, which we can 
 revisit at a later time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6923) Propagate LazyPersist flag to DNs via DataTransferProtocol

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156522#comment-14156522
 ] 

Hudson commented on HDFS-6923:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1914 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1914/])
HDFS-6923. Propagate LazyPersist flag to DNs via DataTransferProtocol. (Arpit 
Agarwal) (aagarwal: rev c2354a7f81ff5a48a5b65d25e1036d3e0ba86420)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Sender.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Receiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


 Propagate LazyPersist flag to DNs via DataTransferProtocol
 --

 Key: HDFS-6923
 URL: https://issues.apache.org/jira/browse/HDFS-6923
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6923.01.patch, HDFS-6923.02.patch


 If the LazyPersist flag is set in the file properties, the DFSClient will 
 propagate it to the DataNode via DataTransferProtocol.
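 From the client side the flag is requested at create time; a minimal usage sketch (paths, buffer and block sizes below are arbitrary) using the CreateFlag.LAZY_PERSIST value with the flag-taking FileSystem.create overload:
{code}
// Sketch: request lazy-persist (memory) writes at create time; the DFSClient
// then carries the flag to the DataNode over DataTransferProtocol.
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class LazyPersistCreateSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/tmp/lazy-persist-demo");

    try (FSDataOutputStream out = fs.create(file,
        FsPermission.getFileDefault(),
        EnumSet.of(CreateFlag.CREATE, CreateFlag.LAZY_PERSIST),
        4096,                 // buffer size
        (short) 1,            // lazy-persist files are single-replica
        128 * 1024 * 1024L,   // block size
        null)) {              // no progress callback
      out.writeBytes("written to RAM disk first, persisted lazily\n");
    }
  }
}
{code}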



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7108) Fix unit test failures in SimulatedFsDataset

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156532#comment-14156532
 ] 

Hudson commented on HDFS-7108:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1914 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1914/])
HDFS-7108. Fix unit test failures in SimulatedFsDataset. (Arpit Agarwal) (arp: 
rev 50b321068d32d404cc9b5d392f0e20d48cabbf2b)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java


 Fix unit test failures in SimulatedFsDataset
 

 Key: HDFS-7108
 URL: https://issues.apache.org/jira/browse/HDFS-7108
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-7108.01.patch


 HDFS-7100 introduced a few unit test failures due to 
 UnsupportedOperationException in {{SimulatedFsDataset.getVolume}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6927) Add unit tests

2014-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14156527#comment-14156527
 ] 

Hudson commented on HDFS-6927:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1914 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1914/])
HDFS-6927. Initial unit tests for Lazy Persist files. (Arpit Agarwal) 
(aagarwal: rev 3f64c4aaf00d92659ae992bfe7fe8403b4013ae6)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestLazyPersistFiles.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-6581.txt


 Add unit tests
 --

 Key: HDFS-6927
 URL: https://issues.apache.org/jira/browse/HDFS-6927
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-6581
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-6927.01.patch


 Add a bunch of unit tests to cover flag persistence, propagation to DN, 
 ability to write replicas to RAM disk, lazy writes to disk and eviction from 
 RAM disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

