[jira] [Commented] (HDFS-7090) Use unbuffered writes when persisting in-memory replicas

2014-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14171036#comment-14171036
 ] 

Hudson commented on HDFS-7090:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1926 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1926/])
HDFS-7090. Use unbuffered writes when persisting in-memory replicas. 
Contributed by Xiaoyu Yao. (cnauroth: rev 
1770bb942f9ebea38b6811ba0bc3cc249ef3ccbb)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/nativeio/TestNativeIO.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/errno_enum.c
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/Errno.java


> Use unbuffered writes when persisting in-memory replicas
> 
>
> Key: HDFS-7090
> URL: https://issues.apache.org/jira/browse/HDFS-7090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0
>
> Attachments: HDFS-7090.0.patch, HDFS-7090.1.patch, HDFS-7090.2.patch, 
> HDFS-7090.3.patch, HDFS-7090.4.patch
>
>
> The LazyWriter thread just uses {{FileUtils.copyFile}} to copy block files to 
> persistent storage. It would be better to use unbuffered writes to avoid 
> churning page cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7090) Use unbuffered writes when persisting in-memory replicas

2014-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170956#comment-14170956
 ] 

Hudson commented on HDFS-7090:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1901 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1901/])
HDFS-7090. Use unbuffered writes when persisting in-memory replicas. 
Contributed by Xiaoyu Yao. (cnauroth: rev 
1770bb942f9ebea38b6811ba0bc3cc249ef3ccbb)
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/Errno.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/errno_enum.c
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/nativeio/TestNativeIO.java


> Use unbuffered writes when persisting in-memory replicas
> 
>
> Key: HDFS-7090
> URL: https://issues.apache.org/jira/browse/HDFS-7090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0
>
> Attachments: HDFS-7090.0.patch, HDFS-7090.1.patch, HDFS-7090.2.patch, 
> HDFS-7090.3.patch, HDFS-7090.4.patch
>
>
> The LazyWriter thread just uses {{FileUtils.copyFile}} to copy block files to 
> persistent storage. It would be better to use unbuffered writes to avoid 
> churning page cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7090) Use unbuffered writes when persisting in-memory replicas

2014-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170810#comment-14170810
 ] 

Hudson commented on HDFS-7090:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #711 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/711/])
HDFS-7090. Use unbuffered writes when persisting in-memory replicas. 
Contributed by Xiaoyu Yao. (cnauroth: rev 
1770bb942f9ebea38b6811ba0bc3cc249ef3ccbb)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/nativeio/TestNativeIO.java
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/errno_enum.c
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/Errno.java
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Use unbuffered writes when persisting in-memory replicas
> 
>
> Key: HDFS-7090
> URL: https://issues.apache.org/jira/browse/HDFS-7090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0
>
> Attachments: HDFS-7090.0.patch, HDFS-7090.1.patch, HDFS-7090.2.patch, 
> HDFS-7090.3.patch, HDFS-7090.4.patch
>
>
> The LazyWriter thread just uses {{FileUtils.copyFile}} to copy block files to 
> persistent storage. It would be better to use unbuffered writes to avoid 
> churning page cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7090) Use unbuffered writes when persisting in-memory replicas

2014-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169657#comment-14169657
 ] 

Hudson commented on HDFS-7090:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6251 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6251/])
HDFS-7090. Use unbuffered writes when persisting in-memory replicas. 
Contributed by Xiaoyu Yao. (cnauroth: rev 
1770bb942f9ebea38b6811ba0bc3cc249ef3ccbb)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/Errno.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/nativeio/TestNativeIO.java
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/errno_enum.c


> Use unbuffered writes when persisting in-memory replicas
> 
>
> Key: HDFS-7090
> URL: https://issues.apache.org/jira/browse/HDFS-7090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0
>
> Attachments: HDFS-7090.0.patch, HDFS-7090.1.patch, HDFS-7090.2.patch, 
> HDFS-7090.3.patch, HDFS-7090.4.patch
>
>
> The LazyWriter thread just uses {{FileUtils.copyFile}} to copy block files to 
> persistent storage. It would be better to use unbuffered writes to avoid 
> churning page cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7090) Use unbuffered writes when persisting in-memory replicas

2014-10-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14167961#comment-14167961
 ] 

Hadoop QA commented on HDFS-7090:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12674299/HDFS-7090.4.patch
  against trunk revision f4b7e99.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing
  
org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8399//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8399//artifact/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8399//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8399//console

This message is automatically generated.

> Use unbuffered writes when persisting in-memory replicas
> 
>
> Key: HDFS-7090
> URL: https://issues.apache.org/jira/browse/HDFS-7090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0
>
> Attachments: HDFS-7090.0.patch, HDFS-7090.1.patch, HDFS-7090.2.patch, 
> HDFS-7090.3.patch, HDFS-7090.4.patch
>
>
> The LazyWriter thread just uses {{FileUtils.copyFile}} to copy block files to 
> persistent storage. It would be better to use unbuffered writes to avoid 
> churning page cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7090) Use unbuffered writes when persisting in-memory replicas

2014-10-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14167941#comment-14167941
 ] 

Hadoop QA commented on HDFS-7090:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12674290/HDFS-7090.3.patch
  against trunk revision f4b7e99.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing
  org.apache.hadoop.hdfs.server.balancer.TestBalancer
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot

  The following test timeouts occurred in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.TestDatanodeBlockScanner

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8398//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8398//artifact/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8398//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8398//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8398//console

This message is automatically generated.

> Use unbuffered writes when persisting in-memory replicas
> 
>
> Key: HDFS-7090
> URL: https://issues.apache.org/jira/browse/HDFS-7090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0
>
> Attachments: HDFS-7090.0.patch, HDFS-7090.1.patch, HDFS-7090.2.patch, 
> HDFS-7090.3.patch, HDFS-7090.4.patch
>
>
> The LazyWriter thread just uses {{FileUtils.copyFile}} to copy block files to 
> persistent storage. It would be better to use unbuffered writes to avoid 
> churning page cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7090) Use unbuffered writes when persisting in-memory replicas

2014-10-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14167781#comment-14167781
 ] 

Chris Nauroth commented on HDFS-7090:
-

Thanks for the additional update.  +1 for patch v4 pending Jenkins.

> Use unbuffered writes when persisting in-memory replicas
> 
>
> Key: HDFS-7090
> URL: https://issues.apache.org/jira/browse/HDFS-7090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0
>
> Attachments: HDFS-7090.0.patch, HDFS-7090.1.patch, HDFS-7090.2.patch, 
> HDFS-7090.3.patch, HDFS-7090.4.patch
>
>
> The LazyWriter thread just uses {{FileUtils.copyFile}} to copy block files to 
> persistent storage. It would be better to use unbuffered writes to avoid 
> churning page cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7090) Use unbuffered writes when persisting in-memory replicas

2014-10-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14167740#comment-14167740
 ] 

Chris Nauroth commented on HDFS-7090:
-

It looks like we're still going to get a findbugs warning for ignoring the 
return value of {{destFile.setLastModified(srcFile.lastModified());}}.  If you 
upload one more patch fixing that, then I'll be +1, pending Jenkins run.  
Thanks again, Xiaoyu!
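
A minimal sketch of the kind of check Findbugs is asking for here; the helper class below is illustrative and not part of the patch, only the {{setLastModified}} check reflects the comment above.

{code}
import java.io.File;
import java.io.IOException;

final class TimestampCheckExample {
  private TimestampCheckExample() {}

  /** Illustrative helper: propagate the source mtime and fail loudly if it cannot be set. */
  static void copyLastModified(File srcFile, File destFile) throws IOException {
    if (!destFile.setLastModified(srcFile.lastModified())) {
      throw new IOException("Failed to set the last-modified time of " + destFile);
    }
  }
}
{code}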

> Use unbuffered writes when persisting in-memory replicas
> 
>
> Key: HDFS-7090
> URL: https://issues.apache.org/jira/browse/HDFS-7090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0
>
> Attachments: HDFS-7090.0.patch, HDFS-7090.1.patch, HDFS-7090.2.patch, 
> HDFS-7090.3.patch
>
>
> The LazyWriter thread just uses {{FileUtils.copyFile}} to copy block files to 
> persistent storage. It would be better to use unbuffered writes to avoid 
> churning page cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7090) Use unbuffered writes when persisting in-memory replicas

2014-10-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14167695#comment-14167695
 ] 

Chris Nauroth commented on HDFS-7090:
-

The release audit warning is unrelated.

The hadoop-common findbugs warnings are unrelated, but we need to address the 
findbugs warnings in hadoop-hdfs.  It turns out this is echoing one of my past 
comments about checking the return code of some of the {{File}} methods.

The "build failed in hadoop-hdfs" looks bogus to me.  It couldn't find the new 
{{NativeIO}} method.  I've seen this happen sometimes on pre-commit builds for 
patches that span hadoop-common and another module.  It's probably worth just 
waiting for the next run on the next patch revision.

{code}
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build@2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java:[1064,14]
 cannot find symbol
[ERROR] symbol  : method copyFileUnbuffered(java.io.File,java.io.File)
[ERROR] location: class org.apache.hadoop.io.nativeio.NativeIO
{code}


> Use unbuffered writes when persisting in-memory replicas
> 
>
> Key: HDFS-7090
> URL: https://issues.apache.org/jira/browse/HDFS-7090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0
>
> Attachments: HDFS-7090.0.patch, HDFS-7090.1.patch, HDFS-7090.2.patch
>
>
> The LazyWriter thread just uses {{FileUtils.copyFile}} to copy block files to 
> persistent storage. It would be better to use unbuffered writes to avoid 
> churning page cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7090) Use unbuffered writes when persisting in-memory replicas

2014-10-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14167681#comment-14167681
 ] 

Hadoop QA commented on HDFS-7090:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12674264/HDFS-7090.2.patch
  against trunk revision d3d3d47.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.ha.TestZKFailoverControllerStress

  The test build failed in 
hadoop-hdfs-project/hadoop-hdfs 

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8396//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8396//artifact/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8396//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8396//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8396//console

This message is automatically generated.

> Use unbuffered writes when persisting in-memory replicas
> 
>
> Key: HDFS-7090
> URL: https://issues.apache.org/jira/browse/HDFS-7090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0
>
> Attachments: HDFS-7090.0.patch, HDFS-7090.1.patch, HDFS-7090.2.patch
>
>
> The LazyWriter thread just uses {{FileUtils.copyFile}} to copy block files to 
> persistent storage. It would be better to use unbuffered writes to avoid 
> churning page cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7090) Use unbuffered writes when persisting in-memory replicas

2014-10-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14167678#comment-14167678
 ] 

Chris Nauroth commented on HDFS-7090:
-

Thanks again for incorporating the feedback.  I verified that patch v2 builds 
fine on Mac.  I think there are just 2 more minor things, plus addressing 
anything that comes back from Jenkins, and then this is done.
# {{Storage#nativeCopyFileUnbuffered}}: The line {{destFile.delete()}} could 
fail and return {{false}}.  Shall we check for this and throw an exception?
# Really minor nitpick: In the test, you can remove the null checks on 
{{channel}} and {{raSrcFile}}.  The null checks are already done inside 
{{IOUtils#cleanup}}.
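
A hedged sketch of the first point above, checking the {{File#delete}} return value; the method and class names are placeholders, not the committed code. The second point needs no new code, since {{IOUtils#cleanup}} already skips null arguments.

{code}
import java.io.File;
import java.io.IOException;

final class DeleteCheckExample {
  private DeleteCheckExample() {}

  /**
   * Illustrative only: treat a false return from File#delete as an error
   * instead of silently ignoring it.
   */
  static void deleteOrFail(File destFile) throws IOException {
    if (destFile.exists() && !destFile.delete()) {
      throw new IOException("Failed to delete " + destFile);
    }
  }
}
{code}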

> Use unbuffered writes when persisting in-memory replicas
> 
>
> Key: HDFS-7090
> URL: https://issues.apache.org/jira/browse/HDFS-7090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0
>
> Attachments: HDFS-7090.0.patch, HDFS-7090.1.patch, HDFS-7090.2.patch
>
>
> The LazyWriter thread just uses {{FileUtils.copyFile}} to copy block files to 
> persistent storage. It would be better to use unbuffered writes to avoid 
> churning page cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7090) Use unbuffered writes when persisting in-memory replicas

2014-10-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14167230#comment-14167230
 ] 

Chris Nauroth commented on HDFS-7090:
-

bq. This is to create a local TEST_DIR declared in testCopyFileUnbuffered(), I 
change it to a different name to avoid the confusion with the class member 
variable TEST_DIR.

Oh, now I see it.  Thanks for clarifying.  Using a different variable name 
definitely would help.  Also, please use {{assertTrue}} instead of 
{{assumeTrue}} here.  {{assumeTrue}} would cause JUnit to ignore the test if 
the mkdir failed.  We probably want to see a test failure if we can't create 
the directory.
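
A minimal JUnit sketch of the distinction above; the directory name is a placeholder. {{assumeTrue}} would skip the test when {{mkdirs}} fails, while {{assertTrue}} reports it as a failure.

{code}
import static org.junit.Assert.assertTrue;

import java.io.File;
import org.junit.Test;

public class MkdirAssertionExample {
  @Test
  public void testSetupDirectory() {
    // Placeholder path; the real test derives its directory from the build configuration.
    File localTestDir = new File("target/test-dir", "testCopyFileUnbuffered");
    // assumeTrue(localTestDir.mkdirs()) would make JUnit skip the test when the
    // directory cannot be created; assertTrue turns that into a visible failure.
    assertTrue(localTestDir.mkdirs() || localTestDir.isDirectory());
  }
}
{code}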

> Use unbuffered writes when persisting in-memory replicas
> 
>
> Key: HDFS-7090
> URL: https://issues.apache.org/jira/browse/HDFS-7090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0
>
> Attachments: HDFS-7090.0.patch, HDFS-7090.1.patch
>
>
> The LazyWriter thread just uses {{FileUtils.copyFile}} to copy block files to 
> persistent storage. It would be better to use unbuffered writes to avoid 
> churning page cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7090) Use unbuffered writes when persisting in-memory replicas

2014-10-10 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14167150#comment-14167150
 ] 

Xiaoyu Yao commented on HDFS-7090:
--

Thanks Chris for the review. I will address them in the next patch.

bq. The current patch still breaks native compilation on BSD-based systems that are not FreeBSD.
Good catch! I will fix it in the next patch.

bq. In the JavaDocs, I don't think we can link to {{Storage}}, because it's in hadoop-hdfs.
Will remove the cross project javadoc links.

bq. The test still has an unneeded {{mkdir}}. I think this line can be deleted: {{assumeTrue(TEST_DIR.mkdir())}}.
This is to create a local TEST_DIR declared in testCopyFileUnbuffered(), I change it to a different name to avoid the confusion with the class member variable TEST_DIR.

bq. Instead of catching exceptions and calling {{fail}}, we can just let the exception be thrown. We'll actually get more debugging information this way if a test fails, because JUnit will print the full stack trace.
Good point. I will fix it in the next patch.

bq. I recommend using {{IOUtils#cleanup}} to close {{channel}} and {{raSrcFile}}.
Fixed.

bq. Unfortunately, it looks like the build artifacts from the pre-commit job are gone, so I can't review the -1 from Jenkins.
The link is broken but I will try to repro it on my machine.



> Use unbuffered writes when persisting in-memory replicas
> 
>
> Key: HDFS-7090
> URL: https://issues.apache.org/jira/browse/HDFS-7090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0
>
> Attachments: HDFS-7090.0.patch, HDFS-7090.1.patch
>
>
> The LazyWriter thread just uses {{FileUtils.copyFile}} to copy block files to 
> persistent storage. It would be better to use unbuffered writes to avoid 
> churning page cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7090) Use unbuffered writes when persisting in-memory replicas

2014-10-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14167052#comment-14167052
 ] 

Chris Nauroth commented on HDFS-7090:
-

Just a couple more things:
# In {{Storage}}, please use {{org.apache.hadoop.fs.FileUtil#canWrite}} instead 
of {{java.io.File#canWrite}}.  The various "canX" permission checking methods 
in {{File}} are known to be buggy on Windows.  We work around this with 
corresponding methods in {{FileUtil}} that delegate to native code on Windows.
# Is it valid to capitalize the 'L' in {{@Link}}?  I've only ever seen 
lower-case 'l', so I don't know if this works.
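
A hypothetical excerpt showing the substitution suggested in point 1, assuming {{FileUtil#canWrite(File)}} as described above; the surrounding method is illustrative, not taken from the patch.

{code}
import java.io.File;
import java.io.IOException;

import org.apache.hadoop.fs.FileUtil;

final class WritableCheckExample {
  private WritableCheckExample() {}

  /**
   * Illustrative excerpt: FileUtil#canWrite delegates to native code on Windows,
   * unlike java.io.File#canWrite, which is unreliable there.
   */
  static void ensureWritable(File dir) throws IOException {
    if (!FileUtil.canWrite(dir)) {
      throw new IOException("Directory is not writable: " + dir);
    }
  }
}
{code}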

> Use unbuffered writes when persisting in-memory replicas
> 
>
> Key: HDFS-7090
> URL: https://issues.apache.org/jira/browse/HDFS-7090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0
>
> Attachments: HDFS-7090.0.patch, HDFS-7090.1.patch
>
>
> The LazyWriter thread just uses {{FileUtils.copyFile}} to copy block files to 
> persistent storage. It would be better to use unbuffered writes to avoid 
> churning page cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7090) Use unbuffered writes when persisting in-memory replicas

2014-10-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14167030#comment-14167030
 ] 

Chris Nauroth commented on HDFS-7090:
-

Thanks for incorporating the feedback, Xiaoyu.  Here are a few additional 
comments:
# The current patch still breaks native compilation on BSD-based systems that 
are not FreeBSD.  Specifically, I'm thinking of OSX.  {{__FreeBSD__}} won't be 
defined for the ifdef in that case.
# In the JavaDocs, I don't think we can link to {{Storage}}, because it's in 
hadoop-hdfs.  hadoop-common doesn't have a dependency on hadoop-hdfs, so the 
javadoc tool wouldn't be able to find the method.  I'm not sure why this wasn't 
flagged as a javadoc warning.
# The test still has an unneeded {{mkdir}}.  I think this line can be deleted: 
{{assumeTrue(TEST_DIR.mkdir())}}.
# Instead of catching exceptions and calling {{fail}}, we can just let the 
exception be thrown.  We'll actually get more debugging information this way if 
a test fails, because JUnit will print the full stack trace.
# I recommend using {{IOUtils#cleanup}} to close {{channel}} and {{raSrcFile}}. 
 If an exception is thrown in the try block, but then a {{close}} also fails in 
the finally block, then the second exception will mask the first.  It's 
probably going to be more helpful for us to see the exception from the try 
block.
# Unfortunately, it looks like the build artifacts from the pre-commit job are 
gone, so I can't review the -1 from Jenkins.  I expect the release audit 
warning is unrelated.  Let's investigate findbugs and tests.
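
A minimal sketch of points 4 and 5 above, assuming JUnit 4 and Hadoop's {{IOUtils}}; the file path is a placeholder and this is not the test from the patch, only the structure it suggests.

{code}
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

import org.apache.hadoop.io.IOUtils;
import org.junit.Test;

public class CleanupPatternExample {

  // Points 4 and 5: declare the exception on the test method instead of
  // catching it and calling fail(), and close resources through IOUtils#cleanup
  // so a failed close cannot mask the original exception.
  @Test
  public void testWriteAndClose() throws IOException {
    RandomAccessFile raSrcFile = null;
    FileChannel channel = null;
    try {
      // Placeholder file name.
      raSrcFile = new RandomAccessFile(new File("target/example-src.dat"), "rw");
      channel = raSrcFile.getChannel();
      channel.write(ByteBuffer.wrap(new byte[] {1, 2, 3, 4}));
    } finally {
      // IOUtils#cleanup already skips null arguments, so no explicit null checks.
      IOUtils.cleanup(null, channel, raSrcFile);
    }
  }
}
{code}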

> Use unbuffered writes when persisting in-memory replicas
> 
>
> Key: HDFS-7090
> URL: https://issues.apache.org/jira/browse/HDFS-7090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0
>
> Attachments: HDFS-7090.0.patch, HDFS-7090.1.patch
>
>
> The LazyWriter thread just uses {{FileUtils.copyFile}} to copy block files to 
> persistent storage. It would be better to use unbuffered writes to avoid 
> churning page cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7090) Use unbuffered writes when persisting in-memory replicas

2014-10-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14166577#comment-14166577
 ] 

Hadoop QA commented on HDFS-7090:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12674130/HDFS-7090.1.patch
  against trunk revision cb81bac.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:red}-1 core tests{color}.  The test build failed in 
hadoop-hdfs-project/hadoop-hdfs 

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8389//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8389//artifact/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8389//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8389//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8389//console

This message is automatically generated.

> Use unbuffered writes when persisting in-memory replicas
> 
>
> Key: HDFS-7090
> URL: https://issues.apache.org/jira/browse/HDFS-7090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0
>
> Attachments: HDFS-7090.0.patch, HDFS-7090.1.patch
>
>
> The LazyWriter thread just uses {{FileUtils.copyFile}} to copy block files to 
> persistent storage. It would be better to use unbuffered writes to avoid 
> churning page cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7090) Use unbuffered writes when persisting in-memory replicas

2014-10-09 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14165716#comment-14165716
 ] 

Chris Nauroth commented on HDFS-7090:
-

Hi, Xiaoyu.  The patch looks good.  In addition to investigating the test 
failures, here are a few comments:
# I noticed that {{fstat}} can result in errno {{EOVERFLOW}} according to the 
man page.  Can you please add a mapping for this to errno_enum.c?  This 
probably will never happen in practice, but just in case, it would be nice to 
get a clear diagnostic.
# I don't think the test needs to do {{TEST_DIR.mkdirs()}}.  This is already 
done in the {{Before}} method.
# Also in the test, let's write some bytes into the file before copying it.  
Otherwise, I'm not sure if it's fully exercising the change.
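
A hedged sketch of point 3: write data into the source file before copying so the copy path is genuinely exercised. The {{NativeIO.copyFileUnbuffered(File, File)}} call is the method named in this thread's build output; its exact final signature is assumed here, and the file names are placeholders.

{code}
import static org.junit.Assert.assertEquals;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

import org.apache.hadoop.io.nativeio.NativeIO;
import org.junit.Test;

public class CopyFileUnbufferedSketch {

  // Point 3 above: put real data into the source file before copying, then check
  // the destination length, so an empty-file copy cannot pass vacuously.
  @Test
  public void testCopyNonEmptyFile() throws IOException {
    // Placeholder file names.
    File src = new File("target/copy-src.dat");
    File dst = new File("target/copy-dst.dat");
    byte[] data = new byte[64 * 1024];
    for (int i = 0; i < data.length; i++) {
      data[i] = (byte) i;
    }
    try (FileOutputStream out = new FileOutputStream(src)) {
      out.write(data);
    }
    // copyFileUnbuffered(File, File) is the method named in this thread's build
    // output; its final signature is assumed here.
    NativeIO.copyFileUnbuffered(src, dst);
    assertEquals(data.length, dst.length());
  }
}
{code}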


> Use unbuffered writes when persisting in-memory replicas
> 
>
> Key: HDFS-7090
> URL: https://issues.apache.org/jira/browse/HDFS-7090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0
>
> Attachments: HDFS-7090.0.patch
>
>
> The LazyWriter thread just uses {{FileUtils.copyFile}} to copy block files to 
> persistent storage. It would be better to use unbuffered writes to avoid 
> churning page cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7090) Use unbuffered writes when persisting in-memory replicas

2014-10-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14165687#comment-14165687
 ] 

Hadoop QA commented on HDFS-7090:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12673839/HDFS-7090.0.patch
  against trunk revision db71bb5.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
  org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing
  
org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
  org.apache.hadoop.hdfs.server.balancer.TestBalancer
  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles
  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8381//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8381//artifact/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8381//console

This message is automatically generated.

> Use unbuffered writes when persisting in-memory replicas
> 
>
> Key: HDFS-7090
> URL: https://issues.apache.org/jira/browse/HDFS-7090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0
>
> Attachments: HDFS-7090.0.patch
>
>
> The LazyWriter thread just uses {{FileUtils.copyFile}} to copy block files to 
> persistent storage. It would be better to use unbuffered writes to avoid 
> churning page cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)