[jira] [Updated] (HDFS-4001) TestSafeMode#testInitializeReplQueuesEarly may time out

2012-10-02 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-4001:
--

Attachment: timeout.txt.gz

Full test log attached.

> TestSafeMode#testInitializeReplQueuesEarly may time out
> ---
>
> Key: HDFS-4001
> URL: https://issues.apache.org/jira/browse/HDFS-4001
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
> Attachments: timeout.txt.gz
>
>
> Saw this failure on a recent branch-2 jenkins run; it has also been seen on 
> trunk.
> {noformat}
> java.util.concurrent.TimeoutException: Timed out waiting for condition
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:107)
>   at 
> org.apache.hadoop.hdfs.TestSafeMode.testInitializeReplQueuesEarly(TestSafeMode.java:191)
> {noformat}
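The timeout above is thrown by a condition-polling test helper. As a hedged, self-contained sketch of that pattern (the names and structure here are illustrative, not the actual GenericTestUtils implementation):

```java
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

public class WaitFor {
    // Re-check a condition every checkEveryMillis until waitForMillis
    // elapses, then fail with TimeoutException instead of hanging.
    public static void waitFor(Supplier<Boolean> check, long checkEveryMillis,
                               long waitForMillis)
            throws TimeoutException, InterruptedException {
        long deadline = System.currentTimeMillis() + waitForMillis;
        while (true) {
            if (check.get()) {
                return; // condition satisfied
            }
            if (System.currentTimeMillis() >= deadline) {
                throw new TimeoutException("Timed out waiting for condition");
            }
            Thread.sleep(checkEveryMillis);
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        // Condition becomes true after roughly 50 ms.
        waitFor(() -> System.currentTimeMillis() - start > 50, 10, 1000);
        System.out.println("condition met");
    }
}
```

Because the helper takes a fixed overall timeout, a NameNode that is merely slow to initialize its replication queues surfaces as a TimeoutException in the test rather than a hang.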

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4001) TestSafeMode#testInitializeReplQueuesEarly may time out

2012-10-02 Thread Eli Collins (JIRA)
Eli Collins created HDFS-4001:
-

 Summary: TestSafeMode#testInitializeReplQueuesEarly may time out
 Key: HDFS-4001
 URL: https://issues.apache.org/jira/browse/HDFS-4001
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins


Saw this failure on a recent branch-2 jenkins run; it has also been seen on trunk.

{noformat}
java.util.concurrent.TimeoutException: Timed out waiting for condition
at 
org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:107)
at 
org.apache.hadoop.hdfs.TestSafeMode.testInitializeReplQueuesEarly(TestSafeMode.java:191)
{noformat}




[jira] [Commented] (HDFS-3753) Tests don't run with native libraries

2012-10-02 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468340#comment-13468340
 ] 

Eli Collins commented on HDFS-3753:
---

Looks like TestNativeCodeLoader is failing in some other runs, eg HDFS-3995.

> Tests don't run with native libraries
> -
>
> Key: HDFS-3753
> URL: https://issues.apache.org/jira/browse/HDFS-3753
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, test
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Colin Patrick McCabe
> Fix For: 2.0.3-alpha
>
> Attachments: HDFS-3753.001.patch, HDFS-3753.002.patch
>
>
> Test execution with the native flag, when the native libraries have been 
> built, doesn't actually use the native libs because NativeCodeLoader is unable 
> to load native-hadoop. E.g. run {{mvn compile -Pnative}} then {{mvn 
> -Dtest=TestSeekBug test -Pnative}} and check the test output. This is because 
> the test's java.library.path is looking for the lib in hdfs (
> hadoop-hdfs-project/hadoop-hdfs/target/native/target/usr/local/lib) however 
> the native lib lives in common. I confirmed copying the lib to the 
> appropriate directory fixes things. We need to update the java.library.path 
> for test execution to include the common lib dir.  This may be an issue with 
> MR as well.
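As a hedged diagnostic sketch (pure JDK, not Hadoop code): NativeCodeLoader's check ultimately comes down to whether System.loadLibrary can find libhadoop on java.library.path, which can be reproduced standalone:

```java
public class NativeLibCheck {
    // Returns true only if libhadoop.so (or the platform equivalent) is
    // resolvable from the current java.library.path; this mirrors the
    // load attempt NativeCodeLoader makes, but is not its actual code.
    public static boolean canLoadNativeHadoop() {
        try {
            System.loadLibrary("hadoop");
            return true;
        } catch (UnsatisfiedLinkError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("java.library.path = "
                + System.getProperty("java.library.path"));
        System.out.println("native-hadoop loadable: " + canLoadNativeHadoop());
    }
}
```

Printing java.library.path this way is a quick check that the surefire configuration includes the common native lib directory, not just the hdfs one.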



[jira] [Updated] (HDFS-3652) 1.x: FSEditLog failure removes the wrong edit stream when storage dirs have same name

2012-10-02 Thread Matt Foley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HDFS-3652:
-

Target Version/s: 1.0.4  (was: 1.0.4, 1.1.0)

> 1.x: FSEditLog failure removes the wrong edit stream when storage dirs have 
> same name
> -
>
> Key: HDFS-3652
> URL: https://issues.apache.org/jira/browse/HDFS-3652
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.0.3, 1.1.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Blocker
> Fix For: 1.0.4
>
> Attachments: hdfs-3652.txt
>
>
> In {{FSEditLog.removeEditsForStorageDir}}, we iterate over the edits streams 
> trying to find the stream corresponding to a given dir. To check equality, we 
> currently use the following condition:
> {code}
>   File parentDir = getStorageDirForStream(idx);
>   if (parentDir.getName().equals(sd.getRoot().getName())) {
> {code}
> ... which is horribly incorrect. If two or more storage dirs happen to have 
> the same terminal path component (eg /data/1/nn and /data/2/nn) then it will 
> pick the wrong stream(s) to remove.
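The flaw is easy to demonstrate in isolation; a minimal sketch using the paths from the example above (plain JDK, not the FSEditLog code itself):

```java
import java.io.File;

public class StorageDirCompare {
    public static void main(String[] args) {
        // Two distinct storage dirs that share a terminal path component.
        File a = new File("/data/1/nn");
        File b = new File("/data/2/nn");

        // The buggy comparison: only the last component is checked,
        // so different directories compare as equal.
        System.out.println(a.getName().equals(b.getName()));

        // Comparing the full resolved paths distinguishes them.
        System.out.println(a.getAbsoluteFile().equals(b.getAbsoluteFile()));
    }
}
```

Comparing full paths (or, better, canonical files) is the shape of fix the issue calls for, since getName() collapses /data/1/nn and /data/2/nn to the same key "nn".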



[jira] [Updated] (HDFS-3652) 1.x: FSEditLog failure removes the wrong edit stream when storage dirs have same name

2012-10-02 Thread Matt Foley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HDFS-3652:
-

Fix Version/s: (was: 1.1.0)

> 1.x: FSEditLog failure removes the wrong edit stream when storage dirs have 
> same name
> -
>
> Key: HDFS-3652
> URL: https://issues.apache.org/jira/browse/HDFS-3652
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.0.3, 1.1.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Blocker
> Fix For: 1.0.4
>
> Attachments: hdfs-3652.txt
>
>
> In {{FSEditLog.removeEditsForStorageDir}}, we iterate over the edits streams 
> trying to find the stream corresponding to a given dir. To check equality, we 
> currently use the following condition:
> {code}
>   File parentDir = getStorageDirForStream(idx);
>   if (parentDir.getName().equals(sd.getRoot().getName())) {
> {code}
> ... which is horribly incorrect. If two or more storage dirs happen to have 
> the same terminal path component (eg /data/1/nn and /data/2/nn) then it will 
> pick the wrong stream(s) to remove.



[jira] [Commented] (HDFS-4000) TestParallelLocalRead fails with "input ByteBuffers must be direct buffers"

2012-10-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468315#comment-13468315
 ] 

Hadoop QA commented on HDFS-4000:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12547479/HDFS-4000.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.TestHdfsNativeCodeLoader

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3257//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3257//console

This message is automatically generated.

> TestParallelLocalRead fails with "input ByteBuffers must be direct buffers"
> ---
>
> Key: HDFS-4000
> URL: https://issues.apache.org/jira/browse/HDFS-4000
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.3-alpha
>Reporter: Eli Collins
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-4000.001.patch
>
>
> I think this may be related to HDFS-3753; the test passes when I revert it. 
> Here's the failure. It looks like it needs the same fix as 
> TestShortCircuitLocalRead; not sure why that didn't show up in the jenkins run.
> {noformat}
> java.lang.AssertionError: Check log for errors
>   at org.junit.Assert.fail(Assert.java:91)
>   at 
> org.apache.hadoop.hdfs.TestParallelReadUtil.runTestWorkload(TestParallelReadUtil.java:373)
>   at 
> org.apache.hadoop.hdfs.TestParallelLocalRead.testParallelReadByteBuffer(TestParallelLocalRead.java:61)
> {noformat}
> {noformat}
> 2012-10-02 15:39:49,481 ERROR hdfs.TestParallelReadUtil 
> (TestParallelReadUtil.java:run(227)) - ReadWorker-1-/TestParallelRead.dat.0: 
> Error while testing read at 199510 length 14773
> java.lang.IllegalArgumentException: input ByteBuffers must be direct buffers
>   at org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(Native 
> Method)
>   at 
> org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:57)
>   at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:291)
>   at 
> org.apache.hadoop.hdfs.BlockReaderLocal.doByteBufferRead(BlockReaderLocal.java:501)
>   at 
> org.apache.hadoop.hdfs.BlockReaderLocal.read(BlockReaderLocal.java:409)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream$ByteBufferStrategy.doRead(DFSInputStream.java:561)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:594)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:648)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:696)
>   at 
> org.apache.hadoop.hdfs.TestParallelReadUtil$DirectReadWorkerHelper.read(TestParallelReadUtil.java:91)
>   at 
> org.apache.hadoop.hdfs.TestParallelReadUtil$DirectReadWorkerHelper.pRead(TestParallelReadUtil.java:104)
>   at 
> org.apache.hadoop.hdfs.TestParallelReadUtil$ReadWorker.pRead(TestParallelReadUtil.java:275)
>   at 
> org.apache.hadoop.hdfs.TestParallelReadUtil$ReadWorker.run(TestParallelReadUtil.java:223)
> {noformat}
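For context, the native checksum code needs direct (off-heap) buffers because the JNI side requires a stable memory address; a minimal sketch of the distinction the exception enforces:

```java
import java.nio.ByteBuffer;

public class BufferKinds {
    public static void main(String[] args) {
        // Heap buffer: backed by a Java byte[], which the GC may move,
        // so JNI code cannot hold a raw pointer to it.
        ByteBuffer heap = ByteBuffer.allocate(1024);

        // Direct buffer: allocated off-heap at a fixed address, which is
        // what native verification code like NativeCrc32 requires.
        ByteBuffer direct = ByteBuffer.allocateDirect(1024);

        System.out.println(heap.isDirect());   // false
        System.out.println(direct.isDirect()); // true
    }
}
```

A test exercising the native read path therefore has to hand the reader buffers from allocateDirect(), which is presumably the shape of the fix referenced for TestShortCircuitLocalRead.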



[jira] [Commented] (HDFS-2946) HA: Put a cap on the number of completed edits files retained by the NN

2012-10-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468272#comment-13468272
 ] 

Hadoop QA commented on HDFS-2946:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12547474/HDFS-2946.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.TestHdfsNativeCodeLoader
  org.apache.hadoop.hdfs.TestHDFSFileSystemContract

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3256//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3256//console

This message is automatically generated.

> HA: Put a cap on the number of completed edits files retained by the NN
> ---
>
> Key: HDFS-2946
> URL: https://issues.apache.org/jira/browse/HDFS-2946
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, name-node
>Affects Versions: 2.0.1-alpha
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-2946.patch
>
>
> HDFS-2794 added a minimum number of transactions to retain in edits files. 
> Since many underlying file systems put a cap on the number of entries in a 
> single directory, we should put a cap on the number of edits files which will 
> be retained by the NN.



[jira] [Commented] (HDFS-2127) Add a test that ensure AccessControlExceptions contain a full path

2012-10-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468263#comment-13468263
 ] 

Hadoop QA commented on HDFS-2127:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12547472/HDFS-2127.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.TestHdfsNativeCodeLoader

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3255//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3255//console

This message is automatically generated.

> Add a test that ensure AccessControlExceptions contain a full path
> --
>
> Key: HDFS-2127
> URL: https://issues.apache.org/jira/browse/HDFS-2127
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: name-node
>Reporter: Eli Collins
>Assignee: Stephen Chu
>  Labels: newbie
> Attachments: HDFS-2127.patch
>
>
> HDFS-1628 added full paths to AccessControlExceptions; we should have a test 
> that covers the cases that were done manually in [this 
> comment|https://issues.apache.org/jira/browse/HDFS-1628?focusedCommentId=12996135&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12996135].



[jira] [Updated] (HDFS-4000) TestParallelLocalRead fails with "input ByteBuffers must be direct buffers"

2012-10-02 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4000:
---

Attachment: HDFS-4000.001.patch

> TestParallelLocalRead fails with "input ByteBuffers must be direct buffers"
> ---
>
> Key: HDFS-4000
> URL: https://issues.apache.org/jira/browse/HDFS-4000
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.3-alpha
>Reporter: Eli Collins
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-4000.001.patch
>
>
> I think this may be related to HDFS-3753; the test passes when I revert it. 
> Here's the failure. It looks like it needs the same fix as 
> TestShortCircuitLocalRead; not sure why that didn't show up in the jenkins run.
> {noformat}
> java.lang.AssertionError: Check log for errors
>   at org.junit.Assert.fail(Assert.java:91)
>   at 
> org.apache.hadoop.hdfs.TestParallelReadUtil.runTestWorkload(TestParallelReadUtil.java:373)
>   at 
> org.apache.hadoop.hdfs.TestParallelLocalRead.testParallelReadByteBuffer(TestParallelLocalRead.java:61)
> {noformat}
> {noformat}
> 2012-10-02 15:39:49,481 ERROR hdfs.TestParallelReadUtil 
> (TestParallelReadUtil.java:run(227)) - ReadWorker-1-/TestParallelRead.dat.0: 
> Error while testing read at 199510 length 14773
> java.lang.IllegalArgumentException: input ByteBuffers must be direct buffers
>   at org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(Native 
> Method)
>   at 
> org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:57)
>   at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:291)
>   at 
> org.apache.hadoop.hdfs.BlockReaderLocal.doByteBufferRead(BlockReaderLocal.java:501)
>   at 
> org.apache.hadoop.hdfs.BlockReaderLocal.read(BlockReaderLocal.java:409)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream$ByteBufferStrategy.doRead(DFSInputStream.java:561)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:594)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:648)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:696)
>   at 
> org.apache.hadoop.hdfs.TestParallelReadUtil$DirectReadWorkerHelper.read(TestParallelReadUtil.java:91)
>   at 
> org.apache.hadoop.hdfs.TestParallelReadUtil$DirectReadWorkerHelper.pRead(TestParallelReadUtil.java:104)
>   at 
> org.apache.hadoop.hdfs.TestParallelReadUtil$ReadWorker.pRead(TestParallelReadUtil.java:275)
>   at 
> org.apache.hadoop.hdfs.TestParallelReadUtil$ReadWorker.run(TestParallelReadUtil.java:223)
> {noformat}



[jira] [Updated] (HDFS-4000) TestParallelLocalRead fails with "input ByteBuffers must be direct buffers"

2012-10-02 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4000:
---

Status: Patch Available  (was: Open)

> TestParallelLocalRead fails with "input ByteBuffers must be direct buffers"
> ---
>
> Key: HDFS-4000
> URL: https://issues.apache.org/jira/browse/HDFS-4000
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.3-alpha
>Reporter: Eli Collins
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-4000.001.patch
>
>
> I think this may be related to HDFS-3753; the test passes when I revert it. 
> Here's the failure. It looks like it needs the same fix as 
> TestShortCircuitLocalRead; not sure why that didn't show up in the jenkins run.
> {noformat}
> java.lang.AssertionError: Check log for errors
>   at org.junit.Assert.fail(Assert.java:91)
>   at 
> org.apache.hadoop.hdfs.TestParallelReadUtil.runTestWorkload(TestParallelReadUtil.java:373)
>   at 
> org.apache.hadoop.hdfs.TestParallelLocalRead.testParallelReadByteBuffer(TestParallelLocalRead.java:61)
> {noformat}
> {noformat}
> 2012-10-02 15:39:49,481 ERROR hdfs.TestParallelReadUtil 
> (TestParallelReadUtil.java:run(227)) - ReadWorker-1-/TestParallelRead.dat.0: 
> Error while testing read at 199510 length 14773
> java.lang.IllegalArgumentException: input ByteBuffers must be direct buffers
>   at org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(Native 
> Method)
>   at 
> org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:57)
>   at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:291)
>   at 
> org.apache.hadoop.hdfs.BlockReaderLocal.doByteBufferRead(BlockReaderLocal.java:501)
>   at 
> org.apache.hadoop.hdfs.BlockReaderLocal.read(BlockReaderLocal.java:409)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream$ByteBufferStrategy.doRead(DFSInputStream.java:561)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:594)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:648)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:696)
>   at 
> org.apache.hadoop.hdfs.TestParallelReadUtil$DirectReadWorkerHelper.read(TestParallelReadUtil.java:91)
>   at 
> org.apache.hadoop.hdfs.TestParallelReadUtil$DirectReadWorkerHelper.pRead(TestParallelReadUtil.java:104)
>   at 
> org.apache.hadoop.hdfs.TestParallelReadUtil$ReadWorker.pRead(TestParallelReadUtil.java:275)
>   at 
> org.apache.hadoop.hdfs.TestParallelReadUtil$ReadWorker.run(TestParallelReadUtil.java:223)
> {noformat}



[jira] [Updated] (HDFS-2946) HA: Put a cap on the number of completed edits files retained by the NN

2012-10-02 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-2946:
-

Attachment: HDFS-2946.patch

Here's a patch which addresses the issue. It adds a new configuration option 
called "dfs.namenode.max.extra.edits.segments.retained" which caps the 
number of edit log segments the NN retains, regardless of the value of 
"dfs.namenode.num.extra.edits.retained".

This change required that NNStorageRetentionManager be able to enumerate the 
edit log segments available to it, so the bulk of the patch is plumbing that 
around.
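As an illustrative hdfs-site.xml sketch only (the property names come from the comment above; the values are hypothetical examples, not values taken from the patch), the two knobs would sit side by side:

```xml
<!-- Retain at least this many transactions' worth of edits... -->
<property>
  <name>dfs.namenode.num.extra.edits.retained</name>
  <value>1000000</value>
</property>
<!-- ...but never keep more than this many finalized edit log segments,
     so the edits directory cannot grow without bound in entry count. -->
<property>
  <name>dfs.namenode.max.extra.edits.segments.retained</name>
  <value>10000</value>
</property>
```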

> HA: Put a cap on the number of completed edits files retained by the NN
> ---
>
> Key: HDFS-2946
> URL: https://issues.apache.org/jira/browse/HDFS-2946
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, name-node
>Affects Versions: 2.0.1-alpha
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-2946.patch
>
>
> HDFS-2794 added a minimum number of transactions to retain in edits files. 
> Since many underlying file systems put a cap on the number of entries in a 
> single directory, we should put a cap on the number of edits files which will 
> be retained by the NN.



[jira] [Updated] (HDFS-2946) HA: Put a cap on the number of completed edits files retained by the NN

2012-10-02 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-2946:
-

Status: Patch Available  (was: Open)

> HA: Put a cap on the number of completed edits files retained by the NN
> ---
>
> Key: HDFS-2946
> URL: https://issues.apache.org/jira/browse/HDFS-2946
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, name-node
>Affects Versions: 2.0.1-alpha
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-2946.patch
>
>
> HDFS-2794 added a minimum number of transactions to retain in edits files. 
> Since many underlying file systems put a cap on the number of entries in a 
> single directory, we should put a cap on the number of edits files which will 
> be retained by the NN.



[jira] [Updated] (HDFS-2946) HA: Put a cap on the number of completed edits files retained by the NN

2012-10-02 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-2946:
-

 Target Version/s: 2.0.3-alpha  (was: 0.24.0)
Affects Version/s: (was: 0.24.0)
   2.0.1-alpha

> HA: Put a cap on the number of completed edits files retained by the NN
> ---
>
> Key: HDFS-2946
> URL: https://issues.apache.org/jira/browse/HDFS-2946
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, name-node
>Affects Versions: 2.0.1-alpha
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-2946.patch
>
>
> HDFS-2794 added a minimum number of transactions to retain in edits files. 
> Since many underlying file systems put a cap on the number of entries in a 
> single directory, we should put a cap on the number of edits files which will 
> be retained by the NN.



[jira] [Updated] (HDFS-2127) Add a test that ensure AccessControlExceptions contain a full path

2012-10-02 Thread Stephen Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Chu updated HDFS-2127:
--

Status: Patch Available  (was: Open)

> Add a test that ensure AccessControlExceptions contain a full path
> --
>
> Key: HDFS-2127
> URL: https://issues.apache.org/jira/browse/HDFS-2127
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: name-node
>Reporter: Eli Collins
>Assignee: Stephen Chu
>  Labels: newbie
> Attachments: HDFS-2127.patch
>
>
> HDFS-1628 added full paths to AccessControlExceptions; we should have a test 
> that covers the cases that were done manually in [this 
> comment|https://issues.apache.org/jira/browse/HDFS-1628?focusedCommentId=12996135&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12996135].



[jira] [Updated] (HDFS-2127) Add a test that ensure AccessControlExceptions contain a full path

2012-10-02 Thread Stephen Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Chu updated HDFS-2127:
--

Attachment: HDFS-2127.patch

Attached a patch that modifies TestPermission.java.

Renamed "testFilePermision()" to "testFilePermission()".

In testFilePermission(), we trigger an AccessControlException when we call 
canMkdirs(), canCreate(), and canOpen(). I modified canMkdirs(), canCreate(), 
canOpen(), and canRename() to check that the AccessControlExceptions contain 
the absolute paths.
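A self-contained sketch of the style of check described (the exception type and message format below are stand-ins, not the actual TestPermission or HDFS code):

```java
public class AcePathCheck {
    // Stand-in for a permission-denied failure whose message should carry
    // the full absolute path, not just the final path component.
    static void denyAccess(String absolutePath) {
        throw new SecurityException(
            "Permission denied: user=alice, access=WRITE, path=\""
                + absolutePath + "\"");
    }

    public static void main(String[] args) {
        try {
            denyAccess("/data/secure/file1");
        } catch (SecurityException e) {
            // Assert on the whole path; matching only "file1" would miss
            // the regression this kind of test guards against.
            System.out.println(e.getMessage().contains("/data/secure/file1"));
        }
    }
}
```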

> Add a test that ensure AccessControlExceptions contain a full path
> --
>
> Key: HDFS-2127
> URL: https://issues.apache.org/jira/browse/HDFS-2127
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: name-node
>Reporter: Eli Collins
>Assignee: Stephen Chu
>  Labels: newbie
> Attachments: HDFS-2127.patch
>
>
> HDFS-1628 added full paths to AccessControlExceptions; we should have a test 
> that covers the cases that were done manually in [this 
> comment|https://issues.apache.org/jira/browse/HDFS-1628?focusedCommentId=12996135&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12996135].



[jira] [Commented] (HDFS-3995) Use DFSTestUtil.createFile() for file creation and writing in test cases

2012-10-02 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468217#comment-13468217
 ] 

Jing Zhao commented on HDFS-3995:
-

The failed test cases seem unrelated: 
TestPersistBlocks#TestRestartDfsWithFlush -- HDFS-3811
TestNameNodeMetrics#testCorruptBlock -- HDFS-2434
TestBPOfferService#testBasicFunctionality -- HDFS-3930
TestHdfsNativeCodeLoader#testNativeCodeLoaded -- HDFS-3753



> Use DFSTestUtil.createFile() for file creation and writing in test cases
> 
>
> Key: HDFS-3995
> URL: https://issues.apache.org/jira/browse/HDFS-3995
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: HDFS-3995.trunk.001.patch
>
>
> Currently there are many tests that define and use their own methods to 
> create files and write some number of blocks in MiniDfsCluster. These methods 
> can be consolidated into DFSTestUtil.createFile().



[jira] [Commented] (HDFS-3912) Detecting and avoiding stale datanodes for writing

2012-10-02 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468207#comment-13468207
 ] 

Devaraj Das commented on HDFS-3912:
---

bq. I had issues trying branch 1.1 on HBase 0.96. Some (hbase) unit tests were 
not working with this branch. I was lacking time to understand why, but I will 
have a look again later (hopefully it will get fixed by just waiting...)

Hey Nicolas, can you please enumerate the failing tests?

> Detecting and avoiding stale datanodes for writing
> --
>
> Key: HDFS-3912
> URL: https://issues.apache.org/jira/browse/HDFS-3912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: nkeywal
> Attachments: HDFS-3912.001.patch, HDFS-3912.002.patch, 
> HDFS-3912.003.patch, HDFS-3912.004.patch, HDFS-3912.005.patch
>
>
> 1. Make stale timeout adaptive to the number of nodes marked stale in the 
> cluster.
> 2. Consider having a separate configuration for write skipping the stale 
> nodes.



[jira] [Commented] (HDFS-3995) Use DFSTestUtil.createFile() for file creation and writing in test cases

2012-10-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468206#comment-13468206
 ] 

Hadoop QA commented on HDFS-3995:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12547315/HDFS-3995.trunk.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 18 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.TestHdfsNativeCodeLoader
  
org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
  org.apache.hadoop.hdfs.server.datanode.TestBPOfferService
  org.apache.hadoop.hdfs.TestPersistBlocks

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3253//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3253//console

This message is automatically generated.

> Use DFSTestUtil.createFile() for file creation and writing in test cases
> 
>
> Key: HDFS-3995
> URL: https://issues.apache.org/jira/browse/HDFS-3995
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: HDFS-3995.trunk.001.patch
>
>
> Currently there are many tests that define and use their own methods to 
> create files and write some number of blocks in MiniDFSCluster. These methods 
> can be consolidated into DFSTestUtil.createFile().
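The repeated pattern being consolidated can be sketched outside of HDFS. The helper below is an illustrative stand-in for DFSTestUtil.createFile() (the real method takes a FileSystem, a Path, a replication factor, and a seed; the simplified signature here is an assumption), showing the "write N deterministic bytes" loop that many tests currently duplicate:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Random;

// Simplified stand-in for DFSTestUtil.createFile(): write `length` bytes of
// seed-deterministic data to a stream -- the loop many tests re-implement.
public class CreateFileSketch {
  static void writeTestData(OutputStream out, long length, long seed)
      throws IOException {
    byte[] buffer = new byte[4096];
    Random rand = new Random(seed);   // same seed => same file contents
    long remaining = length;
    while (remaining > 0) {
      rand.nextBytes(buffer);
      int toWrite = (int) Math.min(buffer.length, remaining);
      out.write(buffer, 0, toWrite);
      remaining -= toWrite;
    }
  }

  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    writeTestData(sink, 10000, 42L);
    System.out.println(sink.size());   // prints 10000
  }
}
```

Keeping the seed as a parameter lets a reader of a failing test reproduce the exact bytes that run wrote.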

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3998) Speed up fsck

2012-10-02 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468200#comment-13468200
 ] 

Todd Lipcon commented on HDFS-3998:
---

Isn't this basically equivalent to the listcorruptfiles.jsp that we've got?

> Speed up fsck
> -
>
> Key: HDFS-3998
> URL: https://issues.apache.org/jira/browse/HDFS-3998
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: Ming Ma
>
> We have some big clusters. Sometimes we want to quickly find the list of 
> missing blocks, or of blocks with only one replica. Currently fsck has to 
> take a path as input and then recursively checks it for inconsistencies. That 
> can take a long time to find the missing blocks and the files they belong to. 
> It would be useful to speed this up. For example, fsck could go directly to 
> the missing blocks stored in the NN and do the file lookup instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3996) Add debug log removed in HDFS-3873 back

2012-10-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468164#comment-13468164
 ] 

Hadoop QA commented on HDFS-3996:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12547444/hdfs-3996.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.TestHdfsNativeCodeLoader

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3254//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3254//console

This message is automatically generated.

> Add debug log removed in HDFS-3873 back
> ---
>
> Key: HDFS-3996
> URL: https://issues.apache.org/jira/browse/HDFS-3996
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Minor
> Attachments: hdfs-3996.txt
>
>
> Per HDFS-3873 let's add the debug log back.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4000) TestParallelLocalRead fails with "input ByteBuffers must be direct buffers"

2012-10-02 Thread Eli Collins (JIRA)
Eli Collins created HDFS-4000:
-

 Summary: TestParallelLocalRead fails with "input ByteBuffers must 
be direct buffers"
 Key: HDFS-4000
 URL: https://issues.apache.org/jira/browse/HDFS-4000
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Eli Collins
Assignee: Colin Patrick McCabe


I think this may be related to HDFS-3753; the test passes when I revert it. 
Here's the failure. It looks like it needs the same fix as 
TestShortCircuitLocalRead; not sure why that didn't show up in the jenkins run.

{noformat}
java.lang.AssertionError: Check log for errors
at org.junit.Assert.fail(Assert.java:91)
at 
org.apache.hadoop.hdfs.TestParallelReadUtil.runTestWorkload(TestParallelReadUtil.java:373)
at 
org.apache.hadoop.hdfs.TestParallelLocalRead.testParallelReadByteBuffer(TestParallelLocalRead.java:61)
{noformat}

{noformat}
2012-10-02 15:39:49,481 ERROR hdfs.TestParallelReadUtil 
(TestParallelReadUtil.java:run(227)) - ReadWorker-1-/TestParallelRead.dat.0: 
Error while testing read at 199510 length 14773
java.lang.IllegalArgumentException: input ByteBuffers must be direct buffers
at org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(Native 
Method)
at 
org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:57)
at 
org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:291)
at 
org.apache.hadoop.hdfs.BlockReaderLocal.doByteBufferRead(BlockReaderLocal.java:501)
at 
org.apache.hadoop.hdfs.BlockReaderLocal.read(BlockReaderLocal.java:409)
at 
org.apache.hadoop.hdfs.DFSInputStream$ByteBufferStrategy.doRead(DFSInputStream.java:561)
at 
org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:594)
at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:648)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:696)
at 
org.apache.hadoop.hdfs.TestParallelReadUtil$DirectReadWorkerHelper.read(TestParallelReadUtil.java:91)
at 
org.apache.hadoop.hdfs.TestParallelReadUtil$DirectReadWorkerHelper.pRead(TestParallelReadUtil.java:104)
at 
org.apache.hadoop.hdfs.TestParallelReadUtil$ReadWorker.pRead(TestParallelReadUtil.java:275)
at 
org.apache.hadoop.hdfs.TestParallelReadUtil$ReadWorker.run(TestParallelReadUtil.java:223)
{noformat}
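For context on the exception above: the native CRC code hands the buffer's memory address to JNI, which only works for off-heap (direct) buffers, so heap buffers are rejected. A minimal, self-contained illustration of the distinction using plain java.nio (no Hadoop code):

```java
import java.nio.ByteBuffer;

// Heap buffers are backed by a Java byte[]; direct buffers live outside the
// heap and can be addressed from native code, which is what NativeCrc32 needs.
public class DirectBufferCheck {
  public static void main(String[] args) {
    ByteBuffer heap = ByteBuffer.allocate(1024);         // byte[]-backed
    ByteBuffer direct = ByteBuffer.allocateDirect(1024); // off-heap memory
    System.out.println(heap.isDirect());    // prints false
    System.out.println(direct.isDirect());  // prints true
  }
}
```

A caller that may receive either kind of buffer has to check isDirect() (or copy into a direct buffer) before taking the native path.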

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3912) Detecting and avoiding stale datanodes for writing

2012-10-02 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468131#comment-13468131
 ] 

nkeywal commented on HDFS-3912:
---

@Suresh
I was echoing my message from the 21st: I had issues (not yet analyzed) with 
branch 1.1 on HBase, but I definitely want to try Jing's patch, so I will give 
it another try later.


> Detecting and avoiding stale datanodes for writing
> --
>
> Key: HDFS-3912
> URL: https://issues.apache.org/jira/browse/HDFS-3912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: nkeywal
> Attachments: HDFS-3912.001.patch, HDFS-3912.002.patch, 
> HDFS-3912.003.patch, HDFS-3912.004.patch, HDFS-3912.005.patch
>
>
> 1. Make stale timeout adaptive to the number of nodes marked stale in the 
> cluster.
> 2. Consider having a separate configuration for skipping the stale nodes on 
> writes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3912) Detecting and avoiding stale datanodes for writing

2012-10-02 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-3912:


Attachment: HDFS-3912.005.patch

Thanks for the comments, Suresh! I've addressed most of them. I will create 
separate jiras for the DatanodeStatistics and metrics issues as well.

> Detecting and avoiding stale datanodes for writing
> --
>
> Key: HDFS-3912
> URL: https://issues.apache.org/jira/browse/HDFS-3912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: nkeywal
> Attachments: HDFS-3912.001.patch, HDFS-3912.002.patch, 
> HDFS-3912.003.patch, HDFS-3912.004.patch, HDFS-3912.005.patch
>
>
> 1. Make stale timeout adaptive to the number of nodes marked stale in the 
> cluster.
> 2. Consider having a separate configuration for skipping the stale nodes on 
> writes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3997) FSImage Parsing in Rumen reports "IS_COMPRESSED" values incorrectly.

2012-10-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468118#comment-13468118
 ] 

Hadoop QA commented on HDFS-3997:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12547437/MAPREDUCE-4701.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.TestHdfsNativeCodeLoader

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3252//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3252//console

This message is automatically generated.

> FSImage Parsing in Rumen reports "IS_COMPRESSED" values incorrectly.
> 
>
> Key: HDFS-3997
> URL: https://issues.apache.org/jira/browse/HDFS-3997
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.23.3
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
>Priority: Trivial
> Attachments: MAPREDUCE-4701.patch
>
>
> Rumen's processing of FSImage logs reports the value of "IS_COMPRESSED" 
> incorrectly as "-39" (or whatever the image-version is).
> The problem is in ImageLoaderCurrent, where the FSIMAGE_COMPRESSION node is 
> visited using the imageVersion value instead of the value of isCompressed. A 
> fix is forthcoming.
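A toy reconstruction of the described mix-up, with a deliberately simplified visitor interface (the element name strings and method signatures are illustrative stand-ins, not the actual ImageLoaderCurrent API):

```java
// Sketch of the bug: the header visitor is handed the image version where the
// compression flag belongs, so IS_COMPRESSED is reported as e.g. "-39".
public class IsCompressedBug {
  interface Visitor { void visit(String element, Object value); }

  // Buggy shape described in the report: the wrong value is emitted.
  static void visitHeaderBuggy(Visitor v, int imageVersion, boolean isCompressed) {
    v.visit("IS_COMPRESSED", imageVersion);
  }

  // Intended shape: the flag itself is emitted.
  static void visitHeaderFixed(Visitor v, int imageVersion, boolean isCompressed) {
    v.visit("IS_COMPRESSED", isCompressed);
  }

  public static void main(String[] args) {
    Visitor printer = (e, val) -> System.out.println(e + " = " + val);
    visitHeaderBuggy(printer, -39, false);  // prints IS_COMPRESSED = -39
    visitHeaderFixed(printer, -39, false);  // prints IS_COMPRESSED = false
  }
}
```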

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-3999) HttpFS OPEN operation expects len parameter, it should be length

2012-10-02 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HDFS-3999:


 Summary: HttpFS OPEN operation expects len parameter, it should be 
length
 Key: HDFS-3999
 URL: https://issues.apache.org/jira/browse/HDFS-3999
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.3-alpha


The WebHDFS API defines *length* as the parameter for the partial read length 
in OPEN operations; HttpFS is using *len*.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-3998) Speed up fsck

2012-10-02 Thread Ming Ma (JIRA)
Ming Ma created HDFS-3998:
-

 Summary: Speed up fsck
 Key: HDFS-3998
 URL: https://issues.apache.org/jira/browse/HDFS-3998
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Ming Ma


We have some big clusters. Sometimes we want to quickly find the list of 
missing blocks, or of blocks with only one replica. Currently fsck has to take 
a path as input and then recursively checks it for inconsistencies. That can 
take a long time to find the missing blocks and the files they belong to. It 
would be useful to speed this up. For example, fsck could go directly to the 
missing blocks stored in the NN and do the file lookup instead.
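The proposed shortcut can be sketched as a lookup that starts from the block list rather than the namespace; the block-to-file map below stands in for NameNode metadata, and all names are hypothetical:

```java
import java.util.*;

// Start from the NameNode's missing-block list and map each block back to its
// file, instead of recursively scanning every file under a path.
public class MissingBlocksFirst {
  // Cost is O(number of missing blocks), not O(number of files under a path).
  static Set<String> filesWithMissingBlocks(Map<Long, String> blockToFile,
                                            Collection<Long> missingBlocks) {
    Set<String> affected = new TreeSet<>();
    for (long blk : missingBlocks) {
      affected.add(blockToFile.get(blk));
    }
    return affected;
  }

  public static void main(String[] args) {
    Map<Long, String> blockToFile = new HashMap<>();  // stand-in for NN metadata
    blockToFile.put(1001L, "/data/a");
    blockToFile.put(1002L, "/data/b");
    System.out.println(
        filesWithMissingBlocks(blockToFile, Arrays.asList(1002L)));
    // prints [/data/b]
  }
}
```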

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3996) Add debug log removed in HDFS-3873 back

2012-10-02 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3996:
--

Target Version/s: 2.0.3-alpha
  Status: Patch Available  (was: Open)

> Add debug log removed in HDFS-3873 back
> ---
>
> Key: HDFS-3996
> URL: https://issues.apache.org/jira/browse/HDFS-3996
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Minor
> Attachments: hdfs-3996.txt
>
>
> Per HDFS-3873 let's add the debug log back.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3996) Add debug log removed in HDFS-3873 back

2012-10-02 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3996:
--

Attachment: hdfs-3996.txt

Patch attached.

> Add debug log removed in HDFS-3873 back
> ---
>
> Key: HDFS-3996
> URL: https://issues.apache.org/jira/browse/HDFS-3996
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Minor
> Attachments: hdfs-3996.txt
>
>
> Per HDFS-3873 let's add the debug log back.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3995) Use DFSTestUtil.createFile() for file creation and writing in test cases

2012-10-02 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-3995:
--

Status: Patch Available  (was: Open)

> Use DFSTestUtil.createFile() for file creation and writing in test cases
> 
>
> Key: HDFS-3995
> URL: https://issues.apache.org/jira/browse/HDFS-3995
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: HDFS-3995.trunk.001.patch
>
>
> Currently there are many tests that define and use their own methods to 
> create files and write some number of blocks in MiniDFSCluster. These methods 
> can be consolidated into DFSTestUtil.createFile().

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3753) Tests don't run with native libraries

2012-10-02 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3753:
--

  Resolution: Fixed
   Fix Version/s: 2.0.3-alpha
Target Version/s:   (was: 2.0.2-alpha)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've committed this and merged to branch-2. Thanks Colin!

> Tests don't run with native libraries
> -
>
> Key: HDFS-3753
> URL: https://issues.apache.org/jira/browse/HDFS-3753
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, test
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Colin Patrick McCabe
> Fix For: 2.0.3-alpha
>
> Attachments: HDFS-3753.001.patch, HDFS-3753.002.patch
>
>
> Tests run with the native flag after the native libraries have been built 
> don't actually use the native libs, because NativeCodeLoader is unable to 
> load native-hadoop. E.g. run {{mvn compile -Pnative}} then {{mvn 
> -Dtest=TestSeekBug test -Pnative}} and check the test output. This is because 
> the test's java.library.path looks for the lib in hdfs 
> (hadoop-hdfs-project/hadoop-hdfs/target/native/target/usr/local/lib), but the 
> native lib lives in common. I confirmed that copying the lib to the 
> appropriate directory fixes things. We need to update the java.library.path 
> for test execution to include the common lib dir. This may be an issue with 
> MR as well.
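The failure mode can be probed with a few lines, assuming only that Hadoop's loader ultimately calls System.loadLibrary("hadoop") (the printed messages below are illustrative, not Hadoop's actual log lines):

```java
// The JVM searches each entry of java.library.path for the requested library;
// if libhadoop is not on that path, loading fails and Hadoop falls back to
// its pure-Java implementations.
public class NativeLibPathProbe {
  public static void main(String[] args) {
    System.out.println("java.library.path = "
        + System.getProperty("java.library.path"));
    try {
      System.loadLibrary("hadoop");  // e.g. libhadoop.so on Linux
      System.out.println("native-hadoop loaded");
    } catch (UnsatisfiedLinkError e) {
      System.out.println("native-hadoop not found on java.library.path");
    }
  }
}
```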

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3912) Detecting and avoiding stale datanodes for writing

2012-10-02 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468045#comment-13468045
 ] 

Suresh Srinivas commented on HDFS-3912:
---

# Remove HeartbeatManager#checkStaleNodes and use 
DatanodeManager#checkStaleNodes instead.
# What happens when the ratio is configured with an invalid value?
# When calculating the ratio in HeartbeatManager, you are accessing 
datanodes.size() outside the synchronization block.
# Can we introduce a method in FSClusterStats that reports whether the cluster 
is avoiding writes to stale nodes, and avoid having to add DatanodeManager 
into BlockPlacementPolicy? This way, custom placement policy implementations 
are not affected.
# I think we should create a separate jira to move some relevant methods, such 
as getLiveNodes, stale node counts, etc., into the DatanodeStatistics 
interface.
# We should also add metrics related to stale datanodes.


> Detecting and avoiding stale datanodes for writing
> --
>
> Key: HDFS-3912
> URL: https://issues.apache.org/jira/browse/HDFS-3912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: nkeywal
> Attachments: HDFS-3912.001.patch, HDFS-3912.002.patch, 
> HDFS-3912.003.patch, HDFS-3912.004.patch
>
>
> 1. Make stale timeout adaptive to the number of nodes marked stale in the 
> cluster.
> 2. Consider having a separate configuration for skipping the stale nodes on 
> writes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3997) FSImage Parsing in Rumen reports "IS_COMPRESSED" values incorrectly.

2012-10-02 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3997:
-

Target Version/s: 2.0.3-alpha

> FSImage Parsing in Rumen reports "IS_COMPRESSED" values incorrectly.
> 
>
> Key: HDFS-3997
> URL: https://issues.apache.org/jira/browse/HDFS-3997
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.23.3
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
>Priority: Trivial
> Attachments: MAPREDUCE-4701.patch
>
>
> Rumen's processing of FSImage logs reports the value of "IS_COMPRESSED" 
> incorrectly as "-39" (or whatever the image-version is).
> The problem is in ImageLoaderCurrent, where the FSIMAGE_COMPRESSION node is 
> visited using the imageVersion value instead of the value of isCompressed. A 
> fix is forthcoming.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Moved] (HDFS-3997) FSImage Parsing in Rumen reports "IS_COMPRESSED" values incorrectly.

2012-10-02 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers moved MAPREDUCE-4701 to HDFS-3997:
-

  Component/s: (was: tools/rumen)
   name-node
Affects Version/s: (was: 0.23.3)
   0.23.3
  Key: HDFS-3997  (was: MAPREDUCE-4701)
  Project: Hadoop HDFS  (was: Hadoop Map/Reduce)

> FSImage Parsing in Rumen reports "IS_COMPRESSED" values incorrectly.
> 
>
> Key: HDFS-3997
> URL: https://issues.apache.org/jira/browse/HDFS-3997
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.23.3
>Reporter: Mithun Radhakrishnan
>Priority: Trivial
> Attachments: MAPREDUCE-4701.patch
>
>
> Rumen's processing of FSImage logs reports the value of "IS_COMPRESSED" 
> incorrectly as "-39" (or whatever the image-version is).
> The problem is in ImageLoaderCurrent, where the FSIMAGE_COMPRESSION node is 
> visited using the imageVersion value instead of the value of isCompressed. A 
> fix is forthcoming.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HDFS-3997) FSImage Parsing in Rumen reports "IS_COMPRESSED" values incorrectly.

2012-10-02 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers reassigned HDFS-3997:


Assignee: Mithun Radhakrishnan

> FSImage Parsing in Rumen reports "IS_COMPRESSED" values incorrectly.
> 
>
> Key: HDFS-3997
> URL: https://issues.apache.org/jira/browse/HDFS-3997
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.23.3
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
>Priority: Trivial
> Attachments: MAPREDUCE-4701.patch
>
>
> Rumen's processing of FSImage logs reports the value of "IS_COMPRESSED" 
> incorrectly as "-39" (or whatever the image-version is).
> The problem is in ImageLoaderCurrent, where the FSIMAGE_COMPRESSION node is 
> visited using the imageVersion value instead of the value of isCompressed. A 
> fix is forthcoming.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3753) Tests don't run with native libraries

2012-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468028#comment-13468028
 ] 

Hudson commented on HDFS-3753:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #2825 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2825/])
HDFS-3753. Tests don't run with native libraries. Contributed by Colin 
Patrick McCabe (Revision 1393113)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1393113
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestNativeCodeLoader.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestHdfsNativeCodeLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestShortCircuitLocalRead.java
* /hadoop/common/trunk/hadoop-project/pom.xml


> Tests don't run with native libraries
> -
>
> Key: HDFS-3753
> URL: https://issues.apache.org/jira/browse/HDFS-3753
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, test
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-3753.001.patch, HDFS-3753.002.patch
>
>
> Tests run with the native flag after the native libraries have been built 
> don't actually use the native libs, because NativeCodeLoader is unable to 
> load native-hadoop. E.g. run {{mvn compile -Pnative}} then {{mvn 
> -Dtest=TestSeekBug test -Pnative}} and check the test output. This is because 
> the test's java.library.path looks for the lib in hdfs 
> (hadoop-hdfs-project/hadoop-hdfs/target/native/target/usr/local/lib), but the 
> native lib lives in common. I confirmed that copying the lib to the 
> appropriate directory fixes things. We need to update the java.library.path 
> for test execution to include the common lib dir. This may be an issue with 
> MR as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3753) Tests don't run with native libraries

2012-10-02 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468022#comment-13468022
 ] 

Todd Lipcon commented on HDFS-3753:
---

Mark this as resolved? Or is it awaiting commit to more branches?

> Tests don't run with native libraries
> -
>
> Key: HDFS-3753
> URL: https://issues.apache.org/jira/browse/HDFS-3753
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, test
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-3753.001.patch, HDFS-3753.002.patch
>
>
> Tests run with the native flag after the native libraries have been built 
> don't actually use the native libs, because NativeCodeLoader is unable to 
> load native-hadoop. E.g. run {{mvn compile -Pnative}} then {{mvn 
> -Dtest=TestSeekBug test -Pnative}} and check the test output. This is because 
> the test's java.library.path looks for the lib in hdfs 
> (hadoop-hdfs-project/hadoop-hdfs/target/native/target/usr/local/lib), but the 
> native lib lives in common. I confirmed that copying the lib to the 
> appropriate directory fixes things. We need to update the java.library.path 
> for test execution to include the common lib dir. This may be an issue with 
> MR as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3912) Detecting and avoiding stale datanodes for writing

2012-10-02 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13468015#comment-13468015
 ] 

Suresh Srinivas commented on HDFS-3912:
---

Nicholas, did you mean to assign this to yourself?

> Detecting and avoiding stale datanodes for writing
> --
>
> Key: HDFS-3912
> URL: https://issues.apache.org/jira/browse/HDFS-3912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: nkeywal
> Attachments: HDFS-3912.001.patch, HDFS-3912.002.patch, 
> HDFS-3912.003.patch, HDFS-3912.004.patch
>
>
> 1. Make stale timeout adaptive to the number of nodes marked stale in the 
> cluster.
> 2. Consider having a separate configuration for skipping the stale nodes on 
> writes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3753) Tests don't run with native libraries

2012-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467999#comment-13467999
 ] 

Hudson commented on HDFS-3753:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2864 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2864/])
HDFS-3753. Tests don't run with native libraries. Contributed by Colin 
Patrick McCabe (Revision 1393113)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1393113
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestNativeCodeLoader.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestHdfsNativeCodeLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestShortCircuitLocalRead.java
* /hadoop/common/trunk/hadoop-project/pom.xml


> Tests don't run with native libraries
> -
>
> Key: HDFS-3753
> URL: https://issues.apache.org/jira/browse/HDFS-3753
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, test
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-3753.001.patch, HDFS-3753.002.patch
>
>
> Tests run with the native flag after the native libraries have been built 
> don't actually use the native libs, because NativeCodeLoader is unable to 
> load native-hadoop. E.g. run {{mvn compile -Pnative}} then {{mvn 
> -Dtest=TestSeekBug test -Pnative}} and check the test output. This happens 
> because the test's java.library.path looks for the lib in hdfs 
> (hadoop-hdfs-project/hadoop-hdfs/target/native/target/usr/local/lib); 
> however, the native lib lives in common. I confirmed that copying the lib to 
> the appropriate directory fixes things. We need to update the 
> java.library.path for test execution to include the common lib dir. This may 
> be an issue with 
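
The loading failure described above can be reproduced outside Maven with a small standalone check. This is only an illustration of what NativeCodeLoader's probe amounts to; the class name and output strings are made up for this sketch, and only the library name "hadoop" comes from the report:

```java
// Standalone sketch: attempt to load libhadoop from java.library.path and
// report the search path, mimicking the check NativeCodeLoader performs.
public class NativeLoadCheck {
    public static void main(String[] args) {
        // This is the path the test JVM searches; per the report it pointed
        // at the hdfs target dir while the lib actually lives in common.
        String searchPath = System.getProperty("java.library.path");
        System.out.println("java.library.path = " + searchPath);
        try {
            // Throws UnsatisfiedLinkError when libhadoop is not on the path,
            // which is the symptom described for the hdfs test runs.
            System.loadLibrary("hadoop");
            System.out.println("native-hadoop loaded");
        } catch (UnsatisfiedLinkError e) {
            System.out.println("native-hadoop NOT loaded: " + e.getMessage());
        }
    }
}
```

Running this with `-Djava.library.path=...` pointed at the common lib dir versus the hdfs dir makes the difference visible directly.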

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3753) Tests don't run with native libraries

2012-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467998#comment-13467998
 ] 

Hudson commented on HDFS-3753:
--

Integrated in Hadoop-Common-trunk-Commit #2802 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2802/])
HDFS-3753. Tests don't run with native libraries. Contributed by Colin 
Patrick McCabe (Revision 1393113)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1393113
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestNativeCodeLoader.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestHdfsNativeCodeLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestShortCircuitLocalRead.java
* /hadoop/common/trunk/hadoop-project/pom.xml


> Tests don't run with native libraries
> -
>
> Key: HDFS-3753
> URL: https://issues.apache.org/jira/browse/HDFS-3753
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, test
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-3753.001.patch, HDFS-3753.002.patch
>
>
> Tests run with the native flag after the native libraries have been built 
> don't actually use the native libs, because NativeCodeLoader is unable to 
> load native-hadoop. E.g. run {{mvn compile -Pnative}} then {{mvn 
> -Dtest=TestSeekBug test -Pnative}} and check the test output. This happens 
> because the test's java.library.path looks for the lib in hdfs 
> (hadoop-hdfs-project/hadoop-hdfs/target/native/target/usr/local/lib); 
> however, the native lib lives in common. I confirmed that copying the lib to 
> the appropriate directory fixes things. We need to update the 
> java.library.path for test execution to include the common lib dir. This may 
> be an issue with 

--


[jira] [Commented] (HDFS-3680) Allows customized audit logging in HDFS FSNamesystem

2012-10-02 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467983#comment-13467983
 ] 

Suresh Srinivas commented on HDFS-3680:
---

Comments:
# The patch needs to document the newly added parameter. The documentation 
should cover how to set it up, what is expected from an audit log 
implementation, and the impact of this configuration when things do not work 
correctly.
# dfs.namenode.access.logger should be dfs.namenode.audit.logger.
# TestAuditLogger - add javadoc and @link to the functionality being tested.
# Minor - there is a mention of FSAccessLogger in the DefaultAuditLogger 
javadoc.
# What is the reason the symlink work is being done in logAuditEvent? Why is 
it part of this jira?
# How does one add DefaultAuditLogger alongside custom audit loggers? How does 
the isAuditEnabled() method work if you add the ability to set up 
DefaultAuditLogger?
# java.security.Principal is an unnecessary import in FSNamesystem.java.
# FSNamesystem#auditLog should be moved to DefaultAuditLogger. Also, why is 
auditLog still used for logging in getFileInfo and mkdirs? Why aren't the new 
audit loggers used there?
# Should AuditLogger#logAuditEvent consider throwing IOException to indicate 
an error?
# Sorry, I have not caught up on all the comments - what is the final decision 
on how to handle logger errors? Currently the client gets an exception when 
logAuditEvent fails. That does not seem correct.



> Allows customized audit logging in HDFS FSNamesystem
> 
>
> Key: HDFS-3680
> URL: https://issues.apache.org/jira/browse/HDFS-3680
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 2.0.0-alpha
>Reporter: Marcelo Vanzin
>Assignee: Marcelo Vanzin
>Priority: Minor
> Attachments: accesslogger-v1.patch, accesslogger-v2.patch, 
> hdfs-3680-v3.patch, hdfs-3680-v4.patch, hdfs-3680-v5.patch, 
> hdfs-3680-v6.patch, hdfs-3680-v7.patch
>
>
> Currently, FSNamesystem writes audit logs to a logger; that makes it easy to 
> get audit logs in some log file. But it makes it kinda tricky to store audit 
> logs in any other way (let's say a database), because it would require the 
> code to implement a log appender (and thus know what logging system is 
> actually being used underneath the façade), and parse the textual log message 
> generated by FSNamesystem.
> I'm attaching a patch that introduces a cleaner interface for this use case.
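
A minimal sketch of the kind of pluggable interface being discussed. The interface name AuditLogger matches the comments above, but the method signature and the in-memory example sink are illustrative assumptions, not the patch's actual API:

```java
import java.net.InetAddress;
import java.util.ArrayList;
import java.util.List;

// Sketch of a pluggable audit interface (signature is an assumption, not the
// actual HDFS-3680 API).
interface AuditLogger {
    void logAuditEvent(boolean succeeded, String user, InetAddress addr,
                       String cmd, String src, String dst);
}

// Example alternative sink: collect events in memory rather than appending to
// a log file. A real implementation could write to a database instead, which
// is the use case the description mentions.
class CollectingAuditLogger implements AuditLogger {
    final List<String> events = new ArrayList<>();

    public void logAuditEvent(boolean succeeded, String user, InetAddress addr,
                              String cmd, String src, String dst) {
        // Structured fields arrive directly; no parsing of a formatted
        // log line is needed.
        events.add(cmd + " " + src + " by " + user + " ok=" + succeeded);
    }
}

public class AuditDemo {
    public static void main(String[] args) {
        CollectingAuditLogger logger = new CollectingAuditLogger();
        logger.logAuditEvent(true, "alice", InetAddress.getLoopbackAddress(),
                             "mkdirs", "/tmp/dir", null);
        System.out.println(logger.events.get(0));
        // -> mkdirs /tmp/dir by alice ok=true
    }
}
```

The point of the interface is exactly what the description says: a custom sink receives the structured event fields instead of having to implement a log appender and parse FSNamesystem's textual log message.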

--


[jira] [Created] (HDFS-3996) Add debug log removed in HDFS-3873 back

2012-10-02 Thread Eli Collins (JIRA)
Eli Collins created HDFS-3996:
-

 Summary: Add debug log removed in HDFS-3873 back
 Key: HDFS-3996
 URL: https://issues.apache.org/jira/browse/HDFS-3996
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor


Per HDFS-3873 let's add the debug log back.

--


[jira] [Commented] (HDFS-3753) Tests don't run with native libraries

2012-10-02 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467901#comment-13467901
 ] 

Eli Collins commented on HDFS-3753:
---

+1 looks great 

Nit: LD_LIBRARY_PATH shouldn't be indented more, I'll fix that when I commit

> Tests don't run with native libraries
> -
>
> Key: HDFS-3753
> URL: https://issues.apache.org/jira/browse/HDFS-3753
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, test
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-3753.001.patch, HDFS-3753.002.patch
>
>
> Tests run with the native flag after the native libraries have been built 
> don't actually use the native libs, because NativeCodeLoader is unable to 
> load native-hadoop. E.g. run {{mvn compile -Pnative}} then {{mvn 
> -Dtest=TestSeekBug test -Pnative}} and check the test output. This happens 
> because the test's java.library.path looks for the lib in hdfs 
> (hadoop-hdfs-project/hadoop-hdfs/target/native/target/usr/local/lib); 
> however, the native lib lives in common. I confirmed that copying the lib to 
> the appropriate directory fixes things. We need to update the 
> java.library.path for test execution to include the common lib dir. This may 
> be an issue with 

--


[jira] [Commented] (HDFS-3979) Fix hsync and hflush semantics.

2012-10-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467882#comment-13467882
 ] 

Lars Hofhansl commented on HDFS-3979:
-

You don't think the existing pipeline tests cover the failure scenarios? 
I'll see if I can get some performance numbers.

> Fix hsync and hflush semantics.
> ---
>
> Key: HDFS-3979
> URL: https://issues.apache.org/jira/browse/HDFS-3979
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node, hdfs client
>Affects Versions: 0.22.0, 0.23.0, 2.0.0-alpha
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Attachments: hdfs-3979-sketch.txt, hdfs-3979-v2.txt
>
>
> See discussion in HDFS-744. The actual sync/flush operation in BlockReceiver 
> is not on a synchronous path from the DFSClient, hence it is possible that a 
> DN loses data that it has already acknowledged as persisted to a client.
> Edit: Spelling.

--


[jira] [Commented] (HDFS-3912) Detecting and avoiding stale datanodes for writing

2012-10-02 Thread nkeywal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467864#comment-13467864
 ] 

nkeywal commented on HDFS-3912:
---

I like this approach, it's deterministic.
I had issues trying branch 1.1 with HBase 0.96: some (hbase) unit tests were 
not working with this branch. I didn't have time to understand why, but I will 
take another look later (hopefully it will get fixed by just waiting...)

> Detecting and avoiding stale datanodes for writing
> --
>
> Key: HDFS-3912
> URL: https://issues.apache.org/jira/browse/HDFS-3912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: nkeywal
> Attachments: HDFS-3912.001.patch, HDFS-3912.002.patch, 
> HDFS-3912.003.patch, HDFS-3912.004.patch
>
>
> 1. Make stale timeout adaptive to the number of nodes marked stale in the 
> cluster.
> 2. Consider having a separate configuration for write skipping the stale 
> nodes.
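
One way to read item 1 is a cap on how much of the cluster may be treated as stale before staleness is ignored for placement. The ratio check below is an illustrative sketch only; the threshold value, class, and method names are assumptions, not the HDFS-3912 implementation:

```java
// Illustrative sketch of item 1: skip stale datanodes for write placement
// only while the stale fraction of the cluster stays below a configured
// ratio. The 0.5 default and all names here are assumptions.
public class StalePolicy {
    private final double maxStaleRatio;

    public StalePolicy(double maxStaleRatio) {
        this.maxStaleRatio = maxStaleRatio;
    }

    /** Should write placement avoid stale nodes right now? */
    public boolean avoidStaleNodes(int staleCount, int totalNodes) {
        if (totalNodes == 0) {
            return false;
        }
        double ratio = (double) staleCount / totalNodes;
        // If "too many" nodes look stale, the marking itself is suspect
        // (e.g. a transient network problem), and funneling all writes onto
        // the few remaining nodes would be worse than using everything.
        return ratio <= maxStaleRatio;
    }

    public static void main(String[] args) {
        StalePolicy p = new StalePolicy(0.5);
        System.out.println(p.avoidStaleNodes(2, 10)); // few stale: avoid them
        System.out.println(p.avoidStaleNodes(8, 10)); // mostly stale: don't
    }
}
```

Item 2 would then be a matter of keying this behavior off a write-specific configuration flag rather than the read-side staleness setting.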

--


[jira] [Assigned] (HDFS-3912) Detecting and avoiding stale datanodes for writing

2012-10-02 Thread nkeywal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nkeywal reassigned HDFS-3912:
-

Assignee: nkeywal  (was: Jing Zhao)

> Detecting and avoiding stale datanodes for writing
> --
>
> Key: HDFS-3912
> URL: https://issues.apache.org/jira/browse/HDFS-3912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: nkeywal
> Attachments: HDFS-3912.001.patch, HDFS-3912.002.patch, 
> HDFS-3912.003.patch, HDFS-3912.004.patch
>
>
> 1. Make stale timeout adaptive to the number of nodes marked stale in the 
> cluster.
> 2. Consider having a separate configuration for write skipping the stale 
> nodes.

--


[jira] [Commented] (HDFS-3993) The KSSL class should not limit the ssl ciphers

2012-10-02 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467719#comment-13467719
 ] 

Daryn Sharp commented on HDFS-3993:
---

I seem to recall there's a Java 6 bug that prevents the use of non-DES 
algorithms (something about padding in the header). We've had to remove AES 
from krb5.conf files due to this issue, so are you sure this works with Java 6?

> The KSSL class should not limit the ssl ciphers
> ---
>
> Key: HDFS-3993
> URL: https://issues.apache.org/jira/browse/HDFS-3993
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: hdfs-3993.patch
>
>
> The KSSL class' static block currently limits the ssl ciphers to a single 
> value. It should use a much more permissive list.

--


[jira] [Commented] (HDFS-3829) TestHftpURLTimeouts fails intermittently with JDK7

2012-10-02 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467707#comment-13467707
 ] 

Daryn Sharp commented on HDFS-3829:
---

+1, OK, I missed that. It just seemed like a bigger change to the tests than 
was needed.

> TestHftpURLTimeouts fails intermittently with JDK7
> --
>
> Key: HDFS-3829
> URL: https://issues.apache.org/jira/browse/HDFS-3829
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.2-alpha
> Environment: Apache Maven 3.0.4
> Maven home: /usr/share/maven
> Java version: 1.7.0_04, vendor: Oracle Corporation
> Java home: /usr/lib/jvm/jdk1.7.0_04/jre
> Default locale: en_US, platform encoding: ISO-8859-1
> OS name: "linux", version: "3.2.0-25-generic", arch: "amd64", family: "unix"
>Reporter: Trevor Robinson
>Assignee: Trevor Robinson
>  Labels: java7
> Attachments: HDFS-3829.patch
>
>
> {{testHftpSocketTimeout}} fails if run after {{testHsftpSocketTimeout}}:
> {noformat}
> testHftpSocketTimeout(org.apache.hadoop.hdfs.TestHftpURLTimeouts): 
> expected: but was:
> {noformat}

--


[jira] [Commented] (HDFS-3919) MiniDFSCluster:waitClusterUp can hang forever

2012-10-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467663#comment-13467663
 ] 

Hudson commented on HDFS-3919:
--

Integrated in Hadoop-Hdfs-0.23-Build #392 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/392/])
svn merge -c 1383759 FIXES:  HDFS-3919. MiniDFSCluster:waitClusterUp can 
hang forever. Contributed by Andy Isaacson (Revision 1392475)

 Result = UNSTABLE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1392475
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java


> MiniDFSCluster:waitClusterUp can hang forever
> -
>
> Key: HDFS-3919
> URL: https://issues.apache.org/jira/browse/HDFS-3919
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.1-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
>Priority: Minor
> Fix For: 2.0.3-alpha, 0.23.5
>
> Attachments: hdfs3919.txt
>
>
> A test run hung due to a known system config issue, but the hang was 
> interesting:
> {noformat}
> 2012-09-11 13:22:41,888 WARN  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitClusterUp(925)) - Waiting for the Mini HDFS Cluster 
> to start...
> 2012-09-11 13:22:42,889 WARN  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitClusterUp(925)) - Waiting for the Mini HDFS Cluster 
> to start...
> 2012-09-11 13:22:43,889 WARN  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitClusterUp(925)) - Waiting for the Mini HDFS Cluster 
> to start...
> 2012-09-11 13:22:44,890 WARN  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitClusterUp(925)) - Waiting for the Mini HDFS Cluster 
> to start...
> {noformat}
> The MiniDFSCluster should give up after a few seconds.
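
A bounded version of the wait loop the log excerpt shows could look like the sketch below. The deadline value, names, and the check abstraction are illustrative assumptions, not MiniDFSCluster's actual code:

```java
import java.util.concurrent.TimeoutException;

// Sketch of a bounded wait: poll like waitClusterUp does, but give up once a
// deadline passes instead of looping forever. Names and the timeout are
// assumptions for illustration.
public class BoundedWait {
    interface Check { boolean ready(); }

    static void waitUntilReady(Check check, long timeoutMs, long pollMs)
            throws TimeoutException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!check.ready()) {
            if (System.currentTimeMillis() >= deadline) {
                // Surfacing a TimeoutException turns a silent hang into a
                // test failure with a clear cause.
                throw new TimeoutException(
                    "Timed out waiting for the Mini HDFS Cluster to start");
            }
            System.out.println(
                "Waiting for the Mini HDFS Cluster to start...");
            Thread.sleep(pollMs);
        }
    }

    public static void main(String[] args) throws Exception {
        final long start = System.currentTimeMillis();
        // Toy condition that becomes true after ~100 ms, standing in for
        // "all datanodes have registered".
        waitUntilReady(() -> System.currentTimeMillis() - start > 100,
                       30_000, 50);
        System.out.println("cluster up");
    }
}
```

With a loop like this, the known-bad-config case in the report would fail fast with a timeout instead of printing the warning once a second indefinitely.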

--