[jira] [Updated] (HDFS-4245) Include snapshot related operations in OfflineEditsViewerHelper#runOperations() to fix test failures in TestOfflineEditsViewer

2013-01-10 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4245:


Attachment: editsStored
HDFS-4245.002.patch

Update the patch.

> Include snapshot related operations in 
> OfflineEditsViewerHelper#runOperations() to fix test failures in 
> TestOfflineEditsViewer
> --
>
> Key: HDFS-4245
> URL: https://issues.apache.org/jira/browse/HDFS-4245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: editsStored, editsStored, HDFS-4245.001.patch, 
> HDFS-4245.002.patch
>
>
> We have the following test failure for snapshot:
> {noformat}
> java.lang.AssertionError: Edits 
> /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-Snapshots-Branch-build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name/current/edits_001-059
>  should have all op codes
>   at org.junit.Assert.fail(Assert.java:91)
>   at org.junit.Assert.assertTrue(Assert.java:43)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer.__CLR3_0_2lvw0fw1wtx(TestOfflineEditsViewer.java:103)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer.testGenerated(TestOfflineEditsViewer.java:86)
> ...
> {noformat}
> {noformat}
> java.lang.AssertionError: Edits 
> /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-Snapshots-Branch-build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/editsStored
>  should have all op codes
>   at org.junit.Assert.fail(Assert.java:91)
>   at org.junit.Assert.assertTrue(Assert.java:43)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer.__CLR3_0_2yzsf441wup(TestOfflineEditsViewer.java:171)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer.testStored(TestOfflineEditsViewer.java:153)
> ...
> {noformat}
> We need to update OfflineEditsViewerHelper#runOperations() and editsStored to 
> include snapshot related operations.
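The "should have all op codes" assertion boils down to a coverage check: the test decodes every opcode present in the edits file and compares against the full opcode enum, failing if any member was never exercised by runOperations(). A minimal, self-contained sketch of that check (the cut-down OpCode enum and its members are hypothetical stand-ins for org.apache.hadoop.hdfs.server.namenode.FSEditLogOpCodes):

```java
import java.util.EnumSet;
import java.util.Set;

public class OpCodeCoverage {
    // Hypothetical stand-in for FSEditLogOpCodes; the real enum has many more members.
    enum OpCode { OP_ADD, OP_DELETE, OP_MKDIR, OP_ALLOW_SNAPSHOT, OP_CREATE_SNAPSHOT }

    // Returns the op codes absent from the edits stream; the test fails
    // ("should have all op codes") whenever this set is non-empty.
    static Set<OpCode> missingOpCodes(Set<OpCode> seen) {
        EnumSet<OpCode> missing = EnumSet.allOf(OpCode.class);
        missing.removeAll(seen);
        return missing;
    }

    public static void main(String[] args) {
        // A runOperations() that never takes a snapshot leaves the snapshot ops unseen:
        Set<OpCode> seen = EnumSet.of(OpCode.OP_ADD, OpCode.OP_DELETE, OpCode.OP_MKDIR);
        System.out.println(missingOpCodes(seen));
    }
}
```

Once runOperations() also performs the snapshot operations (and editsStored is regenerated to match), the missing set is empty and the assertion passes.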

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4387) test_libhdfs_threaded SEGV on OpenJDK 7

2013-01-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550895#comment-13550895
 ] 

Colin Patrick McCabe commented on HDFS-4387:


I'm having trouble reproducing this.  I don't get this error when running 
{{test_libhdfs_threaded}}.

Here's what I'm running:

{code}
cmccabe@vm4:~/hadoop$ java -version
java version "1.7.0_09"
OpenJDK Runtime Environment (IcedTea7 2.3.3) (7u9-2.3.3-0ubuntu1~12.10.1)
OpenJDK 64-Bit Server VM (build 23.2-b09, mixed mode)

cmccabe@vm4:~/hadoop$ cat /etc/issue
Ubuntu 12.10 \n \l
{code}

> test_libhdfs_threaded SEGV on OpenJDK 7
> ---
>
> Key: HDFS-4387
> URL: https://issues.apache.org/jira/browse/HDFS-4387
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 3.0.0
>Reporter: Andy Isaacson
>Priority: Minor
>
> Building and running tests on OpenJDK 7 on Ubuntu 12.10 fails with {{mvn test 
> -Pnative}}.  The output is hard to decipher but the underlying issue is that 
> {{test_libhdfs_threaded}} segfaults at startup.
> {noformat}
> (gdb) run
> Starting program: 
> /mnt/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/test_libhdfs_threaded
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
> Program received signal SIGSEGV, Segmentation fault.
> 0x7739a897 in attachJNIThread (name=0x0, is_daemon=is_daemon@entry=0 
> '\000', group=0x0) at thread.c:768
> 768 thread.c: No such file or directory.
> (gdb) where
> #0 0x7739a897 in attachJNIThread (name=0x0, 
> is_daemon=is_daemon@entry=0 '\000', group=0x0) at thread.c:768
> #1 0x77395020 in attachCurrentThread (is_daemon=0, args=0x0, 
> penv=0x7fffddb8) at jni.c:1454
> #2 Jam_AttachCurrentThread (vm=, penv=0x7fffddb8, 
> args=0x0) at jni.c:1466
> #3 0x77bcf979 in getGlobalJNIEnv () at 
> /mnt/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c:527
> #4 getJNIEnv () at 
> /mnt/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c:585
> #5 0x00402512 in nmdCreate (conf=conf@entry=0x7fffdeb0) at 
> /mnt/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/native_mini_dfs.c:49
> #6 0x004016e1 in main () at 
> /mnt/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test_libhdfs_threaded.c:283
> {noformat}



[jira] [Commented] (HDFS-4385) Maven RAT plugin is not checking all source files

2013-01-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550815#comment-13550815
 ] 

Hadoop QA commented on HDFS-4385:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564346/HDFS-4385.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3821//console

This message is automatically generated.

> Maven RAT plugin is not checking all source files
> -
>
> Key: HDFS-4385
> URL: https://issues.apache.org/jira/browse/HDFS-4385
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.5
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>Priority: Critical
> Attachments: HDFS-4385-branch-0.23.patch, HDFS-4385.patch, 
> HDFS-4385-remove-branch23.sh, HDFS-4385-remove.sh
>
>
> HDFS side of HADOOP-9097
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
> downloading the JAR) produces some warnings for Java files, amongst others.



[jira] [Updated] (HDFS-4385) Maven RAT plugin is not checking all source files

2013-01-10 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HDFS-4385:


Attachment: HDFS-4385.patch
HDFS-4385-remove.sh

Trunk patch. The committer should just run the remove.sh first, then apply the patch.

Note: the testing I did was done in conjunction with all JIRAs referenced in 
HADOOP-9097.

> Maven RAT plugin is not checking all source files
> -
>
> Key: HDFS-4385
> URL: https://issues.apache.org/jira/browse/HDFS-4385
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.5
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>Priority: Critical
> Attachments: HDFS-4385-branch-0.23.patch, HDFS-4385.patch, 
> HDFS-4385-remove-branch23.sh, HDFS-4385-remove.sh
>
>
> HDFS side of HADOOP-9097
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
> downloading the JAR) produces some warnings for Java files, amongst others.



[jira] [Updated] (HDFS-4385) Maven RAT plugin is not checking all source files

2013-01-10 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HDFS-4385:


Attachment: HDFS-4385-remove-branch23.sh
HDFS-4385-branch-0.23.patch

This is for branch-0.23. The committer should first run 
HDFS-4385-remove-branch23.sh, then apply the patch.

> Maven RAT plugin is not checking all source files
> -
>
> Key: HDFS-4385
> URL: https://issues.apache.org/jira/browse/HDFS-4385
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.5
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>Priority: Critical
> Attachments: HDFS-4385-branch-0.23.patch, HDFS-4385.patch, 
> HDFS-4385-remove-branch23.sh, HDFS-4385-remove.sh
>
>
> HDFS side of HADOOP-9097
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
> downloading the JAR) produces some warnings for Java files, amongst others.



[jira] [Updated] (HDFS-4385) Maven RAT plugin is not checking all source files

2013-01-10 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HDFS-4385:


Status: Patch Available  (was: Open)

> Maven RAT plugin is not checking all source files
> -
>
> Key: HDFS-4385
> URL: https://issues.apache.org/jira/browse/HDFS-4385
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.5, 3.0.0, 2.0.3-alpha
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>Priority: Critical
> Attachments: HDFS-4385-branch-0.23.patch, HDFS-4385.patch, 
> HDFS-4385-remove-branch23.sh, HDFS-4385-remove.sh
>
>
> HDFS side of HADOOP-9097
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
> downloading the JAR) produces some warnings for Java files, amongst others.



[jira] [Updated] (HDFS-4245) Include snapshot related operations in OfflineEditsViewerHelper#runOperations() to fix test failures in TestOfflineEditsViewer

2013-01-10 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4245:
-

Component/s: (was: datanode)
 (was: namenode)
 test
   Priority: Minor  (was: Major)

> Include snapshot related operations in 
> OfflineEditsViewerHelper#runOperations() to fix test failures in 
> TestOfflineEditsViewer
> --
>
> Key: HDFS-4245
> URL: https://issues.apache.org/jira/browse/HDFS-4245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: editsStored, HDFS-4245.001.patch
>
>
> We have the following test failure for snapshot:
> {noformat}
> java.lang.AssertionError: Edits 
> /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-Snapshots-Branch-build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name/current/edits_001-059
>  should have all op codes
>   at org.junit.Assert.fail(Assert.java:91)
>   at org.junit.Assert.assertTrue(Assert.java:43)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer.__CLR3_0_2lvw0fw1wtx(TestOfflineEditsViewer.java:103)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer.testGenerated(TestOfflineEditsViewer.java:86)
> ...
> {noformat}
> {noformat}
> java.lang.AssertionError: Edits 
> /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-Snapshots-Branch-build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/editsStored
>  should have all op codes
>   at org.junit.Assert.fail(Assert.java:91)
>   at org.junit.Assert.assertTrue(Assert.java:43)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer.__CLR3_0_2yzsf441wup(TestOfflineEditsViewer.java:171)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer.testStored(TestOfflineEditsViewer.java:153)
> ...
> {noformat}
> We need to update OfflineEditsViewerHelper#runOperations() and editsStored to 
> include snapshot related operations.



[jira] [Commented] (HDFS-4245) Include snapshot related operations in OfflineEditsViewerHelper#runOperations() to fix test failures in TestOfflineEditsViewer

2013-01-10 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550798#comment-13550798
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4245:
--

Hi Jing, the patch is outdated.  Could you update it?

> Include snapshot related operations in 
> OfflineEditsViewerHelper#runOperations() to fix test failures in 
> TestOfflineEditsViewer
> --
>
> Key: HDFS-4245
> URL: https://issues.apache.org/jira/browse/HDFS-4245
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: editsStored, HDFS-4245.001.patch
>
>
> We have the following test failure for snapshot:
> {noformat}
> java.lang.AssertionError: Edits 
> /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-Snapshots-Branch-build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name/current/edits_001-059
>  should have all op codes
>   at org.junit.Assert.fail(Assert.java:91)
>   at org.junit.Assert.assertTrue(Assert.java:43)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer.__CLR3_0_2lvw0fw1wtx(TestOfflineEditsViewer.java:103)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer.testGenerated(TestOfflineEditsViewer.java:86)
> ...
> {noformat}
> {noformat}
> java.lang.AssertionError: Edits 
> /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-Snapshots-Branch-build/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/editsStored
>  should have all op codes
>   at org.junit.Assert.fail(Assert.java:91)
>   at org.junit.Assert.assertTrue(Assert.java:43)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer.__CLR3_0_2yzsf441wup(TestOfflineEditsViewer.java:171)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer.testStored(TestOfflineEditsViewer.java:153)
> ...
> {noformat}
> We need to update OfflineEditsViewerHelper#runOperations() and editsStored to 
> include snapshot related operations.



[jira] [Commented] (HDFS-4328) TestLargeBlock#testLargeBlockSize is timing out

2013-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550795#comment-13550795
 ] 

Hudson commented on HDFS-4328:
--

Integrated in Hadoop-trunk-Commit #3216 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3216/])
HDFS-4328. TestLargeBlock#testLargeBlockSize is timing out. Contributed by 
Chris Nauroth. (Revision 1431867)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431867
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/DataTransferThrottler.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLargeBlock.java


> TestLargeBlock#testLargeBlockSize is timing out
> ---
>
> Key: HDFS-4328
> URL: https://issues.apache.org/jira/browse/HDFS-4328
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Jason Lowe
>Assignee: Chris Nauroth
> Attachments: HDFS-4328.1.patch
>
>
> For some time now TestLargeBlock#testLargeBlockSize has been timing out on 
> trunk.  It is getting hung up during cluster shutdown, and after 15 minutes 
> surefire kills it and causes the build to fail since it exited uncleanly.
> In addition to fixing the hang, we should consider adding a timeout parameter 
> to the @Test annotation for this test.
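JUnit 4's @Test(timeout = ...) runs the test body in a separate thread and fails the test if it does not finish in time, which keeps a hung cluster shutdown from stalling the build until surefire's 15-minute fork kill. A rough, self-contained sketch of that mechanism (the 200 ms / 1 s values are illustrative, not the values chosen in the patch):

```java
public class TimeoutSketch {
    // Runs body in its own thread; returns true if it finished within timeoutMs.
    static boolean runWithTimeout(Runnable body, long timeoutMs) throws InterruptedException {
        Thread t = new Thread(body);
        t.setDaemon(true);     // do not keep the JVM alive if the body hangs
        t.start();
        t.join(timeoutMs);     // wait at most timeoutMs for the body to finish
        return !t.isAlive();   // still alive => timed out
    }

    public static void main(String[] args) throws InterruptedException {
        boolean quickOk = runWithTimeout(() -> { /* fast test body */ }, 1000);
        boolean hungOk = runWithTimeout(() -> {
            while (true) { Thread.yield(); }  // simulated hang, e.g. stuck shutdown
        }, 200);
        System.out.println(quickOk + " " + hungOk);  // prints "true false"
    }
}
```

With the annotation in place the hung test fails after its own deadline instead of being killed by surefire, so the build reports a normal test failure rather than an unclean exit.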



[jira] [Updated] (HDFS-4328) TestLargeBlock#testLargeBlockSize is timing out

2013-01-10 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-4328:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've just committed this to trunk. Thanks a lot for the contribution, Chris.

> TestLargeBlock#testLargeBlockSize is timing out
> ---
>
> Key: HDFS-4328
> URL: https://issues.apache.org/jira/browse/HDFS-4328
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Jason Lowe
>Assignee: Chris Nauroth
> Attachments: HDFS-4328.1.patch
>
>
> For some time now TestLargeBlock#testLargeBlockSize has been timing out on 
> trunk.  It is getting hung up during cluster shutdown, and after 15 minutes 
> surefire kills it and causes the build to fail since it exited uncleanly.
> In addition to fixing the hang, we should consider adding a timeout parameter 
> to the @Test annotation for this test.



[jira] [Commented] (HDFS-4328) TestLargeBlock#testLargeBlockSize is timing out

2013-01-10 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550791#comment-13550791
 ] 

Aaron T. Myers commented on HDFS-4328:
--

+1, the patch looks good to me. I also manually ensured that the test passes 
without timing out after this change.

Thanks a lot for figuring this out and fixing it, Chris. Now we just need to 
get test-patch to notice timeouts so we catch these earlier. :)

> TestLargeBlock#testLargeBlockSize is timing out
> ---
>
> Key: HDFS-4328
> URL: https://issues.apache.org/jira/browse/HDFS-4328
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Jason Lowe
>Assignee: Chris Nauroth
> Attachments: HDFS-4328.1.patch
>
>
> For some time now TestLargeBlock#testLargeBlockSize has been timing out on 
> trunk.  It is getting hung up during cluster shutdown, and after 15 minutes 
> surefire kills it and causes the build to fail since it exited uncleanly.
> In addition to fixing the hang, we should consider adding a timeout parameter 
> to the @Test annotation for this test.



[jira] [Created] (HDFS-4387) test_libhdfs_threaded SEGV on OpenJDK 7

2013-01-10 Thread Andy Isaacson (JIRA)
Andy Isaacson created HDFS-4387:
---

 Summary: test_libhdfs_threaded SEGV on OpenJDK 7
 Key: HDFS-4387
 URL: https://issues.apache.org/jira/browse/HDFS-4387
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Affects Versions: 3.0.0
Reporter: Andy Isaacson
Priority: Minor


Building and running tests on OpenJDK 7 on Ubuntu 12.10 fails with {{mvn test 
-Pnative}}.  The output is hard to decipher but the underlying issue is that 
{{test_libhdfs_threaded}} segfaults at startup.

{noformat}
(gdb) run
Starting program: 
/mnt/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/test_libhdfs_threaded
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".

Program received signal SIGSEGV, Segmentation fault.
0x7739a897 in attachJNIThread (name=0x0, is_daemon=is_daemon@entry=0 
'\000', group=0x0) at thread.c:768
768 thread.c: No such file or directory.
(gdb) where
#0 0x7739a897 in attachJNIThread (name=0x0, is_daemon=is_daemon@entry=0 
'\000', group=0x0) at thread.c:768
#1 0x77395020 in attachCurrentThread (is_daemon=0, args=0x0, 
penv=0x7fffddb8) at jni.c:1454
#2 Jam_AttachCurrentThread (vm=, penv=0x7fffddb8, args=0x0) 
at jni.c:1466
#3 0x77bcf979 in getGlobalJNIEnv () at 
/mnt/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c:527
#4 getJNIEnv () at 
/mnt/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c:585
#5 0x00402512 in nmdCreate (conf=conf@entry=0x7fffdeb0) at 
/mnt/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/native_mini_dfs.c:49
#6 0x004016e1 in main () at 
/mnt/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test_libhdfs_threaded.c:283
{noformat}



[jira] [Commented] (HDFS-4261) TestBalancerWithNodeGroup times out

2013-01-10 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550786#comment-13550786
 ] 

Aaron T. Myers commented on HDFS-4261:
--

Gotcha, OK. I wasn't aware of that JIRA. Thanks for pointing it out.

Please make sure that whatever gets back-ported to branch-1 also gets 
back-ported to branch-2. I was under the impression that all of this stuff was 
only going to trunk.

> TestBalancerWithNodeGroup times out
> ---
>
> Key: HDFS-4261
> URL: https://issues.apache.org/jira/browse/HDFS-4261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, 
> HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, 
> HDFS-4261-v7.patch, HDFS-4261-v8.patch, jstack-mac-18567, jstack-win-5488, 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac,
>  
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win,
>  test-balancer-with-node-group-timeout.txt
>
>
> When I manually ran TestBalancerWithNodeGroup, it always timed out on my 
> machine.  Looking at the Jenkins report [build 
> #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/],
>  TestBalancerWithNodeGroup somehow was skipped so that the problem was not 
> detected.



[jira] [Commented] (HDFS-4261) TestBalancerWithNodeGroup times out

2013-01-10 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550785#comment-13550785
 ] 

Junping Du commented on HDFS-4261:
--

Hi Aaron, Yes. The backport work is tracked by: 
https://issues.apache.org/jira/browse/HADOOP-8817.

> TestBalancerWithNodeGroup times out
> ---
>
> Key: HDFS-4261
> URL: https://issues.apache.org/jira/browse/HDFS-4261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, 
> HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, 
> HDFS-4261-v7.patch, HDFS-4261-v8.patch, jstack-mac-18567, jstack-win-5488, 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac,
>  
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win,
>  test-balancer-with-node-group-timeout.txt
>
>
> When I manually ran TestBalancerWithNodeGroup, it always timed out on my 
> machine.  Looking at the Jenkins report [build 
> #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/],
>  TestBalancerWithNodeGroup somehow was skipped so that the problem was not 
> detected.



[jira] [Resolved] (HDFS-4386) Backport HDFS-4261 to branch-1 to fix timeout in TestBalancerWithNodeGroup

2013-01-10 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du resolved HDFS-4386.
--

Resolution: Duplicate

Per others' feedback, we should address this in HDFS-4261 for the other branches.

> Backport HDFS-4261 to branch-1 to fix timeout in TestBalancerWithNodeGroup
> --
>
> Key: HDFS-4386
> URL: https://issues.apache.org/jira/browse/HDFS-4386
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Junping Du
>
> Timeouts have also been observed in TestBalancerWithNodeGroup, e.g.: 
> https://issues.apache.org/jira/browse/HBASE-7529?focusedCommentId=13549790&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13549790;
>  we should fix them as well.



[jira] [Commented] (HDFS-4261) TestBalancerWithNodeGroup times out

2013-01-10 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550781#comment-13550781
 ] 

Aaron T. Myers commented on HDFS-4261:
--

Have any of the node group-related balancer changes been back-ported to 
branch-1? I was under the impression that none of them have even been 
back-ported to branch-2, let alone branch-1.

> TestBalancerWithNodeGroup times out
> ---
>
> Key: HDFS-4261
> URL: https://issues.apache.org/jira/browse/HDFS-4261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, 
> HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, 
> HDFS-4261-v7.patch, HDFS-4261-v8.patch, jstack-mac-18567, jstack-win-5488, 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac,
>  
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win,
>  test-balancer-with-node-group-timeout.txt
>
>
> When I manually ran TestBalancerWithNodeGroup, it always timed out on my 
> machine.  Looking at the Jenkins report [build 
> #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/],
>  TestBalancerWithNodeGroup somehow was skipped so that the problem was not 
> detected.



[jira] [Commented] (HDFS-4261) TestBalancerWithNodeGroup times out

2013-01-10 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550773#comment-13550773
 ] 

Junping Du commented on HDFS-4261:
--

Oops. I already filed it before your comments...

> TestBalancerWithNodeGroup times out
> ---
>
> Key: HDFS-4261
> URL: https://issues.apache.org/jira/browse/HDFS-4261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, 
> HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, 
> HDFS-4261-v7.patch, HDFS-4261-v8.patch, jstack-mac-18567, jstack-win-5488, 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac,
>  
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win,
>  test-balancer-with-node-group-timeout.txt
>
>
> When I manually ran TestBalancerWithNodeGroup, it always timed out on my 
> machine.  Looking at the Jenkins report [build 
> #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/],
>  TestBalancerWithNodeGroup somehow was skipped so that the problem was not 
> detected.



[jira] [Commented] (HDFS-4328) TestLargeBlock#testLargeBlockSize is timing out

2013-01-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550770#comment-13550770
 ] 

Hadoop QA commented on HDFS-4328:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564314/HDFS-4328.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3820//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3820//console

This message is automatically generated.

> TestLargeBlock#testLargeBlockSize is timing out
> ---
>
> Key: HDFS-4328
> URL: https://issues.apache.org/jira/browse/HDFS-4328
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Jason Lowe
>Assignee: Chris Nauroth
> Attachments: HDFS-4328.1.patch
>
>
> For some time now TestLargeBlock#testLargeBlockSize has been timing out on 
> trunk.  It is getting hung up during cluster shutdown, and after 15 minutes 
> surefire kills it and causes the build to fail since it exited uncleanly.
> In addition to fixing the hang, we should consider adding a timeout parameter 
> to the @Test decorator for this test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4386) Backport HDFS-4261 to branch-1 to fix timeout in TestBalancerWithNodeGroup

2013-01-10 Thread Junping Du (JIRA)
Junping Du created HDFS-4386:


 Summary: Backport HDFS-4261 to branch-1 to fix timeout in 
TestBalancerWithNodeGroup
 Key: HDFS-4386
 URL: https://issues.apache.org/jira/browse/HDFS-4386
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Junping Du


Timeouts have also been observed in TestBalancerWithNodeGroup, e.g.: 
https://issues.apache.org/jira/browse/HBASE-7529?focusedCommentId=13549790&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13549790
We should fix it in branch-1 as well.

--


[jira] [Commented] (HDFS-4261) TestBalancerWithNodeGroup times out

2013-01-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550768#comment-13550768
 ] 

Suresh Srinivas commented on HDFS-4261:
---

Please attach a patch to this Jira. I will commit it. No need for a separate 
Jira for such a simple change. 




> TestBalancerWithNodeGroup times out
> ---
>
> Key: HDFS-4261
> URL: https://issues.apache.org/jira/browse/HDFS-4261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, 
> HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, 
> HDFS-4261-v7.patch, HDFS-4261-v8.patch, jstack-mac-18567, jstack-win-5488, 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac,
>  
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win,
>  test-balancer-with-node-group-timeout.txt
>
>
> When I manually ran TestBalancerWithNodeGroup, it always timed out on my 
> machine.  Looking at the Jenkins report [build 
> #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/],
>  TestBalancerWithNodeGroup somehow was skipped so that the problem was not 
> detected.

--


[jira] [Commented] (HDFS-4380) Opening a file for read before writer writes a block causes NPE

2013-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550756#comment-13550756
 ] 

Hudson commented on HDFS-4380:
--

Integrated in HBase-0.94 #722 (See 
[https://builds.apache.org/job/HBase-0.94/722/])
HBASE-7530  [replication] Work around HDFS-4380 else we get NPEs
HBASE-7531  [replication] NPE in SequenceFileLogReader because
ReplicationSource doesn't nullify the reader
HBASE-7534  [replication] TestReplication.queueFailover can fail
because HBaseTestingUtility.createMultiRegions is dangerous 
(Revision 1431769)

 Result = SUCCESS
jdcryans : 
Files : 
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/replication/TestReplication.java


> Opening a file for read before writer writes a block causes NPE
> ---
>
> Key: HDFS-4380
> URL: https://issues.apache.org/jira/browse/HDFS-4380
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 1.0.3
>Reporter: Todd Lipcon
>
> JD Cryans found this issue: it seems like, if you open a file for read 
> immediately after it's been created by the writer, after a block has been 
> allocated, but before the block is created on the DNs, then you can end up 
> with the following NPE:
> java.lang.NullPointerException
>at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.updateBlockInfo(DFSClient.java:1885)
>at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1858)
>at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1834)
>at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:578)
>at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:154)
> This seems to be because {{getBlockInfo}} returns a null block when the DN 
> doesn't yet have the replica. The client should probably either fall back to 
> a different replica or treat it as zero-length.
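A minimal sketch of the zero-length fallback suggested above; the class and method names here are hypothetical, not the actual DFSClient code:

```java
public class BlockInfoGuard {
    // Stand-in for the located-block info a DN returns; illustrative only.
    static final class Block {
        final long length;
        Block(long length) { this.length = length; }
    }

    // Treat a replica the DN has not materialized yet (a null block)
    // as zero-length instead of dereferencing it and hitting an NPE.
    static long safeLength(Block blockFromDn) {
        return blockFromDn == null ? 0L : blockFromDn.length;
    }

    public static void main(String[] args) {
        System.out.println(safeLength(null));          // 0
        System.out.println(safeLength(new Block(42))); // 42
    }
}
```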

--


[jira] [Created] (HDFS-4385) Maven RAT plugin is not checking all source files

2013-01-10 Thread Thomas Graves (JIRA)
Thomas Graves created HDFS-4385:
---

 Summary: Maven RAT plugin is not checking all source files
 Key: HDFS-4385
 URL: https://issues.apache.org/jira/browse/HDFS-4385
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.5, 3.0.0, 2.0.3-alpha
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical


HDFS side of HADOOP-9097

Running 'mvn apache-rat:check' passes, but running RAT by hand (by downloading 
the JAR) produces some warnings for Java files, amongst others.


--


[jira] [Commented] (HDFS-4380) Opening a file for read before writer writes a block causes NPE

2013-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550747#comment-13550747
 ] 

Hudson commented on HDFS-4380:
--

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #342 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/342/])
HBASE-7530  [replication] Work around HDFS-4380 else we get NPEs
HBASE-7531  [replication] NPE in SequenceFileLogReader because
ReplicationSource doesn't nullify the reader
HBASE-7534  [replication] TestReplication.queueFailover can fail
because HBaseTestingUtility.createMultiRegions is dangerous 
(Revision 1431768)

 Result = FAILURE
jdcryans : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplication.java


> Opening a file for read before writer writes a block causes NPE
> ---
>
> Key: HDFS-4380
> URL: https://issues.apache.org/jira/browse/HDFS-4380
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 1.0.3
>Reporter: Todd Lipcon
>
> JD Cryans found this issue: it seems like, if you open a file for read 
> immediately after it's been created by the writer, after a block has been 
> allocated, but before the block is created on the DNs, then you can end up 
> with the following NPE:
> java.lang.NullPointerException
>at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.updateBlockInfo(DFSClient.java:1885)
>at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1858)
>at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1834)
>at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:578)
>at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:154)
> This seems to be because {{getBlockInfo}} returns a null block when the DN 
> doesn't yet have the replica. The client should probably either fall back to 
> a different replica or treat it as zero-length.

--


[jira] [Commented] (HDFS-4380) Opening a file for read before writer writes a block causes NPE

2013-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550742#comment-13550742
 ] 

Hudson commented on HDFS-4380:
--

Integrated in HBase-TRUNK #3726 (See 
[https://builds.apache.org/job/HBase-TRUNK/3726/])
HBASE-7530  [replication] Work around HDFS-4380 else we get NPEs
HBASE-7531  [replication] NPE in SequenceFileLogReader because
ReplicationSource doesn't nullify the reader
HBASE-7534  [replication] TestReplication.queueFailover can fail
because HBaseTestingUtility.createMultiRegions is dangerous 
(Revision 1431768)

 Result = FAILURE
jdcryans : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplication.java


> Opening a file for read before writer writes a block causes NPE
> ---
>
> Key: HDFS-4380
> URL: https://issues.apache.org/jira/browse/HDFS-4380
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 1.0.3
>Reporter: Todd Lipcon
>
> JD Cryans found this issue: it seems like, if you open a file for read 
> immediately after it's been created by the writer, after a block has been 
> allocated, but before the block is created on the DNs, then you can end up 
> with the following NPE:
> java.lang.NullPointerException
>at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.updateBlockInfo(DFSClient.java:1885)
>at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1858)
>at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1834)
>at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:578)
>at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:154)
> This seems to be because {{getBlockInfo}} returns a null block when the DN 
> doesn't yet have the replica. The client should probably either fall back to 
> a different replica or treat it as zero-length.

--


[jira] [Updated] (HDFS-2908) Add apache license header for StorageReport.java

2013-01-10 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HDFS-2908:


Fix Version/s: 2.0.3-alpha

I pulled this into branch-2.

> Add apache license header for StorageReport.java
> 
>
> Key: HDFS-2908
> URL: https://issues.apache.org/jira/browse/HDFS-2908
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 0.24.0
>Reporter: Suresh Srinivas
>Assignee: Brandon Li
>Priority: Minor
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HDFS-2908.patch
>
>
> StorageReport.java added in HDFS-2899 is missing Apache license header.

--


[jira] [Commented] (HDFS-4261) TestBalancerWithNodeGroup times out

2013-01-10 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550735#comment-13550735
 ] 

Junping Du commented on HDFS-4261:
--

Thanks, Ted, for fixing the typo. I will file a JIRA to backport this patch to 
branch-1.

> TestBalancerWithNodeGroup times out
> ---
>
> Key: HDFS-4261
> URL: https://issues.apache.org/jira/browse/HDFS-4261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, 
> HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, 
> HDFS-4261-v7.patch, HDFS-4261-v8.patch, jstack-mac-18567, jstack-win-5488, 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac,
>  
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win,
>  test-balancer-with-node-group-timeout.txt
>
>
> When I manually ran TestBalancerWithNodeGroup, it always timed out on my 
> machine.  Looking at the Jenkins report [build 
> #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/],
>  TestBalancerWithNodeGroup somehow was skipped so that the problem was not 
> detected.

--


[jira] [Commented] (HDFS-4237) Add unit tests for HTTP-based filesystems against secure MiniDFSCluster

2013-01-10 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550736#comment-13550736
 ] 

Andy Isaacson commented on HDFS-4237:
-

A few tab characters crept into your patch.  Please take them out.

{code}
+// We ignore this test class because the test cases should be run only
+// when external Kdc parameters are specified. TestSecureWebHDFS
+// checks if these settings are set using JUnit4 Assume, which does
+// not work with TestCase (JUnit 3.x). Read GRADLE-1879 for more
+// information.
{code}
Given that this is going into hadoop-hdfs, I don't see why we need to support 
JUnit 3.

{code}
+TestWebHDFS.largeFileTest(conf, 200L << 20, secureUgi); //200MB file length
{code}
I'd rather see it written as 200 * 1024 * 1024 rather than using a bitshift.

Other than those issues, it seems reasonable.  Presumably this requires running 
as root (special permissions) to bind ports < 1024; can we document that 
process somewhere?
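For reference, the shift and the explicit multiplication are numerically identical; a quick illustrative check:

```java
public class ShiftVsMultiply {
    // Shifting left by 20 bits multiplies by 2^20 = 1024 * 1024 (1 MiB),
    // so both spellings denote the same 200 MB file length.
    static final long SHIFTED = 200L << 20;
    static final long EXPLICIT = 200L * 1024 * 1024;

    public static void main(String[] args) {
        System.out.println(SHIFTED == EXPLICIT); // true
        System.out.println(SHIFTED);             // 209715200
    }
}
```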

> Add unit tests for HTTP-based filesystems against secure MiniDFSCluster
> ---
>
> Key: HDFS-4237
> URL: https://issues.apache.org/jira/browse/HDFS-4237
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: security, test, webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Stephen Chu
>Assignee: Stephen Chu
> Attachments: HDFS-4237.patch.001
>
>
> Now that we can start a secure MiniDFSCluster (HADOOP-9004), we need more 
> security unit tests.
> A good area to add secure tests is the HTTP-based filesystems (WebHDFS, 
> HttpFs).

--


[jira] [Commented] (HDFS-4381) Add javadoc for FSImageFormat with description about the FSImage format with/without localNameINodes

2013-01-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550725#comment-13550725
 ] 

Hadoop QA commented on HDFS-4381:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564292/HDFS-4381.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3819//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3819//console

This message is automatically generated.

> Add javadoc for FSImageFormat with description about the FSImage format 
> with/without localNameINodes
> 
>
> Key: HDFS-4381
> URL: https://issues.apache.org/jira/browse/HDFS-4381
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-4381.001.patch, HDFS-4381.002.patch
>
>


--


[jira] [Commented] (HDFS-4384) test_libhdfs_threaded gets SEGV if JNIEnv cannot be initialized

2013-01-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550722#comment-13550722
 ] 

Hadoop QA commented on HDFS-4384:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564290/HDFS-4384.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3818//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3818//console

This message is automatically generated.

> test_libhdfs_threaded gets SEGV if JNIEnv cannot be initialized
> ---
>
> Key: HDFS-4384
> URL: https://issues.apache.org/jira/browse/HDFS-4384
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4384.001.patch
>
>
> test_libhdfs_threaded gets a SEGV if JNIEnv cannot be initialized.  This is 
> incorrect; it should instead exit with a nice error message in this case.

--


[jira] [Commented] (HDFS-4384) test_libhdfs_threaded gets SEGV if JNIEnv cannot be initialized

2013-01-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550715#comment-13550715
 ] 

Colin Patrick McCabe commented on HDFS-4384:


It's simpler than that.  (*env)->DeleteLocalRef(...) fails because env is 
{{NULL}}.  The other error cases don't have this problem.

> test_libhdfs_threaded gets SEGV if JNIEnv cannot be initialized
> ---
>
> Key: HDFS-4384
> URL: https://issues.apache.org/jira/browse/HDFS-4384
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4384.001.patch
>
>
> test_libhdfs_threaded gets a SEGV if JNIEnv cannot be initialized.  This is 
> incorrect; it should instead exit with a nice error message in this case.

--


[jira] [Updated] (HDFS-4328) TestLargeBlock#testLargeBlockSize is timing out

2013-01-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4328:


Status: Patch Available  (was: Open)

> TestLargeBlock#testLargeBlockSize is timing out
> ---
>
> Key: HDFS-4328
> URL: https://issues.apache.org/jira/browse/HDFS-4328
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Jason Lowe
>Assignee: Chris Nauroth
> Attachments: HDFS-4328.1.patch
>
>
> For some time now TestLargeBlock#testLargeBlockSize has been timing out on 
> trunk.  It is getting hung up during cluster shutdown, and after 15 minutes 
> surefire kills it and causes the build to fail since it exited uncleanly.
> In addition to fixing the hang, we should consider adding a timeout parameter 
> to the @Test decorator for this test.

--


[jira] [Updated] (HDFS-4328) TestLargeBlock#testLargeBlockSize is timing out

2013-01-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4328:


Attachment: HDFS-4328.1.patch

I'm attaching a patch.  {{DataTransferThrottler#throttle}} has been changed to 
abort the throttle and re-interrupt the thread instead of ignoring the 
{{InterruptedException}}.  The main loop for sending packets has been changed 
to terminate early if it gets interrupted.  I've also added a timeout to the 
@Test annotation in {{TestLargeBlock}}.
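The interrupt-handling change described above can be sketched as follows; the class and method names are illustrative, not the actual DataTransferThrottler code:

```java
public class InterruptAwareThrottler {
    private final long periodMillis;

    public InterruptAwareThrottler(long periodMillis) {
        this.periodMillis = periodMillis;
    }

    // On interrupt, abort the throttle and restore the thread's interrupt
    // flag instead of swallowing the exception, so the caller's packet-send
    // loop can observe the flag and terminate early.
    public void throttle() {
        try {
            Thread.sleep(periodMillis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        InterruptAwareThrottler t = new InterruptAwareThrottler(10);
        Thread.currentThread().interrupt();       // simulate an interrupt
        t.throttle();                             // returns immediately
        System.out.println(Thread.interrupted()); // true: flag was restored
    }
}
```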

> TestLargeBlock#testLargeBlockSize is timing out
> ---
>
> Key: HDFS-4328
> URL: https://issues.apache.org/jira/browse/HDFS-4328
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Jason Lowe
>Assignee: Chris Nauroth
> Attachments: HDFS-4328.1.patch
>
>
> For some time now TestLargeBlock#testLargeBlockSize has been timing out on 
> trunk.  It is getting hung up during cluster shutdown, and after 15 minutes 
> surefire kills it and causes the build to fail since it exited uncleanly.
> In addition to fixing the hang, we should consider adding a timeout parameter 
> to the @Test decorator for this test.

--


[jira] [Commented] (HDFS-4237) Add unit tests for HTTP-based filesystems against secure MiniDFSCluster

2013-01-10 Thread Stephen Chu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550664#comment-13550664
 ] 

Stephen Chu commented on HDFS-4237:
---

Is anyone available to review the patch? It'd be much appreciated!

> Add unit tests for HTTP-based filesystems against secure MiniDFSCluster
> ---
>
> Key: HDFS-4237
> URL: https://issues.apache.org/jira/browse/HDFS-4237
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: security, test, webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Stephen Chu
>Assignee: Stephen Chu
> Attachments: HDFS-4237.patch.001
>
>
> Now that we can start a secure MiniDFSCluster (HADOOP-9004), we need more 
> security unit tests.
> A good area to add secure tests is the HTTP-based filesystems (WebHDFS, 
> HttpFs).

--


[jira] [Commented] (HDFS-4381) Add javadoc for FSImageFormat with description about the FSImage format with/without localNameINodes

2013-01-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550650#comment-13550650
 ] 

Hadoop QA commented on HDFS-4381:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564288/HDFS-4381.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3817//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3817//console

This message is automatically generated.

> Add javadoc for FSImageFormat with description about the FSImage format 
> with/without localNameINodes
> 
>
> Key: HDFS-4381
> URL: https://issues.apache.org/jira/browse/HDFS-4381
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-4381.001.patch, HDFS-4381.002.patch
>
>


--


[jira] [Commented] (HDFS-4356) BlockReaderLocal should use passed file descriptors rather than paths

2013-01-10 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550640#comment-13550640
 ] 

Todd Lipcon commented on HDFS-4356:
---

{code}
+if (fisCache != null) {
+  LOG.debug("putting FileInputStream for " + filename +
+  " back into FileInputStreamCache");
+  fisCache.put(datanodeID, block, new FileInputStream[] {dataIn, 
checksumIn});
{code}

I think this is actually the code I meant to point to when I asked to add a 
guard. Copy-paste fail. *This* is on the hot path for reads. (also the debug 
just below this one)


{code}
+  = new ScheduledThreadPoolExecutor(1, new Daemon.DaemonFactory());
{code}
This only addressed half of the above comment. The thread needs a name. Check 
out ThreadFactoryBuilder from guava.


{code}
+  /**
+   * True if the FileInputStream is closed.
+   */
{code}
Bad doc -- should be: "True if the FileInputStreamCache has been closed"
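The two mechanical points above (guarding hot-path debug logging, and naming pool threads) can be sketched like this; the names are illustrative, and a plain ThreadFactory stands in for Guava's ThreadFactoryBuilder:

```java
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class ReviewPointsSketch {
    static boolean debugEnabled = false;

    // Guarding the log call keeps the string concatenation off the hot
    // read path when debug logging is disabled.
    static void logPut(String filename) {
        if (debugEnabled) {
            System.out.println("putting FileInputStream for " + filename
                + " back into FileInputStreamCache");
        }
    }

    // A factory that gives each thread a descriptive name; Guava's
    // ThreadFactoryBuilder#setNameFormat achieves the same more tersely.
    static final class NamedDaemonFactory implements ThreadFactory {
        private final AtomicInteger n = new AtomicInteger();
        @Override
        public Thread newThread(Runnable r) {
            Thread t = new Thread(r, "stream-cache-cleaner-" + n.incrementAndGet());
            t.setDaemon(true);
            return t;
        }
    }

    public static void main(String[] args) {
        ScheduledThreadPoolExecutor exec =
            new ScheduledThreadPoolExecutor(1, new NamedDaemonFactory());
        logPut("example-file"); // prints nothing while debugEnabled is false
        System.out.println(new NamedDaemonFactory().newThread(() -> {}).getName());
        exec.shutdown();
    }
}
```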


> BlockReaderLocal should use passed file descriptors rather than paths
> -
>
> Key: HDFS-4356
> URL: https://issues.apache.org/jira/browse/HDFS-4356
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, hdfs-client, performance
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: 04b-cumulative.patch, _04b.patch, _04c.patch, 
> 04-cumulative.patch, 04d-cumulative.patch, _04e.patch, 04f-cumulative.patch, 
> 04g-cumulative.patch
>
>
> {{BlockReaderLocal}} should use file descriptors passed over UNIX domain 
> sockets rather than paths.  We also need some configuration options for these 
> UNIX domain sockets.

--


[jira] [Commented] (HDFS-4288) NN accepts incremental BR as IBR in safemode

2013-01-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550637#comment-13550637
 ] 

Hadoop QA commented on HDFS-4288:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564286/HDFS-4288.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestPersistBlocks

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3816//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3816//console

This message is automatically generated.

> NN accepts incremental BR as IBR in safemode
> 
>
> Key: HDFS-4288
> URL: https://issues.apache.org/jira/browse/HDFS-4288
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-4288.branch-23.patch, HDFS-4288.patch
>
>
> If a DN is ready to send an incremental BR and the NN goes down, the DN will 
> repeatedly try to reconnect.  The NN will then process the DN's incremental 
> BR as an initial BR.  The NN now thinks the DN has only a few blocks, and 
> will ignore all subsequent BRs from that DN until out of safemode -- which it 
> may never do because of all the "missing" blocks on the affected DNs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4366) Block Replication Policy Implementation May Skip Higher-Priority Blocks for Lower-Priority Blocks

2013-01-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550621#comment-13550621
 ] 

Hadoop QA commented on HDFS-4366:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564282/HDFS-4366.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3815//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3815//console

This message is automatically generated.

> Block Replication Policy Implementation May Skip Higher-Priority Blocks for 
> Lower-Priority Blocks
> -
>
> Key: HDFS-4366
> URL: https://issues.apache.org/jira/browse/HDFS-4366
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 0.23.5
>Reporter: Derek Dagit
>Assignee: Derek Dagit
> Attachments: HDFS-4366.patch, hdfs-4366-unittest.patch
>
>
> In certain cases, higher-priority under-replicated blocks can be skipped by 
> the replication policy implementation.  The current implementation maintains, 
> for each priority level, an index into a list of blocks that are 
> under-replicated.  Together, the lists compose a priority queue (see note 
> later about branch-0.23).  In some cases when blocks are removed from a list, 
> the caller (BlockManager) properly handles the index into the list from which 
> it removed a block.  In some other cases, the index remains stationary while 
> the list changes.  Whenever this happens, and the removed block happened to 
> be at or before the index, the implementation will skip over a block when 
> selecting blocks for replication work.
> In situations where entire racks are decommissioned, leading to many 
> under-replicated blocks, loss of blocks can occur.
> Background: HDFS-1765
> This patch to trunk greatly improved the state of the replication policy 
> implementation.  Prior to the patch, the following details were true:
>   * The block "priority queue" was no such thing: it was really a set of 
> trees that held blocks in their natural ordering, that being by block ID, 
> which resulted in iterator walks over the blocks in pseudo-random order.
>   * There was only a single index into an iteration over all of the 
> blocks...
>   * ... meaning the implementation was only successful in respecting 
> priority levels on the first pass.  Overall, the behavior was a 
> round-robin-type scheduling of blocks.
> After the patch:
>   * A proper priority queue is implemented, preserving log n operations 
> while iterating over blocks in the order added.
>   * A separate index for each priority level is kept...
>   * ... allowing for processing of the highest-priority blocks first 
> regardless of which priority had last been processed.
> The change was suggested for branch-0.23 as well as trunk, but it does not 
> appear to have been pulled in.
> The problem:
> Although the indices are now tracked in a better way, there is a 
> synchronization issue since the indices are managed outside of methods to 
> modify the contents of the queue.
> Removal of a block from a priority level without adjusting the index can mean 
> that the index then points to the block after the block it originally pointed 
> to.  In the next round of scheduling for that priority level, the block 
> originally pointed to by the index is skipped.
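The skip described above can be reproduced with a toy model. This is a minimal illustrative sketch only — the names (`queue`, `replIndex`) mirror the discussion, not the actual UnderReplicatedBlocks internals:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of one priority level's block list and its scheduling index.
class IndexSkipDemo {
    static String nextScheduled() {
        List<String> queue = new ArrayList<>(List.of("blk_1", "blk_2", "blk_3"));
        int replIndex = 1;      // the next pass is supposed to resume at blk_2

        // A block before the index is removed, but replIndex is not adjusted.
        queue.remove("blk_1");  // list is now [blk_2, blk_3]

        // The next pass starts at replIndex and silently skips blk_2.
        return queue.get(replIndex);
    }

    public static void main(String[] args) {
        System.out.println("scheduled: " + nextScheduled()); // blk_3, not blk_2
    }
}
```

Because `List.remove` shifts the remaining elements left, a stationary index now points one block past the one it meant to resume at.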



[jira] [Commented] (HDFS-4382) Fix typo MAX_NOT_CHANGED_INTERATIONS

2013-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550607#comment-13550607
 ] 

Hudson commented on HDFS-4382:
--

Integrated in Hadoop-trunk-Commit #3215 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3215/])
HDFS-4382. Fix typo MAX_NOT_CHANGED_INTERATIONS. Contributed by Ted Yu. 
(Revision 1431726)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431726
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java


> Fix typo MAX_NOT_CHANGED_INTERATIONS
> 
>
> Key: HDFS-4382
> URL: https://issues.apache.org/jira/browse/HDFS-4382
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 3.0.0
>
> Attachments: hdfs-4382-v1.txt
>
>
> Here is an example:
> {code}
> +  if (notChangedIterations >= MAX_NOT_CHANGED_INTERATIONS) {
> {code}



[jira] [Commented] (HDFS-4377) Some trivial DN comment cleanup

2013-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550608#comment-13550608
 ] 

Hudson commented on HDFS-4377:
--

Integrated in Hadoop-trunk-Commit #3215 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3215/])
HDFS-4377. Some trivial DN comment cleanup. Contributed by Eli Collins 
(Revision 1431753)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431753
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java


> Some trivial DN comment cleanup
> ---
>
> Key: HDFS-4377
> URL: https://issues.apache.org/jira/browse/HDFS-4377
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Trivial
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-4377.txt, hdfs-4377.txt
>
>
> DataStorage.java
> - The "initilized" member is misspelled
> - Comment what the storageID member is
> DataNode.java
> - Cleanup createNewStorageId comment (should mention the port is included and 
> is overly verbose)
> BlockManager.java
> - TreeSet in the comment should be TreeMap



[jira] [Commented] (HDFS-4382) Fix typo MAX_NOT_CHANGED_INTERATIONS

2013-01-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550597#comment-13550597
 ] 

Hadoop QA commented on HDFS-4382:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564280/hdfs-4382-v1.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3814//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3814//console

This message is automatically generated.

> Fix typo MAX_NOT_CHANGED_INTERATIONS
> 
>
> Key: HDFS-4382
> URL: https://issues.apache.org/jira/browse/HDFS-4382
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 3.0.0
>
> Attachments: hdfs-4382-v1.txt
>
>
> Here is an example:
> {code}
> +  if (notChangedIterations >= MAX_NOT_CHANGED_INTERATIONS) {
> {code}



[jira] [Updated] (HDFS-4377) Some trivial DN comment cleanup

2013-01-10 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-4377:
--

  Resolution: Fixed
   Fix Version/s: 2.0.3-alpha
Target Version/s:   (was: 2.0.3-alpha)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks Todd, I've committed this and merged to branch-2.

> Some trivial DN comment cleanup
> ---
>
> Key: HDFS-4377
> URL: https://issues.apache.org/jira/browse/HDFS-4377
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Trivial
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-4377.txt, hdfs-4377.txt
>
>
> DataStorage.java
> - The "initilized" member is misspelled
> - Comment what the storageID member is
> DataNode.java
> - Cleanup createNewStorageId comment (should mention the port is included and 
> is overly verbose)
> BlockManager.java
> - TreeSet in the comment should be TreeMap



[jira] [Commented] (HDFS-4384) test_libhdfs_threaded gets SEGV if JNIEnv cannot be initialized

2013-01-10 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550581#comment-13550581
 ] 

Eli Collins commented on HDFS-4384:
---

Given that the error case also returns null (and free(cl) is safe when cl is 
null), the DeleteLocalRefs are causing the segv? If so should they be guarded 
instead since other error cases would hit the same?

> test_libhdfs_threaded gets SEGV if JNIEnv cannot be initialized
> ---
>
> Key: HDFS-4384
> URL: https://issues.apache.org/jira/browse/HDFS-4384
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4384.001.patch
>
>
> test_libhdfs_threaded gets a SEGV if JNIEnv cannot be initialized.  This is 
> incorrect; it should instead exit with a nice error message in this case.



[jira] [Commented] (HDFS-4377) Some trivial DN comment cleanup

2013-01-10 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550578#comment-13550578
 ] 

Todd Lipcon commented on HDFS-4377:
---

+1

> Some trivial DN comment cleanup
> ---
>
> Key: HDFS-4377
> URL: https://issues.apache.org/jira/browse/HDFS-4377
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Trivial
> Attachments: hdfs-4377.txt, hdfs-4377.txt
>
>
> DataStorage.java
> - The "initilized" member is misspelled
> - Comment what the storageID member is
> DataNode.java
> - Cleanup createNewStorageId comment (should mention the port is included and 
> is overly verbose)
> BlockManager.java
> - TreeSet in the comment should be TreeMap



[jira] [Commented] (HDFS-4381) Add javadoc for FSImageFormat with description about the FSImage format with/without localNameINodes

2013-01-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550575#comment-13550575
 ] 

Suresh Srinivas commented on HDFS-4381:
---

The new patch looks much better. Will wait for the Jenkins +1 before 
committing it.

> Add javadoc for FSImageFormat with description about the FSImage format 
> with/without localNameINodes
> 
>
> Key: HDFS-4381
> URL: https://issues.apache.org/jira/browse/HDFS-4381
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-4381.001.patch, HDFS-4381.002.patch
>
>




[jira] [Updated] (HDFS-4381) Add javadoc for FSImageFormat with description about the FSImage format with/without localNameINodes

2013-01-10 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4381:


Attachment: HDFS-4381.002.patch

Update the patch to make it compact.

> Add javadoc for FSImageFormat with description about the FSImage format 
> with/without localNameINodes
> 
>
> Key: HDFS-4381
> URL: https://issues.apache.org/jira/browse/HDFS-4381
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-4381.001.patch, HDFS-4381.002.patch
>
>




[jira] [Commented] (HDFS-4381) Add javadoc for FSImageFormat with description about the FSImage format with/without localNameINodes

2013-01-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550574#comment-13550574
 ] 

Suresh Srinivas commented on HDFS-4381:
---

Jing, thanks for documenting this. The javadoc can be compressed a bit by 
putting "{" on the same line, etc. I feel that makes it more readable.

> Add javadoc for FSImageFormat with description about the FSImage format 
> with/without localNameINodes
> 
>
> Key: HDFS-4381
> URL: https://issues.apache.org/jira/browse/HDFS-4381
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-4381.001.patch, HDFS-4381.002.patch
>
>




[jira] [Updated] (HDFS-4384) test_libhdfs_threaded gets SEGV if JNIEnv cannot be initialized

2013-01-10 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4384:
---

Attachment: HDFS-4384.001.patch

> test_libhdfs_threaded gets SEGV if JNIEnv cannot be initialized
> ---
>
> Key: HDFS-4384
> URL: https://issues.apache.org/jira/browse/HDFS-4384
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4384.001.patch
>
>
> test_libhdfs_threaded gets a SEGV if JNIEnv cannot be initialized.  This is 
> incorrect; it should instead exit with a nice error message in this case.



[jira] [Updated] (HDFS-4384) test_libhdfs_threaded gets SEGV if JNIEnv cannot be initialized

2013-01-10 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4384:
---

Status: Patch Available  (was: Open)

> test_libhdfs_threaded gets SEGV if JNIEnv cannot be initialized
> ---
>
> Key: HDFS-4384
> URL: https://issues.apache.org/jira/browse/HDFS-4384
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4384.001.patch
>
>
> test_libhdfs_threaded gets a SEGV if JNIEnv cannot be initialized.  This is 
> incorrect; it should instead exit with a nice error message in this case.



[jira] [Created] (HDFS-4384) test_libhdfs_threaded gets SEGV if JNIEnv cannot be initialized

2013-01-10 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-4384:
--

 Summary: test_libhdfs_threaded gets SEGV if JNIEnv cannot be 
initialized
 Key: HDFS-4384
 URL: https://issues.apache.org/jira/browse/HDFS-4384
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


test_libhdfs_threaded gets a SEGV if JNIEnv cannot be initialized.  This is 
incorrect; it should instead exit with a nice error message in this case.



[jira] [Updated] (HDFS-4382) Fix typo MAX_NOT_CHANGED_INTERATIONS

2013-01-10 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4382:
--

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

I committed the patch to trunk. Thank you Ted.

> Fix typo MAX_NOT_CHANGED_INTERATIONS
> 
>
> Key: HDFS-4382
> URL: https://issues.apache.org/jira/browse/HDFS-4382
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 3.0.0
>
> Attachments: hdfs-4382-v1.txt
>
>
> Here is an example:
> {code}
> +  if (notChangedIterations >= MAX_NOT_CHANGED_INTERATIONS) {
> {code}



[jira] [Created] (HDFS-4383) Document the lease limits

2013-01-10 Thread Eli Collins (JIRA)
Eli Collins created HDFS-4383:
-

 Summary: Document the lease limits
 Key: HDFS-4383
 URL: https://issues.apache.org/jira/browse/HDFS-4383
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eli Collins
Priority: Trivial


HdfsConstants.java or DFSClient/LeaseManager.java could use a comment 
indicating the behavior of hard and soft file lease limit periods.



[jira] [Updated] (HDFS-4382) Fix typo MAX_NOT_CHANGED_INTERATIONS

2013-01-10 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4382:
--

Summary: Fix typo MAX_NOT_CHANGED_INTERATIONS  (was: Fix typo 
MAX_NOT_CHANGED_INTERATIONS when TestBalancerWithNodeGroup was fixed by 
HDFS-4261)

> Fix typo MAX_NOT_CHANGED_INTERATIONS
> 
>
> Key: HDFS-4382
> URL: https://issues.apache.org/jira/browse/HDFS-4382
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: hdfs-4382-v1.txt
>
>
> Here is an example:
> {code}
> +  if (notChangedIterations >= MAX_NOT_CHANGED_INTERATIONS) {
> {code}



[jira] [Updated] (HDFS-4381) Add javadoc for FSImageFormat with description about the FSImage format with/without localNameINodes

2013-01-10 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4381:


Attachment: HDFS-4381.001.patch

> Add javadoc for FSImageFormat with description about the FSImage format 
> with/without localNameINodes
> 
>
> Key: HDFS-4381
> URL: https://issues.apache.org/jira/browse/HDFS-4381
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-4381.001.patch
>
>




[jira] [Updated] (HDFS-4381) Add javadoc for FSImageFormat with description about the FSImage format with/without localNameINodes

2013-01-10 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4381:


Status: Patch Available  (was: Open)

> Add javadoc for FSImageFormat with description about the FSImage format 
> with/without localNameINodes
> 
>
> Key: HDFS-4381
> URL: https://issues.apache.org/jira/browse/HDFS-4381
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-4381.001.patch
>
>




[jira] [Updated] (HDFS-4288) NN accepts incremental BR as IBR in safemode

2013-01-10 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-4288:
--

Status: Patch Available  (was: Open)

> NN accepts incremental BR as IBR in safemode
> 
>
> Key: HDFS-4288
> URL: https://issues.apache.org/jira/browse/HDFS-4288
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 0.23.0, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-4288.branch-23.patch, HDFS-4288.patch
>
>
> If a DN is ready to send an incremental BR and the NN goes down, the DN will 
> repeatedly try to reconnect.  The NN will then process the DN's incremental 
> BR as an initial BR.  The NN now thinks the DN has only a few blocks, and 
> will ignore all subsequent BRs from that DN until out of safemode -- which it 
> may never do because of all the "missing" blocks on the affected DNs.



[jira] [Updated] (HDFS-4288) NN accepts incremental BR as IBR in safemode

2013-01-10 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-4288:
--

Attachment: HDFS-4288.patch

> NN accepts incremental BR as IBR in safemode
> 
>
> Key: HDFS-4288
> URL: https://issues.apache.org/jira/browse/HDFS-4288
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-4288.branch-23.patch, HDFS-4288.patch
>
>
> If a DN is ready to send an incremental BR and the NN goes down, the DN will 
> repeatedly try to reconnect.  The NN will then process the DN's incremental 
> BR as an initial BR.  The NN now thinks the DN has only a few blocks, and 
> will ignore all subsequent BRs from that DN until out of safemode -- which it 
> may never do because of all the "missing" blocks on the affected DNs.



[jira] [Commented] (HDFS-4288) NN accepts incremental BR as IBR in safemode

2013-01-10 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550541#comment-13550541
 ] 

Daryn Sharp commented on HDFS-4288:
---

The change does fix the DN restart issue for trunk/2.  Other than a few tweaks 
to enable testing, it's a 1-line change for trunk.  I'll post that patch, 
followed by an amended patch for 23.

> NN accepts incremental BR as IBR in safemode
> 
>
> Key: HDFS-4288
> URL: https://issues.apache.org/jira/browse/HDFS-4288
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-4288.branch-23.patch
>
>
> If a DN is ready to send an incremental BR and the NN goes down, the DN will 
> repeatedly try to reconnect.  The NN will then process the DN's incremental 
> BR as an initial BR.  The NN now thinks the DN has only a few blocks, and 
> will ignore all subsequent BRs from that DN until out of safemode -- which it 
> may never do because of all the "missing" blocks on the affected DNs.



[jira] [Updated] (HDFS-4366) Block Replication Policy Implementation May Skip Higher-Priority Blocks for Lower-Priority Blocks

2013-01-10 Thread Derek Dagit (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Derek Dagit updated HDFS-4366:
--

Target Version/s: 3.0.0
  Status: Patch Available  (was: Open)

Not targeting for 0.23 since a number of other JIRAs, including HDFS-1765, are 
not yet in.

> Block Replication Policy Implementation May Skip Higher-Priority Blocks for 
> Lower-Priority Blocks
> -
>
> Key: HDFS-4366
> URL: https://issues.apache.org/jira/browse/HDFS-4366
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.5, 3.0.0
>Reporter: Derek Dagit
>Assignee: Derek Dagit
> Attachments: HDFS-4366.patch, hdfs-4366-unittest.patch
>
>
> In certain cases, higher-priority under-replicated blocks can be skipped by 
> the replication policy implementation.  The current implementation maintains, 
> for each priority level, an index into a list of blocks that are 
> under-replicated.  Together, the lists compose a priority queue (see note 
> later about branch-0.23).  In some cases when blocks are removed from a list, 
> the caller (BlockManager) properly handles the index into the list from which 
> it removed a block.  In some other cases, the index remains stationary while 
> the list changes.  Whenever this happens, and the removed block happened to 
> be at or before the index, the implementation will skip over a block when 
> selecting blocks for replication work.
> In situations where entire racks are decommissioned, leading to many 
> under-replicated blocks, loss of blocks can occur.
> Background: HDFS-1765
> This patch to trunk greatly improved the state of the replication policy 
> implementation.  Prior to the patch, the following details were true:
>   * The block "priority queue" was no such thing: it was really a set of 
> trees that held blocks in their natural ordering, that being by block ID, 
> which resulted in iterator walks over the blocks in pseudo-random order.
>   * There was only a single index into an iteration over all of the 
> blocks...
>   * ... meaning the implementation was only successful in respecting 
> priority levels on the first pass.  Overall, the behavior was a 
> round-robin-type scheduling of blocks.
> After the patch:
>   * A proper priority queue is implemented, preserving log n operations 
> while iterating over blocks in the order added.
>   * A separate index for each priority level is kept...
>   * ... allowing for processing of the highest-priority blocks first 
> regardless of which priority had last been processed.
> The change was suggested for branch-0.23 as well as trunk, but it does not 
> appear to have been pulled in.
> The problem:
> Although the indices are now tracked in a better way, there is a 
> synchronization issue since the indices are managed outside of methods to 
> modify the contents of the queue.
> Removal of a block from a priority level without adjusting the index can mean 
> that the index then points to the block after the block it originally pointed 
> to.  In the next round of scheduling for that priority level, the block 
> originally pointed to by the index is skipped.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4366) Block Replication Policy Implementation May Skip Higher-Priority Blocks for Lower-Priority Blocks

2013-01-10 Thread Derek Dagit (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Derek Dagit updated HDFS-4366:
--

Attachment: HDFS-4366.patch

Patch to encapsulate the indices within UnderReplicatedBlocks, and clamp the 
index to zero if negative.

Any call to UnderReplicatedBlocks#remove will decrement the appropriate index 
for the priority.
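As a rough illustration of that fix (a simplified sketch, not the actual 
UnderReplicatedBlocks implementation; the class and method names below are 
hypothetical), removal decrements the stored per-priority index, clamped at 
zero:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Simplified sketch: per-priority block lists plus a stored scheduling
 * index for each priority level.
 */
class UnderReplicatedSketch {
    static final int LEVELS = 5;
    private final List<List<Long>> queues = new ArrayList<>();
    private final int[] replIndex = new int[LEVELS];  // next block to schedule

    UnderReplicatedSketch() {
        for (int i = 0; i < LEVELS; i++) {
            queues.add(new ArrayList<Long>());
        }
    }

    void add(long blockId, int priority) {
        queues.get(priority).add(blockId);
    }

    /** Scheduling work advances the stored index for the level. */
    void advance(int priority) {
        replIndex[priority]++;
    }

    /**
     * The fix: removal also decrements the index for the level, clamped at
     * zero.  Without this, the index silently points one slot too far, and
     * the next scheduling round skips a block at this priority level.
     */
    boolean remove(long blockId, int priority) {
        boolean removed = queues.get(priority).remove(Long.valueOf(blockId));
        if (removed) {
            replIndex[priority] = Math.max(0, replIndex[priority] - 1);
        }
        return removed;
    }

    int indexFor(int priority) {
        return replIndex[priority];
    }
}
```

Encapsulating the decrement inside remove() is what keeps callers like 
BlockManager from having to manage the index themselves.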


> Block Replication Policy Implementation May Skip Higher-Priority Blocks for 
> Lower-Priority Blocks
> -
>
> Key: HDFS-4366
> URL: https://issues.apache.org/jira/browse/HDFS-4366
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 0.23.5
>Reporter: Derek Dagit
>Assignee: Derek Dagit
> Attachments: HDFS-4366.patch, hdfs-4366-unittest.patch
>
>
> In certain cases, higher-priority under-replicated blocks can be skipped by 
> the replication policy implementation.  The current implementation maintains, 
> for each priority level, an index into a list of blocks that are 
> under-replicated.  Together, the lists compose a priority queue (see note 
> later about branch-0.23).  In some cases when blocks are removed from a list, 
> the caller (BlockManager) properly handles the index into the list from which 
> it removed a block.  In some other cases, the index remains stationary while 
> the list changes.  Whenever this happens, and the removed block happened to 
> be at or before the index, the implementation will skip over a block when 
> selecting blocks for replication work.
> In situations when entire racks are decommissioned, leading to many 
> under-replicated blocks, loss of blocks can occur.
> Background: HDFS-1765
> This patch to trunk greatly improved the state of the replication policy 
> implementation.  Prior to the patch, the following details were true:
>   * The block "priority queue" was no such thing: It was really a set of 
> trees that held blocks in natural ordering, that being by the block's ID, 
> which resulted in iterator walks over the blocks in pseudo-random order.
>   * There was only a single index into an iteration over all of the 
> blocks...
>   * ... meaning the implementation was only successful in respecting 
> priority levels on the first pass.  Overall, the behavior was a 
> round-robin-type scheduling of blocks.
> After the patch
>   * A proper priority queue is implemented, preserving log n operations 
> while iterating over blocks in the order added.
>   * A separate index for each priority level is kept...
>   * ... allowing for processing of the highest priority blocks first 
> regardless of which priority had last been processed.
> The change was suggested for branch-0.23 as well as trunk, but it does not 
> appear to have been pulled in.
> The problem:
> Although the indices are now tracked in a better way, there is a 
> synchronization issue since the indices are managed outside of methods to 
> modify the contents of the queue.
> Removal of a block from a priority level without adjusting the index can mean 
> that the index then points to the block after the block it originally pointed 
> to.  In the next round of scheduling for that priority level, the block 
> originally pointed to by the index is skipped.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4382) Fix typo MAX_NOT_CHANGED_INTERATIONS when HDFS-4261 was fixed

2013-01-10 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4382:
--

Affects Version/s: 3.0.0
Fix Version/s: (was: 3.0.0)

> Fix typo MAX_NOT_CHANGED_INTERATIONS when HDFS-4261 was fixed
> -
>
> Key: HDFS-4382
> URL: https://issues.apache.org/jira/browse/HDFS-4382
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: hdfs-4382-v1.txt
>
>
> Here is an example:
> {code}
> +  if (notChangedIterations >= MAX_NOT_CHANGED_INTERATIONS) {
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4382) Fix typo MAX_NOT_CHANGED_INTERATIONS when TestBalancerWithNodeGroup was fixed by HDFS-4261

2013-01-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-4382:
-

Summary: Fix typo MAX_NOT_CHANGED_INTERATIONS when 
TestBalancerWithNodeGroup was fixed by HDFS-4261  (was: Fix typo 
MAX_NOT_CHANGED_INTERATIONS when HDFS-4261 was fixed)

> Fix typo MAX_NOT_CHANGED_INTERATIONS when TestBalancerWithNodeGroup was fixed 
> by HDFS-4261
> --
>
> Key: HDFS-4382
> URL: https://issues.apache.org/jira/browse/HDFS-4382
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: hdfs-4382-v1.txt
>
>
> Here is an example:
> {code}
> +  if (notChangedIterations >= MAX_NOT_CHANGED_INTERATIONS) {
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4382) Fix typo MAX_NOT_CHANGED_INTERATIONS when HDFS-4261 was fixed

2013-01-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550507#comment-13550507
 ] 

Suresh Srinivas commented on HDFS-4382:
---

+1 for the patch.

> Fix typo MAX_NOT_CHANGED_INTERATIONS when HDFS-4261 was fixed
> -
>
> Key: HDFS-4382
> URL: https://issues.apache.org/jira/browse/HDFS-4382
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 3.0.0
>
> Attachments: hdfs-4382-v1.txt
>
>
> Here is an example:
> {code}
> +  if (notChangedIterations >= MAX_NOT_CHANGED_INTERATIONS) {
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HDFS-4382) Fix typo MAX_NOT_CHANGED_INTERATIONS when HDFS-4261 was fixed

2013-01-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HDFS-4382:


Assignee: Ted Yu

> Fix typo MAX_NOT_CHANGED_INTERATIONS when HDFS-4261 was fixed
> -
>
> Key: HDFS-4382
> URL: https://issues.apache.org/jira/browse/HDFS-4382
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 3.0.0
>
> Attachments: hdfs-4382-v1.txt
>
>
> Here is an example:
> {code}
> +  if (notChangedIterations >= MAX_NOT_CHANGED_INTERATIONS) {
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4382) Fix typo MAX_NOT_CHANGED_INTERATIONS when HDFS-4261 was fixed

2013-01-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-4382:
-

Attachment: hdfs-4382-v1.txt

Patch that fixes the typo.

Also changed the constant in Balancer to private - it is private in 
NameNodeConnector.

> Fix typo MAX_NOT_CHANGED_INTERATIONS when HDFS-4261 was fixed
> -
>
> Key: HDFS-4382
> URL: https://issues.apache.org/jira/browse/HDFS-4382
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
> Attachments: hdfs-4382-v1.txt
>
>
> Here is an example:
> {code}
> +  if (notChangedIterations >= MAX_NOT_CHANGED_INTERATIONS) {
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4382) Fix typo MAX_NOT_CHANGED_INTERATIONS when HDFS-4261 was fixed

2013-01-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-4382:
-

Fix Version/s: 3.0.0
   Status: Patch Available  (was: Open)

> Fix typo MAX_NOT_CHANGED_INTERATIONS when HDFS-4261 was fixed
> -
>
> Key: HDFS-4382
> URL: https://issues.apache.org/jira/browse/HDFS-4382
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
> Fix For: 3.0.0
>
> Attachments: hdfs-4382-v1.txt
>
>
> Here is an example:
> {code}
> +  if (notChangedIterations >= MAX_NOT_CHANGED_INTERATIONS) {
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4328) TestLargeBlock#testLargeBlockSize is timing out

2013-01-10 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-4328:
-

Target Version/s: 3.0.0

> TestLargeBlock#testLargeBlockSize is timing out
> ---
>
> Key: HDFS-4328
> URL: https://issues.apache.org/jira/browse/HDFS-4328
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Jason Lowe
>Assignee: Chris Nauroth
>
> For some time now TestLargeBlock#testLargeBlockSize has been timing out on 
> trunk.  It is getting hung up during cluster shutdown, and after 15 minutes 
> surefire kills it and causes the build to fail since it exited uncleanly.
> In addition to fixing the hang, we should consider adding a timeout parameter 
> to the @Test decorator for this test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4328) TestLargeBlock#testLargeBlockSize is timing out

2013-01-10 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550502#comment-13550502
 ] 

Aaron T. Myers commented on HDFS-4328:
--

Makes sense, Chris. Thanks a lot for the investigation.

> TestLargeBlock#testLargeBlockSize is timing out
> ---
>
> Key: HDFS-4328
> URL: https://issues.apache.org/jira/browse/HDFS-4328
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Jason Lowe
>Assignee: Chris Nauroth
>
> For some time now TestLargeBlock#testLargeBlockSize has been timing out on 
> trunk.  It is getting hung up during cluster shutdown, and after 15 minutes 
> surefire kills it and causes the build to fail since it exited uncleanly.
> In addition to fixing the hang, we should consider adding a timeout parameter 
> to the @Test decorator for this test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4261) TestBalancerWithNodeGroup times out

2013-01-10 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550501#comment-13550501
 ] 

Ted Yu commented on HDFS-4261:
--

Created HDFS-4382.

Will upload a patch soon.

Is there plan to fix hanging TestBalancerWithNodeGroup in hadoop 1.1 ?

See HBase QA report:
https://issues.apache.org/jira/browse/HBASE-7529?focusedCommentId=13549790&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13549790

-1 core zombie tests. There are 1 zombie test(s): at 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup.testBalancerWithRackLocality(TestBalancerWithNodeGroup.java:220)

> TestBalancerWithNodeGroup times out
> ---
>
> Key: HDFS-4261
> URL: https://issues.apache.org/jira/browse/HDFS-4261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, 
> HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, 
> HDFS-4261-v7.patch, HDFS-4261-v8.patch, jstack-mac-18567, jstack-win-5488, 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac,
>  
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win,
>  test-balancer-with-node-group-timeout.txt
>
>
> When I manually ran TestBalancerWithNodeGroup, it always timed out on my 
> machine.  Looking at the Jenkins report [build 
> #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/],
>  TestBalancerWithNodeGroup somehow was skipped so that the problem was not 
> detected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4382) Fix typo MAX_NOT_CHANGED_INTERATIONS when HDFS-4261 was fixed

2013-01-10 Thread Ted Yu (JIRA)
Ted Yu created HDFS-4382:


 Summary: Fix typo MAX_NOT_CHANGED_INTERATIONS when HDFS-4261 was 
fixed
 Key: HDFS-4382
 URL: https://issues.apache.org/jira/browse/HDFS-4382
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu


Here is an example:
{code}
+  if (notChangedIterations >= MAX_NOT_CHANGED_INTERATIONS) {
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4261) TestBalancerWithNodeGroup times out

2013-01-10 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550490#comment-13550490
 ] 

Aaron T. Myers commented on HDFS-4261:
--

Whoops! Good catch, Ted. Want to file a JIRA?

> TestBalancerWithNodeGroup times out
> ---
>
> Key: HDFS-4261
> URL: https://issues.apache.org/jira/browse/HDFS-4261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, 
> HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, 
> HDFS-4261-v7.patch, HDFS-4261-v8.patch, jstack-mac-18567, jstack-win-5488, 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac,
>  
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win,
>  test-balancer-with-node-group-timeout.txt
>
>
> When I manually ran TestBalancerWithNodeGroup, it always timed out on my 
> machine.  Looking at the Jenkins report [build 
> #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/],
>  TestBalancerWithNodeGroup somehow was skipped so that the problem was not 
> detected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4328) TestLargeBlock#testLargeBlockSize is timing out

2013-01-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550470#comment-13550470
 ] 

Chris Nauroth commented on HDFS-4328:
-

My last comment wasn't entirely correct.  It's not an infinite loop.  It's 
making progress, but this test uses a very large block (>2 GB), and the 
throttler is just doing its job limiting bandwidth consumption.

I think the real problem is that {{DataTransferThrottler#throttle}} is ignoring 
an {{InterruptedException}}.  {{DataBlockScanner}} interrupts its thread to 
signal shutdown, so if that thread happens to be waiting in 
{{DataTransferThrottler#throttle}} at the time of interruption, then the signal 
gets ignored, and block verification just keeps on running.

I'll work on a patch to handle the {{InterruptedException}} by aborting any 
block verification in progress and allowing shutdown to proceed.  (That's 
effectively what the behavior was before HDFS-4274, because the thread was a 
daemon.)
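A minimal sketch of that fix, assuming a simplified throttler (the real class 
is DataTransferThrottler; the class name, period length, and method shape below 
are illustrative):

```java
/**
 * Simplified bandwidth throttler sketch.  The key change: instead of
 * swallowing InterruptedException inside the wait loop, restore the
 * interrupt status and return so shutdown can propagate.
 */
class Throttler {
    private static final long PERIOD_MS = 500;
    private final long bytesPerPeriod;
    private long bytesThisPeriod;
    private long periodEnd;

    Throttler(long bytesPerSec) {
        this.bytesPerPeriod = bytesPerSec * PERIOD_MS / 1000;
        this.periodEnd = System.currentTimeMillis() + PERIOD_MS;
    }

    synchronized void throttle(long numBytes) {
        bytesThisPeriod += numBytes;
        while (bytesThisPeriod > bytesPerPeriod) {
            long now = System.currentTimeMillis();
            if (now >= periodEnd) {
                // Period rolled over: reset the budget and stop waiting.
                periodEnd = now + PERIOD_MS;
                bytesThisPeriod = 0;
                break;
            }
            try {
                wait(periodEnd - now);
            } catch (InterruptedException e) {
                // Previously this was ignored, so block verification kept
                // running after shutdown.  Restore the interrupt status and
                // bail out so the caller can observe the shutdown signal.
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}
```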


> TestLargeBlock#testLargeBlockSize is timing out
> ---
>
> Key: HDFS-4328
> URL: https://issues.apache.org/jira/browse/HDFS-4328
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Jason Lowe
>Assignee: Chris Nauroth
>
> For some time now TestLargeBlock#testLargeBlockSize has been timing out on 
> trunk.  It is getting hung up during cluster shutdown, and after 15 minutes 
> surefire kills it and causes the build to fail since it exited uncleanly.
> In addition to fixing the hang, we should consider adding a timeout parameter 
> to the @Test decorator for this test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4333) Using right default value for creating files in HDFS

2013-01-10 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers resolved HDFS-4333.
--

Resolution: Duplicate

Sounds good. Closing this issue as a duplicate then.

> Using right default value for creating files in HDFS
> 
>
> Key: HDFS-4333
> URL: https://issues.apache.org/jira/browse/HDFS-4333
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.2-alpha
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>Priority: Minor
>
> The default permission to create a file should be 0666 rather than 0777. 
> HADOOP-9155 added a default permission for files and changed 
> localfilesystem.create to use this default value; this jira makes the similar 
> change for hdfs.
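A small sketch of why the 0666 default matters (illustrative only; HDFS applies 
the umask through its permission classes, not a helper like this): the 
effective mode is the default ANDed with the complement of the umask, so a 0777 
default wrongly leaves the execute bits set on plain files.

```java
/** Illustrative helper: compute the effective mode from default and umask. */
class PermDefaults {
    static int apply(int defaultPerm, int umask) {
        // Standard POSIX rule: effective mode = default & ~umask.
        return defaultPerm & ~umask;
    }
}
```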

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4356) BlockReaderLocal should use passed file descriptors rather than paths

2013-01-10 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4356:
---

Attachment: _04e.patch

* changed hashCode to just use Block, as Todd suggested
* actually use {{BlockReader#available()}} (good catch, Todd.)

> BlockReaderLocal should use passed file descriptors rather than paths
> -
>
> Key: HDFS-4356
> URL: https://issues.apache.org/jira/browse/HDFS-4356
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, hdfs-client, performance
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: 04b-cumulative.patch, _04b.patch, _04c.patch, 
> 04-cumulative.patch, 04d-cumulative.patch, _04e.patch, 04f-cumulative.patch, 
> 04g-cumulative.patch
>
>
> {{BlockReaderLocal}} should use file descriptors passed over UNIX domain 
> sockets rather than paths.  We also need some configuration options for these 
> UNIX domain sockets.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4356) BlockReaderLocal should use passed file descriptors rather than paths

2013-01-10 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4356:
---

Attachment: _04c.patch

> BlockReaderLocal should use passed file descriptors rather than paths
> -
>
> Key: HDFS-4356
> URL: https://issues.apache.org/jira/browse/HDFS-4356
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, hdfs-client, performance
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: 04b-cumulative.patch, _04b.patch, _04c.patch, 
> 04-cumulative.patch, 04d-cumulative.patch, 04f-cumulative.patch, 
> 04g-cumulative.patch
>
>
> {{BlockReaderLocal}} should use file descriptors passed over UNIX domain 
> sockets rather than paths.  We also need some configuration options for these 
> UNIX domain sockets.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4356) BlockReaderLocal should use passed file descriptors rather than paths

2013-01-10 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4356:
---

Attachment: (was: _04c.patch)

> BlockReaderLocal should use passed file descriptors rather than paths
> -
>
> Key: HDFS-4356
> URL: https://issues.apache.org/jira/browse/HDFS-4356
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, hdfs-client, performance
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: 04b-cumulative.patch, _04b.patch, _04c.patch, 
> 04-cumulative.patch, 04d-cumulative.patch, 04f-cumulative.patch, 
> 04g-cumulative.patch
>
>
> {{BlockReaderLocal}} should use file descriptors passed over UNIX domain 
> sockets rather than paths.  We also need some configuration options for these 
> UNIX domain sockets.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4356) BlockReaderLocal should use passed file descriptors rather than paths

2013-01-10 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4356:
---

Attachment: _04c.patch

> BlockReaderLocal should use passed file descriptors rather than paths
> -
>
> Key: HDFS-4356
> URL: https://issues.apache.org/jira/browse/HDFS-4356
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, hdfs-client, performance
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: 04b-cumulative.patch, _04b.patch, _04c.patch, 
> 04-cumulative.patch, 04d-cumulative.patch, 04f-cumulative.patch, 
> 04g-cumulative.patch
>
>
> {{BlockReaderLocal}} should use file descriptors passed over UNIX domain 
> sockets rather than paths.  We also need some configuration options for these 
> UNIX domain sockets.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4356) BlockReaderLocal should use passed file descriptors rather than paths

2013-01-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13550368#comment-13550368
 ] 

Colin Patrick McCabe commented on HDFS-4356:


bq. No need to log this here, especially at ERROR level. The exception will get 
logged in the catch clause at the callsite already.

changed to debug

bq. Can you add an isDebugEnabled log here? It's on the hot path for random 
read.

This is the fallthrough case for when we didn't understand the reply.  It is only 
added as a "just in case" -- it's never intended to actually happen.  If 
anything were on the hot path it would be the {{ERROR_UNSUPPORTED}} case, but that 
seems a lot like a misconfiguration to me.  In any case, we only try once every 
15 minutes or something, so I doubt one log message every 15 minutes on 
misconfigured clusters will cause issues.

bq. why'd you remove this comment?

It was replaced by this comment:
{code}
// We retry several times here.
// On the first nCachedConnRetry times, we try to fetch a socket from
// the socketCache and use it.  This may fail, since the old socket may
// have been closed by the peer.
// After that, we try to create a new socket using newPeer().
// This may create either a TCP socket or a UNIX domain socket, depending
// on the configuration and whether the peer is remote.
// If we try to create a UNIX domain socket and fail, we will not try that 
// again.  Instead, we'll try to create a TCP socket.  Only after we've 
// failed to create a TCP-based BlockReader will we throw an IOException
// from this function.  Throwing an IOException from here is basically
// equivalent to declaring the DataNode bad.
{code}

Let me know if you have ideas for improving this text...
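The retry order that comment describes can be sketched roughly as follows 
(hypothetical names and signatures; the real logic lives in the DFSClient's 
block reader setup, and a null here just models a dead or unavailable socket):

```java
import java.io.IOException;
import java.util.Deque;
import java.util.function.Supplier;

/** Illustrative sketch of the cached-then-domain-then-TCP retry order. */
class RetrySketch {
    static String connect(Deque<String> socketCache, int nCachedConnRetry,
                          Supplier<String> newDomainPeer,
                          Supplier<String> newTcpPeer) throws IOException {
        // First nCachedConnRetry attempts: reuse cached sockets, which may
        // have been closed by the peer in the meantime.
        for (int i = 0; i < nCachedConnRetry && !socketCache.isEmpty(); i++) {
            String cached = socketCache.poll();
            if (cached != null) {
                return cached;
            }
        }
        // Next, try a UNIX domain socket once; on failure, don't retry it.
        try {
            String domain = newDomainPeer.get();
            if (domain != null) {
                return domain;
            }
        } catch (RuntimeException e) {
            // Fall through to a TCP socket instead.
        }
        // Only after the TCP attempt also fails do we throw, which is
        // effectively declaring the DataNode bad.
        String tcp = newTcpPeer.get();
        if (tcp == null) {
            throw new IOException("failed to connect: DataNode considered bad");
        }
        return tcp;
    }
}
```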

bq. [FileInputStreamCache comments]

* Added JavaDoc
* fixed remove bug
* fixed {{FileInputStreamCache#Key#equals}}
* IOUtils.cleanup can just be passed an array

bq. Find a better spot for this? Seems like not quite the right place.

Hmm, where would be a better spot?  We already tried in {{Block}}, then 
{{ExtendedBlock}}.  I think it's not a big deal until we have another format 
version (like combined checksums + data).

> BlockReaderLocal should use passed file descriptors rather than paths
> -
>
> Key: HDFS-4356
> URL: https://issues.apache.org/jira/browse/HDFS-4356
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, hdfs-client, performance
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: 04b-cumulative.patch, _04b.patch, 04-cumulative.patch, 
> 04d-cumulative.patch, 04f-cumulative.patch, 04g-cumulative.patch
>
>
> {{BlockReaderLocal}} should use file descriptors passed over UNIX domain 
> sockets rather than paths.  We also need some configuration options for these 
> UNIX domain sockets.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4381) Add javadoc for FSImageFormat with description about the FSImage format with/without localNameINodes

2013-01-10 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-4381:
---

 Summary: Add javadoc for FSImageFormat with description about the 
FSImage format with/without localNameINodes
 Key: HDFS-4381
 URL: https://issues.apache.org/jira/browse/HDFS-4381
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4353) Encapsulate connections to peers in Peer and PeerServer classes

2013-01-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549959#comment-13549959
 ] 

Colin Patrick McCabe commented on HDFS-4353:


This patch refactors some code in the {{DFSClient}} and the DataNode's 
{{DataXceiver}}.  The refactor encapsulates connections to peers into a single 
class named {{Peer}}.

Suresh, please excuse me if I'm covering things you already know, but I want to 
give some context to random people reading this JIRA.  Java has no standard 
mechanism for setting write timeouts on blocking sockets.  So we usually 
wrap our sockets in {{org.apache.hadoop.net.SocketOutputStream}}.  This class 
sets the {{Socket}} to nonblocking and simulates blocking I/O with a 
timeout.  (There is also a parallel 
{{org.apache.hadoop.net.SocketInputStream}}.)  However, we can't *always* do 
this, since some Sockets cannot be used in non-blocking mode -- for example, the 
SOCKS socket classes don't support this.  The other thing that we do a lot 
of is wrapping output and input streams in encrypted streams.

The end result of this is that we end up passing around a lot of objects just 
to represent a single connection to a Peer.  {{IOStreamPair}} is a good example 
of this.  We also end up using {{instanceof}} a lot because we're dealing 
with types that don't have a common ancestor.  This refactor encapsulates all 
of those objects in a single object, the {{Peer}}.  This avoids the need to use 
{{instanceof}} to set socket timeouts and other properties.

The main reason for doing this refactor now is that {{DomainSocket}}, which is 
introduced by HDFS-4354, doesn't inherit from {{Socket}}.  We made the decision 
not to inherit from {{Socket}} because inheriting would require us to rely on 
non-public JVM classes.  There is more discussion on HDFS-347 about this 
issue, if you're curious.

Specific changes:

{{PeerServer}}: a class that creates {{Peers}}.  {{TcpPeerServer}} is basically 
a wrapper around {{ServerSocket}}.  The next patch introduces another subclass, 
 {{DomainPeerServer}}.

{{BlockReader#close}}: now returns the Peer to the PeerCache directly.  This 
replaces the multi-step process involving {{hasSentStatusCode}}, 
{{takeSocket}}, and {{getStreams}}.

{{SocketCache}}: was renamed to {{PeerCache}}.  Now caches based on 
{{DatanodeID}} rather than socket address.  This is needed to prepare the way 
for putting DomainSockets into the cache.  Aside from that, it should be 
very similar.
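As a rough sketch of what such an encapsulation might look like (interface and 
method names below are illustrative, not the exact patch API), a single 
{{Peer}} bundles the streams and timeout handling so callers no longer need 
{{instanceof}} checks on the underlying socket type:

```java
import java.io.Closeable;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

/**
 * Illustrative Peer interface: one object per connection, regardless of
 * whether the transport is a TCP socket or a UNIX domain socket.
 */
interface Peer extends Closeable {
    InputStream getInputStream() throws IOException;
    OutputStream getOutputStream() throws IOException;
    void setReadTimeout(int timeoutMs) throws IOException;
    void setWriteTimeout(int timeoutMs) throws IOException;
    String getRemoteAddressString();
}

/** A PeerServer hands out Peers; TCP and domain-socket variants can coexist. */
interface PeerServer extends Closeable {
    Peer accept() throws IOException;
}

/** Trivial in-memory Peer, just to make the interface concrete. */
class LoopbackPeer implements Peer {
    private final java.io.ByteArrayOutputStream out =
        new java.io.ByteArrayOutputStream();

    public java.io.ByteArrayInputStream getInputStream() {
        // Reads back whatever was written so far.
        return new java.io.ByteArrayInputStream(out.toByteArray());
    }
    public java.io.ByteArrayOutputStream getOutputStream() { return out; }
    public void setReadTimeout(int timeoutMs) {}
    public void setWriteTimeout(int timeoutMs) {}
    public String getRemoteAddressString() { return "loopback"; }
    public void close() {}
}
```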


> Encapsulate connections to peers in Peer and PeerServer classes
> ---
>
> Key: HDFS-4353
> URL: https://issues.apache.org/jira/browse/HDFS-4353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, hdfs-client
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: _02a.patch, 02b-cumulative.patch, 02c.patch, 02c.patch, 
> 02-cumulative.patch, 02d.patch, 02e.patch, 02f.patch
>
>
> Encapsulate connections to peers into the {{Peer}} and {{PeerServer}} 
> classes.  Since many Java classes may be involved with these connections, it 
> makes sense to create a container for them.  For example, a connection to a 
> peer may have an input stream, output stream, readablebytechannel, encrypted 
> output stream, and encrypted input stream associated with it.
> This makes us less dependent on the {{NetUtils}} methods which use 
> {{instanceof}} to manipulate socket and stream states based on the runtime 
> type.  It also paves the way to introduce UNIX domain sockets, which don't 
> inherit from {{java.net.Socket}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4377) Some trivial DN comment cleanup

2013-01-10 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549946#comment-13549946
 ] 

Eli Collins commented on HDFS-4377:
---

Test failures are unrelated. Good to go, Todd?

> Some trivial DN comment cleanup
> ---
>
> Key: HDFS-4377
> URL: https://issues.apache.org/jira/browse/HDFS-4377
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Trivial
> Attachments: hdfs-4377.txt, hdfs-4377.txt
>
>
> DataStorage.java
> - The "initilized" member is misspelled
> - Comment what the storageID member is
> DataNode.java
> - Cleanup createNewStorageId comment (should mention the port is included and 
> is overly verbose)
> BlockManager.java
> - TreeSet in the comment should be TreeMap



[jira] [Commented] (HDFS-3429) DataNode reads checksums even if client does not need them

2013-01-10 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549902#comment-13549902
 ] 

Todd Lipcon commented on HDFS-3429:
---

Great, thanks Liang for the help with testing! I think this needs to be rebased 
a little bit before it's committed, but I'll work on it.

> DataNode reads checksums even if client does not need them
> --
>
> Key: HDFS-3429
> URL: https://issues.apache.org/jira/browse/HDFS-3429
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, performance
>Affects Versions: 2.0.0-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-3429-0.20.2.patch, hdfs-3429-0.20.2.patch, 
> hdfs-3429.txt, hdfs-3429.txt, hdfs-3429.txt
>
>
> Currently, even if the client does not want to verify checksums, the datanode 
> reads them anyway and sends them over the wire. This means that performance 
> improvements like HBase's application-level checksums don't have much benefit 
> when reading through the datanode, since the DN is still causing seeks into 
> the checksum file.
> (Credit goes to Dhruba for discovering this - filing on his behalf)



[jira] [Assigned] (HDFS-4328) TestLargeBlock#testLargeBlockSize is timing out

2013-01-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth reassigned HDFS-4328:
---

Assignee: Chris Nauroth

> TestLargeBlock#testLargeBlockSize is timing out
> ---
>
> Key: HDFS-4328
> URL: https://issues.apache.org/jira/browse/HDFS-4328
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Jason Lowe
>Assignee: Chris Nauroth
>
> For some time now TestLargeBlock#testLargeBlockSize has been timing out on 
> trunk.  It is getting hung up during cluster shutdown, and after 15 minutes 
> surefire kills it and causes the build to fail since it exited uncleanly.
> In addition to fixing the hang, we should consider adding a timeout parameter 
> to the @Test decorator for this test.



[jira] [Commented] (HDFS-4328) TestLargeBlock#testLargeBlockSize is timing out

2013-01-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549886#comment-13549886
 ] 

Chris Nauroth commented on HDFS-4328:
-

Thread dumps show the test hanging when {{DataBlockScanner#shutdown}} tries to 
join with the {{blockScannerThread}}:

{noformat}
"main" prio=5 tid=7fd86d800800 nid=0x10efc1000 in Object.wait() [10efbe000]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1210)
- locked <7c3965cd8> (a java.lang.Thread)
at java.lang.Thread.join(Thread.java:1263)
at 
org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.shutdown(DataBlockScanner.java:251)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownDataBlockScanner(DataNode.java:490)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownPeriodicScanners(DataNode.java:462)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:1104)
...
{noformat}

Meanwhile in the {{blockScannerThread}}, it's stuck in an infinite wait loop in 
{{DataTransferThrottler#throttle}}:

{noformat}
"Thread-60" daemon prio=5 tid=7fd86c1a6800 nid=0x11c378000 in Object.wait() 
[11c377000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at 
org.apache.hadoop.hdfs.util.DataTransferThrottler.throttle(DataTransferThrottler.java:98)
- locked <7c3c841a0> (a 
org.apache.hadoop.hdfs.util.DataTransferThrottler)
at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:526)
at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:653)
at 
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.verifyBlock(BlockPoolSliceScanner.java:397)
at 
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.verifyFirstBlock(BlockPoolSliceScanner.java:476)
at 
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.scan(BlockPoolSliceScanner.java:633)
at 
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.scanBlockPoolSlice(BlockPoolSliceScanner.java:599)
at 
org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:101)
...
{noformat}

It's likely that this infinite loop problem existed before the HDFS-4274 patch, 
but {{blockScannerThread}} was a daemon thread, so it didn't block datanode 
shutdown.  With the HDFS-4274 patch, datanode shutdown joins this thread and 
waits for it to finish, so the infinite loop now blocks shutdown.

I need to keep investigating why {{DataTransferThrottler#throttle}} is stuck in 
an infinite wait loop.
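For reference, a stripped-down sketch of the kind of wait loop involved (simplified from the stack trace above; this is not the actual {{DataTransferThrottler}} code, and the field names are illustrative): when the current period's byte budget is exhausted, the caller waits on the throttler monitor until the period rolls over, which is the TIMED_WAITING state seen in the dump.

```java
// Hypothetical minimal throttler, illustrating the TIMED_WAITING state in
// the thread dump above.  Callers charge bytes against a per-period budget
// and wait() on the monitor until the period ends when the budget runs out.
public class ThrottleSketch {
    private final long bytesPerPeriod;
    private final long periodMs = 500;
    private long curReserve;
    private long periodEndMs;

    ThrottleSketch(long bytesPerSec) {
        this.bytesPerPeriod = bytesPerSec * periodMs / 1000;
        this.curReserve = bytesPerPeriod;
        this.periodEndMs = System.currentTimeMillis() + periodMs;
    }

    synchronized void throttle(long numBytes) throws InterruptedException {
        curReserve -= numBytes;
        while (curReserve <= 0) {
            long now = System.currentTimeMillis();
            if (now < periodEndMs) {
                wait(periodEndMs - now);   // the wait seen in the thread dump
            } else {
                // Period rolled over: refill the budget and re-check.
                periodEndMs = now + periodMs;
                curReserve += bytesPerPeriod;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ThrottleSketch t = new ThrottleSketch(1_000_000); // ~1 MB/s budget
        long start = System.currentTimeMillis();
        for (int i = 0; i < 4; i++) {
            t.throttle(250_000);  // four 250 KB sends exceed one period's budget
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println(elapsed >= 400 ? "throttled" : "not throttled");
    }
}
```

If the thread running this loop is non-daemon and nothing interrupts it, shutdown joining on it can hang for as long as the loop keeps waiting.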





[jira] [Commented] (HDFS-4274) BlockPoolSliceScanner does not close verification log during shutdown

2013-01-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549876#comment-13549876
 ] 

Chris Nauroth commented on HDFS-4274:
-

Aaron, thanks for the note here, which made sure I noticed the new jira.  I'm 
going to enter some preliminary analysis on HDFS-4328.


> BlockPoolSliceScanner does not close verification log during shutdown
> -
>
> Key: HDFS-4274
> URL: https://issues.apache.org/jira/browse/HDFS-4274
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0
>
> Attachments: HDFS-4274.1.patch, HDFS-4274.2.patch
>
>
> {{BlockPoolSliceScanner}} holds open a handle to a verification log.  This 
> file is not getting closed during process shutdown.



[jira] [Commented] (HDFS-4261) TestBalancerWithNodeGroup times out

2013-01-10 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549833#comment-13549833
 ] 

Ted Yu commented on HDFS-4261:
--

Minor:
{code}
+  if (notChangedIterations >= MAX_NOT_CHANGED_INTERATIONS) {
{code}
Looks like the constant is misspelled - there is an extra N after the I.

> TestBalancerWithNodeGroup times out
> ---
>
> Key: HDFS-4261
> URL: https://issues.apache.org/jira/browse/HDFS-4261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, 
> HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, 
> HDFS-4261-v7.patch, HDFS-4261-v8.patch, jstack-mac-18567, jstack-win-5488, 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac,
>  
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win,
>  test-balancer-with-node-group-timeout.txt
>
>
> When I manually ran TestBalancerWithNodeGroup, it always timed out in my 
> machine.  Looking at the Jenkins report [build 
> #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/],
>  TestBalancerWithNodeGroup somehow was skipped so that the problem was not 
> detected.



[jira] [Updated] (HDFS-2757) Cannot read a local block that's being written to when using the local read short circuit

2013-01-10 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-2757:
-

Fix Version/s: 0.23.6

> Cannot read a local block that's being written to when using the local read 
> short circuit
> -
>
> Key: HDFS-2757
> URL: https://issues.apache.org/jira/browse/HDFS-2757
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Jean-Daniel Cryans
>Assignee: Jean-Daniel Cryans
> Fix For: 1.2.0, 2.0.2-alpha, 0.23.6
>
> Attachments: hdfs-2757-b1.txt, HDFS-2757-branch-1.patch, 
> HDFS-2757-branch-1-v2.patch, HDFS-2757-trunk.patch, hdfs-2757.txt
>
>
> When testing the tail'ing of a local file with the read short circuit on, I 
> get:
> {noformat}
> 2012-01-06 00:17:31,598 WARN org.apache.hadoop.hdfs.DFSClient: 
> BlockReaderLocal requested with incorrect offset:  Offset 0 and length 
> 8230400 don't match block blk_-2842916025951313698_454072 ( blockLen 124 )
> 2012-01-06 00:17:31,598 WARN org.apache.hadoop.hdfs.DFSClient: 
> BlockReaderLocal: Removing blk_-2842916025951313698_454072 from cache because 
> local file 
> /export4/jdcryans/dfs/data/blocksBeingWritten/blk_-2842916025951313698 could 
> not be opened.
> 2012-01-06 00:17:31,599 INFO org.apache.hadoop.hdfs.DFSClient: Failed to read 
> block blk_-2842916025951313698_454072 on local machine java.io.IOException:  
> Offset 0 and length 8230400 don't match block blk_-2842916025951313698_454072 
> ( blockLen 124 )
> 2012-01-06 00:17:31,599 INFO org.apache.hadoop.hdfs.DFSClient: Try reading 
> via the datanode on /10.4.13.38:51010
> java.io.EOFException: 
> hdfs://sv4r11s38:9100/hbase-1/.logs/sv4r13s38,62023,1325808100311/sv4r13s38%2C62023%2C1325808100311.1325808100818,
>  entryStart=7190409, pos=8230400, end=8230400, edit=5
> {noformat}



[jira] [Commented] (HDFS-2757) Cannot read a local block that's being written to when using the local read short circuit

2013-01-10 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549819#comment-13549819
 ] 

Kihwal Lee commented on HDFS-2757:
--

Pulled into branch-0.23 (23.6).




[jira] [Resolved] (HDFS-3636) TestMetricsSystemImpl#testInitFirst failed

2013-01-10 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins resolved HDFS-3636.
---

Resolution: Duplicate

Looks like HADOOP-8981 fixes this. Resolving this as a dupe.

> TestMetricsSystemImpl#testInitFirst failed
> --
>
> Key: HDFS-3636
> URL: https://issues.apache.org/jira/browse/HDFS-3636
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eli Collins
>
> Saw this TestMetricsSystemImpl test fail recently:
> Error Message
>  Wanted but not invoked: metricsSink.putMetrics(); -> at 
> org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.testInitFirst(TestMetricsSystemImpl.java:101)
>  Actually, there were zero interactions with this mock. 
> Stacktrace
> Wanted but not invoked:
> metricsSink.putMetrics();
> -> at 
> org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.testInitFirst(TestMetricsSystemImpl.java:101)
> Actually, there were zero interactions with this mock.



[jira] [Commented] (HDFS-4367) GetDataEncryptionKeyResponseProto does not handle null response

2013-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549752#comment-13549752
 ] 

Hudson commented on HDFS-4367:
--

Integrated in Hadoop-trunk-Commit #3213 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3213/])
HDFS-4367. GetDataEncryptionKeyResponseProto does not handle null response. 
Contributed by Suresh Srinivas. (Revision 1431459)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431459
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto


> GetDataEncryptionKeyResponseProto  does not handle null response
> 
>
> Key: HDFS-4367
> URL: https://issues.apache.org/jira/browse/HDFS-4367
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.2-alpha
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
>Priority: Blocker
> Fix For: 2.0.3-alpha
>
> Attachments: HDFS-4367.patch
>
>
> GetDataEncryptionKeyResponseProto member dataEncryptionKey should be optional 
> to handle null response.



[jira] [Commented] (HDFS-4367) GetDataEncryptionKeyResponseProto does not handle null response

2013-01-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549750#comment-13549750
 ] 

Suresh Srinivas commented on HDFS-4367:
---

Also, if you have time, please review HDFS-4369 as well.




[jira] [Commented] (HDFS-4367) GetDataEncryptionKeyResponseProto does not handle null response

2013-01-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549746#comment-13549746
 ] 

Suresh Srinivas commented on HDFS-4367:
---

Aaron, can you please review HDFS-4364 as well?




[jira] [Updated] (HDFS-4367) GetDataEncryptionKeyResponseProto does not handle null response

2013-01-10 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4367:
--

   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
 Release Note: Member dataEncryptionKey of the protobuf message 
GetDataEncryptionKeyResponseProto is made optional instead of required. This 
incompatible change is not likely to affect existing users (those using the 
HDFS FileSystem and other public APIs).
 Hadoop Flags: Incompatible change,Reviewed  (was: Incompatible change)
   Status: Resolved  (was: Patch Available)

I committed the patch to trunk and branch-2. Thank you Aaron for the review.




[jira] [Commented] (HDFS-4367) GetDataEncryptionKeyResponseProto does not handle null response

2013-01-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549737#comment-13549737
 ] 

Suresh Srinivas commented on HDFS-4367:
---

Aaron, in the wire protocol response a field is changing from required to 
optional.  That is an incompatible change.  It probably does not affect 
correctly configured DFSClient users.  However, for folks who are using 
protobuf directly (or planning to), this is a change.  I will note in the 
release note that this will not affect DFSClient users.
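To illustrate the client-side difference (using a hand-rolled stand-in class, not the protobuf-generated code), an optional field lets the server leave the value unset when it is null, and the client tests for presence instead of assuming the field exists:

```java
import java.util.Optional;

// Self-contained stand-in (NOT the generated protobuf classes), modeling why
// an optional field can carry a null response while a required one cannot:
// the server simply leaves the field unset, and the client checks presence.
class ResponseSketch {
    private final String dataEncryptionKey;  // null means "field unset"

    private ResponseSketch(String key) { this.dataEncryptionKey = key; }

    // Mirrors the builder style of protobuf-generated Java code.
    static ResponseSketch build(String keyOrNull) {
        return new ResponseSketch(keyOrNull);
    }

    boolean hasDataEncryptionKey() { return dataEncryptionKey != null; }

    Optional<String> getDataEncryptionKey() {
        return Optional.ofNullable(dataEncryptionKey);
    }
}

public class OptionalFieldSketch {
    public static void main(String[] args) {
        // Server side: data encryption is disabled, so the key is null.
        ResponseSketch resp = ResponseSketch.build(null);

        // Client side: check presence instead of dereferencing blindly.
        System.out.println(resp.hasDataEncryptionKey() ? "key present" : "no key");
    }
}
```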

> GetDataEncryptionKeyResponseProto  does not handle null response
> 
>
> Key: HDFS-4367
> URL: https://issues.apache.org/jira/browse/HDFS-4367
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.2-alpha
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
>Priority: Blocker
> Attachments: HDFS-4367.patch
>
>
> GetDataEncryptionKeyResponseProto member dataEncryptionKey should be optional 
> to handle null response.



[jira] [Commented] (HDFS-4353) Encapsulate connections to peers in Peer and PeerServer classes

2013-01-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549707#comment-13549707
 ] 

Suresh Srinivas commented on HDFS-4353:
---

Colin, can you add high level description of what you have changed in this 
patch?




[jira] [Commented] (HDFS-347) DFS read performance suboptimal when client co-located on nodes with data

2013-01-10 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549700#comment-13549700
 ] 

Suresh Srinivas commented on HDFS-347:
--

bq.  Keep in mind this work has been under review here for 2-3 months now, and 
there are 100+ watchers on this JIRA, so I don't anticipate needing a lengthy 
review period like we did for other branches.
Just because there are 100+ watchers does not mean much: I do not see design 
or review comments from more than a handful of people.  I plan to review this 
in a timely manner, but if the review requires time, that time should be 
given instead of hurrying the reviewer.

> DFS read performance suboptimal when client co-located on nodes with data
> -
>
> Key: HDFS-347
> URL: https://issues.apache.org/jira/browse/HDFS-347
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs-client, performance
>Reporter: George Porter
>Assignee: Colin Patrick McCabe
> Attachments: all.tsv, BlockReaderLocal1.txt, HADOOP-4801.1.patch, 
> HADOOP-4801.2.patch, HADOOP-4801.3.patch, HDFS-347-016_cleaned.patch, 
> HDFS-347.016.patch, HDFS-347.017.clean.patch, HDFS-347.017.patch, 
> HDFS-347.018.clean.patch, HDFS-347.018.patch2, HDFS-347.019.patch, 
> HDFS-347.020.patch, HDFS-347.021.patch, HDFS-347.022.patch, 
> HDFS-347.024.patch, HDFS-347.025.patch, HDFS-347.026.patch, 
> HDFS-347.027.patch, HDFS-347.029.patch, HDFS-347.030.patch, 
> HDFS-347.033.patch, HDFS-347.035.patch, HDFS-347-branch-20-append.txt, 
> hdfs-347.png, hdfs-347.txt, local-reads-doc
>
>
> One of the major strategies Hadoop uses to get scalable data processing is to 
> move the code to the data.  However, putting the DFS client on the same 
> physical node as the data blocks it acts on doesn't improve read performance 
> as much as expected.
> After looking at Hadoop and O/S traces (via HADOOP-4049), I think the problem 
> is due to the HDFS streaming protocol causing many more read I/O operations 
> (iops) than necessary.  Consider the case of a DFSClient fetching a 64 MB 
> disk block from the DataNode process (running in a separate JVM) running on 
> the same machine.  The DataNode will satisfy the single disk block request by 
> sending data back to the HDFS client in 64-KB chunks.  In BlockSender.java, 
> this is done in the sendChunk() method, relying on Java's transferTo() 
> method.  Depending on the host O/S and JVM implementation, transferTo() is 
> implemented as either a sendfilev() syscall or a pair of mmap() and write().  
> In either case, each chunk is read from the disk by issuing a separate I/O 
> operation for each chunk.  The result is that the single request for a 64-MB 
> block ends up hitting the disk as over a thousand smaller requests for 64-KB 
> each.
> Since the DFSClient runs in a different JVM and process than the DataNode, 
> shuttling data from the disk to the DFSClient also results in context 
> switches each time network packets get sent (in this case, the 64-kb chunk 
> turns into a large number of 1500 byte packet send operations).  Thus we see 
> a large number of context switches for each block send operation.
> I'd like to get some feedback on the best way to address this, but I think 
> providing a mechanism for a DFSClient to directly open data blocks that 
> happen to be on the same machine.  It could do this by examining the set of 
> LocatedBlocks returned by the NameNode, marking those that should be resident 
> on the local host.  Since the DataNode and DFSClient (probably) share the 
> same hadoop configuration, the DFSClient should be able to find the files 
> holding the block data, and it could directly open them and send data back to 
> the client.  This would avoid the context switches imposed by the network 
> layer, and would allow for much larger read buffers than 64KB, which should 
> reduce the number of iops imposed by each read block operation.



[jira] [Commented] (HDFS-3272) Make it possible to state MIME type for a webhdfs OPEN operation's result

2013-01-10 Thread Jeff Markham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549688#comment-13549688
 ] 

Jeff Markham commented on HDFS-3272:


I think I had problems with setting up the Eclipse environment the first time, 
which is where the import noise came from.

Also, as Nicholas pointed out, I will make changes to WebHDFS 
(NamenodeWebHdfsMethods.get()) in addition to the ones I made to 
hadoop-hdfs-httpfs (HttpFSServer.get()).  Thanks for the feedback.

> Make it possible to state MIME type for a webhdfs OPEN operation's result
> -
>
> Key: HDFS-3272
> URL: https://issues.apache.org/jira/browse/HDFS-3272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 1.0.1, 2.0.2-alpha
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HDFS-3272.patch
>
>
> when you do a GET from the browser with webhdfs, you get the file, but it 
> comes over as a binary as the browser doesn't know what type it is. Having a 
> mime mapping table and such like would be one solution, but another is simply 
> to add a {{mime}} query parameter that would provide a string to be reflected 
> back to the caller as the Content-Type header in the HTTP response.
> e.g.
> {code}
> http://ranier:50070/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> would generate a 307 redirect to the datanode, with the 
> {code}
> http://dn1:50075/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> which would then generate the result
> {code}
> 200 OK
> Content-Type:text/csv
> GATE4,eb8bd736445f415e18886ba037f84829,55000,2007-01-14,14:01:54,
> GATE4,ec58edcce1049fa665446dc1fa690638,8030803000,2007-01-14,13:52:31,
> ...
> {code}



[jira] [Commented] (HDFS-4244) Support deleting snapshots

2013-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549624#comment-13549624
 ] 

Hudson commented on HDFS-4244:
--

Integrated in Hadoop-Hdfs-Snapshots-Branch-build #66 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-Snapshots-Branch-build/66/])
HDFS-4244. Support snapshot deletion.  Contributed by Jing Zhao (Revision 
1430953)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430953
Files : 
* 
/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/SnapshotCommands.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/CHANGES.HDFS-2802.txt
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/NameNodeMetrics.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeFileWithLink.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSnapshotPathINodes.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotTestHelper.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestNestedSnapshots.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshot.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotBlocksMap.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotRename.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshottableDirListing.java


> Support deleting snapshots
> --
>
> Key: HDFS-4244
> URL: https://issues.apache.org/jira/browse/HDFS-4244
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4244.001.patch, HDFS-4244.002.patch, 
> HDFS-4244.003.patch, HDFS-4244.004.patch, HDFS-4244.005.p

[jira] [Commented] (HDFS-4261) TestBalancerWithNodeGroup times out

2013-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549623#comment-13549623
 ] 

Hudson commented on HDFS-4261:
--

Integrated in Hadoop-Hdfs-trunk #1281 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1281/])
HDFS-4261. Fix bugs in Balancer causing infinite loop and 
TestBalancerWithNodeGroup timing out.  Contributed by Junping Du (Revision 
1430917)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430917
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithNodeGroup.java


> TestBalancerWithNodeGroup times out
> ---
>
> Key: HDFS-4261
> URL: https://issues.apache.org/jira/browse/HDFS-4261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, 
> HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, 
> HDFS-4261-v7.patch, HDFS-4261-v8.patch, jstack-mac-18567, jstack-win-5488, 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac,
>  
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win,
>  test-balancer-with-node-group-timeout.txt
>
>
> When I manually ran TestBalancerWithNodeGroup, it always timed out on my 
> machine.  Looking at the Jenkins report [build 
> #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/],
>  TestBalancerWithNodeGroup was somehow skipped, so the problem was not 
> detected.
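For the infinite-loop class of bug fixed here, the usual shape of the fix is a no-progress guard on the scheduling loop; the sketch below is an illustrative simplification only (see the committed Balancer.java for the real logic):

```java
/** Illustrative no-progress guard for an iterative balancing loop
 *  (hypothetical simplification, not the actual Balancer code). */
public class BalanceLoopSketch {
    interface Mover { long moveSomeBlocks(); } // returns bytes moved this pass

    /** Runs passes until one moves no bytes, or maxIterations is hit. */
    static int runUntilNoProgress(Mover mover, int maxIterations) {
        int iterations = 0;
        while (iterations < maxIterations) {
            iterations++;
            long moved = mover.moveSomeBlocks();
            if (moved == 0) {
                break; // no progress: without this exit the loop can spin forever
            }
        }
        return iterations;
    }

    public static void main(String[] args) {
        // A mover that makes progress twice, then stalls.
        long[] plan = {100, 50, 0, 0};
        final int[] i = {0};
        Mover m = () -> plan[Math.min(i[0]++, plan.length - 1)];
        if (runUntilNoProgress(m, 10) != 3) throw new AssertionError();
        System.out.println("ok");
    }
}
```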



[jira] [Commented] (HDFS-4032) Specify the charset explicitly rather than rely on the default

2013-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549620#comment-13549620
 ] 

Hudson commented on HDFS-4032:
--

Integrated in Hadoop-Hdfs-trunk #1281 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1281/])
HDFS-4032. Specify the charset explicitly rather than rely on the default. 
Contributed by Eli Collins (Revision 1431179)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431179
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferEncryptor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RollingLogsImpl.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ClusterJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RenewDelegationTokenServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsXmlLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/StatisticsEditsVisitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TextWriterImageVisitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/MD5FileUtils.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestPathComponents.java


> Specify the charset explicitly rather than rely on the default
> --
>
> Key: HDFS-4032
> URL: https://issues.apache.org/jira/browse/HDFS-4032
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-4032.txt, hdfs-4032.txt
>
>
> Findbugs 2 warns about relying on the default Java charset instead of 
> specifying it explicitly. Given that we're porting Hadoop to different 
> platforms it's better to be explicit.
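The kind of change this covers is mechanical: replace conversions that silently use the platform default charset with ones that name the charset explicitly. A minimal illustrative sketch (not the actual patch diff):

```java
import java.nio.charset.StandardCharsets;

public class CharsetSketch {
    // Fragile: uses the JVM's platform default charset, which varies by OS/locale,
    // so the bytes produced can differ between machines.
    static byte[] encodeImplicit(String s) {
        return s.getBytes();
    }

    // Portable: pins the encoding so behavior is identical on every platform.
    static byte[] encodeExplicit(String s) {
        return s.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] b = encodeExplicit("editsStored");
        if (b.length != 11) throw new AssertionError();
        if (!new String(b, StandardCharsets.UTF_8).equals("editsStored")) throw new AssertionError();
        System.out.println("ok");
    }
}
```

StandardCharsets.UTF_8 (available since Java 7) is preferable to Charset.forName("UTF-8"): it is a constant, cannot throw, and documents the intent at the call site.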



[jira] [Commented] (HDFS-4363) Combine PBHelper and HdfsProtoUtil and remove redundant methods

2013-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13549619#comment-13549619
 ] 

Hudson commented on HDFS-4363:
--

Integrated in Hadoop-Hdfs-trunk #1281 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1281/])
HDFS-4363. Combine PBHelper and HdfsProtoUtil and remove redundant methods. 
Contributed by Suresh Srinivas. (Revision 1431088)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1431088
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader2.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsProtoUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferEncryptor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtoUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Receiver.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/Sender.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocol/TestHdfsProtoUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java


> Combine PBHelper and HdfsProtoUtil and remove redundant methods
> ---
>
> Key: HDFS-4363
> URL: https://issues.apache.org/jira/browse/HDFS-4363
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Fix For: 2.0.3-alpha
>
> Attachments: HDFS-4363.patch, HDFS-4363.patch, HDFS-4363.patch, 
> HDFS-4363.patch
>
>
> There are many methods overlapping between PBHelper and HdfsProtoUtil. This 
> jira combines these two helper classes.


