[jira] Commented: (HDFS-1627) Fix NullPointerException in Secondary NameNode

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996738#comment-12996738
 ] 

Hadoop QA commented on HDFS-1627:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12471100/NPE_SNN.patch
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these core unit tests:
  org.apache.hadoop.hdfs.TestFileConcurrentReader

-1 contrib tests.  The patch failed contrib unit tests.

+1 system test framework.  The patch passed system test framework compile.

Test results: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/173//testReport/
Findbugs warnings: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/173//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/173//console

This message is automatically generated.

 Fix NullPointerException in Secondary NameNode
 --

 Key: HDFS-1627
 URL: https://issues.apache.org/jira/browse/HDFS-1627
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.23.0

 Attachments: NPE_SNN.patch


 Secondary NameNode should not reset namespace if no new image is downloaded 
 from the primary NameNode.
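 A minimal sketch of the guard this implies, with assumed method names (not 
 the actual patch):
 {code}
 // Hedged sketch: only reset and reload the checkpoint namespace when a
 // fresh image was actually fetched from the primary NameNode.
 boolean newImage = downloadCheckpointFiles(sig);  // assumed to report this
 if (newImage) {
   checkpointImage.reloadFromNewImage();           // hypothetical helper
 }
 doMerge(sig, newImage);
 {code}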

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1604) add Kerberos HTTP SPNEGO authentication support to Hadoop JT/NN/DN/TT web-consoles

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996740#comment-12996740
 ] 

Hadoop QA commented on HDFS-1604:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12469817/ha-hdfs-02.patch
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these core unit tests:
  
org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks
  org.apache.hadoop.hdfs.TestFileConcurrentReader

-1 contrib tests.  The patch failed contrib unit tests.

+1 system test framework.  The patch passed system test framework compile.

Test results: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/174//testReport/
Findbugs warnings: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/174//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/174//console

This message is automatically generated.

 add Kerberos HTTP SPNEGO authentication support to Hadoop JT/NN/DN/TT 
 web-consoles
 --

 Key: HDFS-1604
 URL: https://issues.apache.org/jira/browse/HDFS-1604
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 0.23.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: ha-hdfs-01.patch, ha-hdfs-02.patch, ha-hdfs.patch


 This JIRA is for the HDFS portion of HADOOP-7119

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1583) Improve backup-node sync performance by wrapping RPC parameters

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996742#comment-12996742
 ] 

Hadoop QA commented on HDFS-1583:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12468458/test-rpc.diff
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 9 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

-1 release audit.  The applied patch generated 3 release audit warnings 
(more than the trunk's current 0 warnings).

+1 core tests.  The patch passed core unit tests.

-1 contrib tests.  The patch failed contrib unit tests.

+1 system test framework.  The patch passed system test framework compile.

Test results: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/176//testReport/
Release audit warnings: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/176//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/176//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/176//console

This message is automatically generated.

 Improve backup-node sync performance by wrapping RPC parameters
 ---

 Key: HDFS-1583
 URL: https://issues.apache.org/jira/browse/HDFS-1583
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Liyin Liang
Assignee: Liyin Liang
 Fix For: 0.23.0

 Attachments: HDFS-1583-1.patch, HDFS-1583-2.patch, test-rpc.diff


 The journal edit records are sent by the active name-node to the backup-node 
 with RPC:
 {code}
 public void journal(NamenodeRegistration registration,
                     int jAction,
                     int length,
                     byte[] records) throws IOException;
 {code}
 During the name-node throughput benchmark, the size of the byte array 
 _records_ is around *8000*, which makes serialization and deserialization 
 time-consuming. I wrote a simple application to test RPC with a byte-array 
 parameter: when the size reached 8000, each RPC call needed about 6 ms, 
 while the name-node syncs 8K bytes to local disk in only 0.3~0.4 ms.
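 A sketch of the wrapping idea (the class name is hypothetical; the actual 
 patch may differ): bundling the parameters into one Writable lets the byte 
 array be written with a single bulk copy instead of per-element 
 serialization.
 {code}
 import java.io.DataInput;
 import java.io.DataOutput;
 import java.io.IOException;
 import org.apache.hadoop.io.Writable;

 // Hypothetical wrapper for the journal() parameters, for illustration only.
 class JournalRecords implements Writable {
   int jAction;
   int length;
   byte[] records;

   public void write(DataOutput out) throws IOException {
     out.writeInt(jAction);
     out.writeInt(length);
     out.write(records, 0, length);   // one bulk write of the edit records
   }

   public void readFields(DataInput in) throws IOException {
     jAction = in.readInt();
     length = in.readInt();
     records = new byte[length];
     in.readFully(records);           // one bulk read on the backup-node side
   }
 }
 {code}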

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1544) Ivy resolve force mode should be turned off by default

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996745#comment-12996745
 ] 

Hadoop QA commented on HDFS-1544:
-

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12466510/hdfs-1544-trunk-v2.patch
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed core unit tests.

-1 contrib tests.  The patch failed contrib unit tests.

+1 system test framework.  The patch passed system test framework compile.

Test results: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/178//testReport/
Findbugs warnings: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/178//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/178//console

This message is automatically generated.

 Ivy resolve force mode should be turned off by default
 --

 Key: HDFS-1544
 URL: https://issues.apache.org/jira/browse/HDFS-1544
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Luke Lu
Assignee: Luke Lu
 Attachments: hdfs-1544-trunk-v1.patch, hdfs-1544-trunk-v2.patch


 cf. HADOOP-7068

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1584) Need to check TGT and renew if needed when fetching delegation tokens using HFTP

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996748#comment-12996748
 ] 

Hadoop QA commented on HDFS-1584:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12468709/h1584-01.patch
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these core unit tests:
  org.apache.hadoop.cli.TestHDFSCLI
  org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
  org.apache.hadoop.hdfs.TestFileConcurrentReader

-1 contrib tests.  The patch failed contrib unit tests.

+1 system test framework.  The patch passed system test framework compile.

Test results: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/175//testReport/
Findbugs warnings: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/175//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/175//console

This message is automatically generated.

 Need to check TGT and renew if needed when fetching delegation tokens using 
 HFTP
 

 Key: HDFS-1584
 URL: https://issues.apache.org/jira/browse/HDFS-1584
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Reporter: Kan Zhang
Assignee: Kan Zhang
 Attachments: h1584-01.patch


 Currently, there is no checking on TGT validity when calling 
 getDelegationToken(). The TGT may expire and the call will fail.
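 A minimal sketch of the kind of check involved (the exact call site is an 
 assumption):
 {code}
 // Before fetching the delegation token over HFTP, verify the Kerberos TGT
 // and re-login from the keytab if it has expired.
 UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
 ugi.checkTGTAndReloginFromKeytab();
 // ... then proceed with getDelegationToken() ...
 {code}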

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1541) Not marking datanodes dead when namenode in safemode

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996758#comment-12996758
 ] 

Hadoop QA commented on HDFS-1541:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12467496/deadnodescheck.patch
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these core unit tests:
  org.apache.hadoop.hdfs.TestFileConcurrentReader

-1 contrib tests.  The patch failed contrib unit tests.

+1 system test framework.  The patch passed system test framework compile.

Test results: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/180//testReport/
Findbugs warnings: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/180//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/180//console

This message is automatically generated.

 Not marking datanodes dead when namenode in safemode
 

 Key: HDFS-1541
 URL: https://issues.apache.org/jira/browse/HDFS-1541
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.23.0
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.23.0

 Attachments: deadnodescheck.patch


 In a big cluster, when the namenode starts up, it takes a long time for the 
 namenode to process block reports from all datanodes. Because heartbeat 
 processing gets delayed, some datanodes are erroneously marked as dead and 
 later have to register again, thus wasting time.
 It would speed up startup if the checking of dead nodes were disabled while 
 the namenode is in safemode.
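 A sketch of the proposed guard, with hypothetical method names (not the 
 actual patch):
 {code}
 // In the heartbeat monitor: skip the dead-node scan while the namenode is
 // still in startup safemode processing initial block reports.
 void heartbeatCheck() {
   if (isInSafeMode()) {        // hypothetical accessor
     return;                    // do not mark datanodes dead yet
   }
   // ... existing scan that marks expired datanodes dead ...
 }
 {code}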

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1543) Reduce dev. cycle time by moving system testing artifacts from default build and push to maven for HDFS

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996768#comment-12996768
 ] 

Hadoop QA commented on HDFS-1543:
-

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12466656/hdfs-1543-trunk-v2.patch
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these core unit tests:
  org.apache.hadoop.cli.TestHDFSCLI
  org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
  org.apache.hadoop.hdfs.TestFileConcurrentReader

-1 contrib tests.  The patch failed contrib unit tests.

+1 system test framework.  The patch passed system test framework compile.

Test results: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/179//testReport/
Findbugs warnings: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/179//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/179//console

This message is automatically generated.

 Reduce dev. cycle time by moving system testing artifacts from default build 
 and push to maven for HDFS
 ---

 Key: HDFS-1543
 URL: https://issues.apache.org/jira/browse/HDFS-1543
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Arun C Murthy
Assignee: Luke Lu
 Attachments: HDFS-1543.patch, hdfs-1543-trunk-v1.patch, 
 hdfs-1543-trunk-v2.patch


 The current build always generates system testing artifacts and pushes them 
 to Maven. Most developers have no need for these artifacts and no users need 
 them. 
 Also, fault-injection tests seem to be running multiple times, which 
 increases the length of testing.

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1508) Ability to do savenamespace without being in safemode

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996781#comment-12996781
 ] 

Hadoop QA commented on HDFS-1508:
-

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12469107/savenamespaceWithoutSafemode5.txt
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 21 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/182//console

This message is automatically generated.

 Ability to do savenamespace without being in safemode
 -

 Key: HDFS-1508
 URL: https://issues.apache.org/jira/browse/HDFS-1508
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: dhruba borthakur
Assignee: dhruba borthakur
 Attachments: savenamespaceWithoutSafemode.txt, 
 savenamespaceWithoutSafemode2.txt, savenamespaceWithoutSafemode3.txt, 
 savenamespaceWithoutSafemode4.txt, savenamespaceWithoutSafemode5.txt


 In the current code, the administrator can run savenamespace only after 
 putting the namenode in safemode. This means that applications that are 
 writing to HDFS encounter errors because the NN is in safemode. We would 
 like to allow saveNamespace even when the namenode is not in safemode.
 The savenamespace command already acquires the FSNamesystem writelock. There 
 is no need to require that the namenode is in safemode too.
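 A sketch of the resulting flow (method names simplified; not the actual 
 patch):
 {code}
 // saveNamespace without requiring safemode: the FSNamesystem write lock
 // alone keeps the namespace quiescent while the image is saved.
 void saveNamespace() throws IOException {
   writeLock();
   try {
     getFSImage().saveFSImage();   // mutations are blocked by the lock
   } finally {
     writeUnlock();
   }
 }
 {code}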

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1497) Write pipeline sequence numbers should be sequential with no skips or duplicates

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996782#comment-12996782
 ] 

Hadoop QA commented on HDFS-1497:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12468045/hdfs-1497.txt
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/183//console

This message is automatically generated.

 Write pipeline sequence numbers should be sequential with no skips or 
 duplicates
 

 Key: HDFS-1497
 URL: https://issues.apache.org/jira/browse/HDFS-1497
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.20-append, 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hdfs-1497.txt, hdfs-1497.txt, hdfs-1497.txt, 
 hdfs-1497.txt, hdfs-1497.txt


 In HDFS-895 we discovered that multiple hflush() calls in a row without 
 intervening writes could cause a skip in sequence number. This doesn't seem 
 to have any direct consequences, but we should maintain and assert the 
 invariant that sequence numbers have no gaps or duplicates.
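 A sketch of the invariant to assert on the client side (field and method 
 names are hypothetical):
 {code}
 // Every packet queued for the write pipeline must carry exactly
 // lastQueuedSeqno + 1: no skips and no duplicates.
 private long lastQueuedSeqno = -1;

 void queuePacket(Packet p) {
   assert p.seqno == lastQueuedSeqno + 1
       : "packet seqno " + p.seqno + " does not follow " + lastQueuedSeqno;
   lastQueuedSeqno = p.seqno;
   // ... enqueue the packet ...
 }
 {code}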

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1477) Make NameNode Reconfigurable.

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996788#comment-12996788
 ] 

Hadoop QA commented on HDFS-1477:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12466342/HDFS-1477.3.patch
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these core unit tests:
  org.apache.hadoop.hdfs.TestFileConcurrentReader

-1 contrib tests.  The patch failed contrib unit tests.

+1 system test framework.  The patch passed system test framework compile.

Test results: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/184//testReport/
Findbugs warnings: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/184//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/184//console

This message is automatically generated.

 Make NameNode Reconfigurable.
 -

 Key: HDFS-1477
 URL: https://issues.apache.org/jira/browse/HDFS-1477
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.23.0
Reporter: Patrick Kling
Assignee: Patrick Kling
 Fix For: 0.23.0

 Attachments: HDFS-1477.2.patch, HDFS-1477.3.patch, HDFS-1477.patch


 Modify NameNode to implement the interface Reconfigurable proposed in 
 HADOOP-7001. This would allow us to change certain configuration properties 
 without restarting the name node.
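 A sketch under the interface shape proposed in HADOOP-7001 (names may 
 differ in the final version):
 {code}
 // NameNode extends the proposed ReconfigurableBase and applies whitelisted
 // properties to the live namesystem instead of requiring a restart.
 public class NameNode extends ReconfigurableBase {
   @Override
   protected void reconfigurePropertyImpl(String property, String newVal)
       throws ReconfigurationException {
     if ("dfs.heartbeat.interval".equals(property)) {
       // apply the new interval to the running namesystem
     } else {
       throw new ReconfigurationException(property, newVal,
                                          getConf().get(property));
     }
   }
 }
 {code}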

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1521) Persist transaction ID on disk between NN restarts

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996789#comment-12996789
 ] 

Hadoop QA commented on HDFS-1521:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12471206/HDFS-1521.diff
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 12 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these core unit tests:
  org.apache.hadoop.hdfs.server.datanode.TestBlockReport
  org.apache.hadoop.hdfs.TestFileAppend2

-1 contrib tests.  The patch failed contrib unit tests.

+1 system test framework.  The patch passed system test framework compile.

Test results: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/181//testReport/
Findbugs warnings: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/181//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/181//console

This message is automatically generated.

 Persist transaction ID on disk between NN restarts
 --

 Key: HDFS-1521
 URL: https://issues.apache.org/jira/browse/HDFS-1521
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: HDFS-1521.diff, HDFS-1521.diff, hdfs-1521.3.txt, 
 hdfs-1521.4.txt, hdfs-1521.5.txt, hdfs-1521.txt, hdfs-1521.txt


 For HDFS-1073 and other future work, we'd like to have the concept of a 
 transaction ID that is persisted on disk with the image/edits. We already 
 have this concept in the NameNode but it resets to 0 on restart. We can also 
 use this txid to replace the _checkpointTime_ field, I believe.

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1348) Improve NameNode responsiveness while it is checking if datanode decommissions are complete

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996793#comment-12996793
 ] 

Hadoop QA commented on HDFS-1348:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12455885/decomissionImp2.patch
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/189//console

This message is automatically generated.

 Improve NameNode responsiveness while it is checking if datanode decommissions 
 are complete
 --

 Key: HDFS-1348
 URL: https://issues.apache.org/jira/browse/HDFS-1348
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Attachments: decomissionImp1.patch, decomissionImp2.patch, 
 decommission.patch, decommission1.patch


 NameNode normally is busy all the time. Its log is full of activities every 
 second. But once in a while, NameNode seems to pause for more than 10 
 seconds without doing anything, leaving a blank in its log even though no 
 garbage collection is happening. All other requests to NameNode are blocked 
 while this is happening.
 One culprit is the DecommissionManager. Its monitor holds the FSNamesystem 
 lock during the whole process of checking whether decommissioning DataNodes 
 are finished, during which it checks every block of up to a default of 5 
 datanodes.
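 One possible restructuring (a sketch, not necessarily the actual fix) is to 
 take and release the lock per node rather than for the whole sweep, so other 
 requests can interleave:
 {code}
 // Hypothetical chunked DecommissionManager monitor.
 for (DatanodeDescriptor node : nodesBeingDecommissioned) {
   synchronized (namesystem) {            // lock per node, not per sweep
     checkDecommissionComplete(node);     // hypothetical helper
   }
 }
 {code}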

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1391) Exiting safemode takes a long time when there are lots of blocks in the HDFS

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996791#comment-12996791
 ] 

Hadoop QA commented on HDFS-1391:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12469119/excessReplicas3.txt
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 21 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/187//console

This message is automatically generated.

 Exiting safemode takes a long time when there are lots of blocks in the HDFS
 

 Key: HDFS-1391
 URL: https://issues.apache.org/jira/browse/HDFS-1391
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: dhruba borthakur
Assignee: dhruba borthakur
 Attachments: excessReplicas.1_trunk.txt, excessReplicas2.txt, 
 excessReplicas3.txt


 When the namenode decides to exit safemode, it acquires the FSNamesystem 
 lock and then iterates over all blocks in the blocksmap to determine whether 
 any block has excess replicas. This call takes upwards of 5 minutes on a 
 cluster that has 100 million blocks, which considerably delays namenode 
 restart.

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1464) Fix reporting of 2NN address when dfs.secondary.http.address is default (wildcard)

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996790#comment-12996790
 ] 

Hadoop QA commented on HDFS-1464:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12457612/hdfs-1464.txt
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/185//console

This message is automatically generated.

 Fix reporting of 2NN address when dfs.secondary.http.address is default 
 (wildcard)
 --

 Key: HDFS-1464
 URL: https://issues.apache.org/jira/browse/HDFS-1464
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hdfs-1464.txt


 HDFS-1080 broke the way that the 2NN identifies its own hostname to the NN 
 during checkpoint upload. It used to use the local hostname, which, as 
 HDFS-1080 pointed out, was error prone if the host had multiple interfaces, etc. But 
 now, with the default setting of dfs.secondary.http.address, the 2NN reports 
 0.0.0.0, which won't work either.
 We should look for the wildcard bind address and use the local hostname in 
 that case, like we used to.
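 A sketch of the intended check (not the actual patch; configuredHttpAddress 
 is a stand-in for the parsed config value):
 {code}
 // If dfs.secondary.http.address resolves to the wildcard (0.0.0.0), report
 // the local hostname to the NN instead, as the pre-HDFS-1080 code did.
 InetSocketAddress addr = NetUtils.createSocketAddr(configuredHttpAddress);
 String reported = addr.getAddress().isAnyLocalAddress()
     ? InetAddress.getLocalHost().getHostName()
     : addr.getHostName();
 {code}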

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1418) DFSClient Uses Deprecated mapred.task.id Configuration Key Causing Unnecessary Warning Messages

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996792#comment-12996792
 ] 

Hadoop QA commented on HDFS-1418:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12455481/HDFS-1418.patch
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/186//console

This message is automatically generated.

 DFSClient Uses Deprecated mapred.task.id Configuration Key Causing 
 Unnecessary Warning Messages
 

 Key: HDFS-1418
 URL: https://issues.apache.org/jira/browse/HDFS-1418
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.22.0
Reporter: Ranjit Mathew
Priority: Minor
 Attachments: HDFS-1418.patch


 Every invocation of the hadoop fs command leads to an unnecessary warning 
 like the following:
 {noformat}
 $ $HADOOP_HOME/bin/hadoop fs -ls /
 10/09/24 15:10:23 WARN conf.Configuration: mapred.task.id is deprecated. 
 Instead, use mapreduce.task.attempt.id
 {noformat}
 This is easily fixed by updating 
 src/java/org/apache/hadoop/hdfs/DFSClient.java.
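 A sketch of the substitution (the surrounding DFSClient code is elided and 
 the clientName field is illustrative):
 {code}
 // Read the new key so Configuration no longer logs a deprecation warning.
 String taskId = conf.get("mapreduce.task.attempt.id");
 if (taskId != null) {
   clientName = "DFSClient_" + taskId;
 }
 {code}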

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1339) NameNodeMetrics should use MetricsTimeVaryingLong

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996794#comment-12996794
 ] 

Hadoop QA commented on HDFS-1339:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12451731/HDFS-1339.txt
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

-1 javac.  The patch appears to cause tar ant target to fail.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these core unit tests:


-1 contrib tests.  The patch failed contrib unit tests.

-1 system test framework.  The patch failed system test framework compile.

Test results: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/190//testReport/
Findbugs warnings: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/190//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/190//console

This message is automatically generated.

 NameNodeMetrics should use MetricsTimeVaryingLong 
 --

 Key: HDFS-1339
 URL: https://issues.apache.org/jira/browse/HDFS-1339
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Scott Chen
Assignee: Scott Chen
Priority: Minor
 Attachments: HDFS-1339.txt


 NameNodeMetrics uses MetricsTimeVaryingInt. We see that FileInfoOps and 
 GetBlockLocations overflow in our cluster.
 Using MetricsTimeVaryingLong will easily solve this problem.
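 A sketch of the change, assuming the old metrics API in 
 org.apache.hadoop.metrics.util (registry wiring elided):
 {code}
 // Before: 32-bit counter that overflows on busy clusters.
 //   MetricsTimeVaryingInt numFileInfoOps =
 //       new MetricsTimeVaryingInt("FileInfoOps", registry);

 // After: 64-bit counter with the same usage pattern.
 MetricsTimeVaryingLong numFileInfoOps =
     new MetricsTimeVaryingLong("FileInfoOps", registry);

 // Callers are unchanged:
 numFileInfoOps.inc();
 {code}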

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1362) Provide volume management functionality for DataNode

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996805#comment-12996805
 ] 

Hadoop QA commented on HDFS-1362:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12471190/HDFS-1362.7.patch
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 4 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these core unit tests:
  org.apache.hadoop.hdfs.TestFileConcurrentReader
  org.apache.hadoop.hdfs.TestLargeBlock

-1 contrib tests.  The patch failed contrib unit tests.

+1 system test framework.  The patch passed system test framework compile.

Test results: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/188//testReport/
Findbugs warnings: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/188//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/188//console

This message is automatically generated.

 Provide volume management functionality for DataNode
 

 Key: HDFS-1362
 URL: https://issues.apache.org/jira/browse/HDFS-1362
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: data-node
Affects Versions: 0.23.0
Reporter: Wang Xu
Assignee: Wang Xu
 Fix For: 0.23.0

 Attachments: DataNode Volume Refreshment in HDFS-1362.pdf, 
 HDFS-1362.4_w7001.txt, HDFS-1362.5.patch, HDFS-1362.6.patch, 
 HDFS-1362.7.patch, HDFS-1362.txt, Provide_volume_management_for_DN_v1.pdf


 The current management unit in Hadoop is a node, i.e. if a node fails, it 
 will be kicked out and all the data on the node will be replicated.
 As almost all SATA controllers support hotplug, we add a new command-line 
 interface to the datanode, so it can list, add or remove a volume online, 
 which means we can change a disk without node decommission. Moreover, if the 
 failed disk is still readable and the node has enough space, it can migrate 
 data on that disk to other disks in the same node.
 A more detailed design document will be attached.
 The original version in our lab is implemented against the 0.20 datanode 
 directly; would it be better to implement it in contrib? Or is there any 
 other suggestion?

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1273) Handle disk failure when writing new blocks on datanode

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996806#comment-12996806
 ] 

Hadoop QA commented on HDFS-1273:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12448294/HDFS_1273.patch
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 9 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/192//console

This message is automatically generated.

 Handle disk failure when writing new blocks on datanode
 ---

 Key: HDFS-1273
 URL: https://issues.apache.org/jira/browse/HDFS-1273
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.21.0
Reporter: Jeff Zhang
Assignee: Jeff Zhang
 Attachments: HDFS_1273.patch


 This issue relates to HDFS-457: the patch for HDFS-457 only handles disk 
 failure when reading. This jira is to handle disk failure when writing new 
 blocks on the data node.

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1152) appendFile does not recheck lease in second synchronized block

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996807#comment-12996807
 ] 

Hadoop QA commented on HDFS-1152:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/1249/hdfs-1152.txt
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 6 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/194//console

This message is automatically generated.

 appendFile does not recheck lease in second synchronized block
 --

 Key: HDFS-1152
 URL: https://issues.apache.org/jira/browse/HDFS-1152
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.20-append, 0.21.0, 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: appendFile-recheck-lease.txt, hdfs-1152.txt


 FSN.appendFile is made up of two synchronized sections. The second section 
 assumes that the file has not been modified during the unsynchronized part 
 in between. We should recheck the lease in the second block.
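 A sketch of the double-check pattern involved (simplified; the real method 
 bodies are elided):
 {code}
 synchronized (this) {
   checkLease(src, holder);          // first check
   // ... prepare the append ...
 }
 // lock released here: another client may modify the file in the meantime
 synchronized (this) {
   checkLease(src, holder);          // recheck before committing, since the
                                     // file may have changed while unlocked
   // ... finish the append setup ...
 }
 {code}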

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1076) HDFS CLI error tests fail with Avro RPC

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996815#comment-12996815
 ] 

Hadoop QA commented on HDFS-1076:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12441320/HDFS-1076.patch
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 26 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/196//console

This message is automatically generated.

 HDFS CLI error tests fail with Avro RPC
 ---

 Key: HDFS-1076
 URL: https://issues.apache.org/jira/browse/HDFS-1076
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Doug Cutting
Assignee: Doug Cutting
 Attachments: HDFS-1076.patch, HDFS-1076.patch


 Some HDFS command-line tests (TestHDFSCLI) fail when using AvroRpcEngine 
 because the error string does not match.  Calling getMessage() on a remote 
 exception thrown by WritableRpcEngine produces a string that contains the 
 exception name followed by its getMessage(), while exceptions thrown by 
 AvroRpcEngine contain just the getMessage() string of the original exception.

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1300) Decommissioning nodes does not increase replication priority

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996814#comment-12996814
 ] 

Hadoop QA commented on HDFS-1300:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12459401/HDFS-1300.3.patch
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 5 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these core unit tests:
  org.apache.hadoop.cli.TestHDFSCLI
  org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
  org.apache.hadoop.hdfs.TestFileConcurrentReader

-1 contrib tests.  The patch failed contrib unit tests.

+1 system test framework.  The patch passed system test framework compile.

Test results: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/191//testReport/
Findbugs warnings: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/191//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/191//console

This message is automatically generated.

 Decommissioning nodes does not increase replication priority
 

 Key: HDFS-1300
 URL: https://issues.apache.org/jira/browse/HDFS-1300
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.1, 0.20.2, 0.20.3, 0.20-append, 0.21.0, 0.22.0
Reporter: Dmytro Molkov
Assignee: Dmytro Molkov
 Attachments: HDFS-1300.2.patch, HDFS-1300.3.patch, HDFS-1300.patch


 Currently when you decommission a node each block is only inserted into 
 neededReplications if it is not there yet. This causes a problem of a block 
 sitting in a low priority queue when all replicas sit on the nodes being 
 decommissioned.
 The common use case for us is to decommission nodes proactively, excluding 
 them before they go bad, so it would be great to get the blocks at risk onto 
 the live datanodes as quickly as possible.

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-957) FSImage layout version should be written only once file is complete

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996820#comment-12996820
 ] 

Hadoop QA commented on HDFS-957:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12435248/hdfs-957.txt
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/198//console

This message is automatically generated.

 FSImage layout version should be written only once file is complete
 ---

 Key: HDFS-957
 URL: https://issues.apache.org/jira/browse/HDFS-957
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hdfs-957.txt


 Right now, the FSImage save code writes the LAYOUT_VERSION at the head of the 
 file, along with some other headers, and then dumps the directory into the 
 file. Instead, it should write a special IMAGE_IN_PROGRESS entry for the 
 layout version, dump all of the data, then seek back to the head of the file 
 to write the proper LAYOUT_VERSION. This would make it very easy to detect 
 the case where the FSImage save got interrupted.
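 A sketch of that save sequence (IMAGE_IN_PROGRESS is the placeholder 
 proposed here, not an existing constant):
 {code}
 RandomAccessFile img = new RandomAccessFile(imageFile, "rw");
 img.writeInt(IMAGE_IN_PROGRESS);   // placeholder layout version at offset 0
 // ... write the remaining headers and dump the namespace ...
 img.getFD().sync();                // ensure the body is on disk first
 img.seek(0);
 img.writeInt(LAYOUT_VERSION);      // commit: the image is now complete
 img.getFD().sync();
 img.close();
 {code}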

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1244) Misc improvements to TestFileAppend2

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996819#comment-12996819
 ] 

Hadoop QA commented on HDFS-1244:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12467233/hdfs-1244.txt
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these core unit tests:
  org.apache.hadoop.hdfs.TestDFSRollback
  org.apache.hadoop.hdfs.TestFileConcurrentReader

-1 contrib tests.  The patch failed contrib unit tests.

+1 system test framework.  The patch passed system test framework compile.

Test results: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/193//testReport/
Findbugs warnings: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/193//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/193//console

This message is automatically generated.

 Misc improvements to TestFileAppend2
 

 Key: HDFS-1244
 URL: https://issues.apache.org/jira/browse/HDFS-1244
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 0.20-append, 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hdfs-1244-0.20-append.txt, hdfs-1244.txt


 I've made a bunch of improvements to TestFileAppend2:
  - Now has a main() with various command line options to change the workload 
 (number of DNs, number of threads, etc)
  - Sleeps for less time in between operations to catch races around 
 close/reopen
  - Updates to Junit 4 style, adds timeouts
  - Improves error messages on failure

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-964) hdfs-default.xml shouldn't use hadoop.tmp.dir for dfs.data.dir (0.20 and lower) / dfs.datanode.dir (0.21 and up)

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996831#comment-12996831
 ] 

Hadoop QA commented on HDFS-964:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12442711/HDFS-964.txt
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these core unit tests:
  org.apache.hadoop.hdfs.TestFileConcurrentReader

-1 contrib tests.  The patch failed contrib unit tests.

+1 system test framework.  The patch passed system test framework compile.

Test results: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/197//testReport/
Findbugs warnings: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/197//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/197//console

This message is automatically generated.

 hdfs-default.xml shouldn't use hadoop.tmp.dir for dfs.data.dir (0.20 and 
 lower) / dfs.datanode.dir (0.21 and up)
 

 Key: HDFS-964
 URL: https://issues.apache.org/jira/browse/HDFS-964
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.2
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HDFS-964.txt


 This question/problem pops up all the time. Can we *please* eliminate 
 hadoop.tmp.dir's usage from the default value of dfs.data.dir? It is 
 confusing to new people and results in all sorts of weird accidents. If we 
 want the same value, fine, but a lot of things are implied by the variable 
 re-use.

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1105) Balancer improvement

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996835#comment-12996835
 ] 

Hadoop QA commented on HDFS-1105:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12456072/HDFS-1105.4.patch
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these core unit tests:
  org.apache.hadoop.cli.TestHDFSCLI
  org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
  org.apache.hadoop.hdfs.TestFileConcurrentReader

-1 contrib tests.  The patch failed contrib unit tests.

+1 system test framework.  The patch passed system test framework compile.

Test results: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/195//testReport/
Findbugs warnings: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/195//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/195//console

This message is automatically generated.

 Balancer improvement
 

 Key: HDFS-1105
 URL: https://issues.apache.org/jira/browse/HDFS-1105
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer
Reporter: Dmytro Molkov
Assignee: Dmytro Molkov
 Fix For: 0.23.0

 Attachments: HDFS-1105.2.patch, HDFS-1105.3.patch, HDFS-1105.4.patch, 
 HDFS-1105.patch


 We were seeing some weird issues with the balancer in our cluster:
 1) it can get stuck during an iteration, and only restarting it helps
 2) the iterations are highly inefficient: with a 20-minute iteration it moves 
 7K blocks a minute for the first 6 minutes and only hundreds of blocks in the 
 next 14 minutes
 3) it can hit the namenode and the network pretty hard
 A few improvements we came up with as a result, making the balancer more 
 deterministic in iteration running time, improving efficiency, and making the 
 load configurable:
 Make many of the constants configurable command-line parameters: iteration 
 length, and the number of blocks to move in parallel to a given node and in 
 the cluster overall.
 Terminate transfers that are still in progress after the iteration is over.
 Previously, the iteration time was the window in which the balancer scheduled 
 moves, and then it would wait indefinitely for the moves to finish. Each 
 scheduling task can run up to the iteration time or even longer, so if you 
 have too many of them and they are long, your actual iterations run longer 
 than 20 minutes. Now each scheduling task records the start time of the 
 iteration and schedules moves only if it has not run out of time, so tasks 
 that start after the iteration is over will not schedule any moves.
 The number of move threads and dispatch threads is configurable, so that 
 depending on the load of the cluster you can run it slower.
 I will attach a patch; please let me know what you think and what can be done 
 better.

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-941) Datanode xceiver protocol should allow reuse of a connection

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996836#comment-12996836
 ] 

Hadoop QA commented on HDFS-941:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12443322/HDFS-941-4.patch
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 15 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/199//console

This message is automatically generated.

 Datanode xceiver protocol should allow reuse of a connection
 

 Key: HDFS-941
 URL: https://issues.apache.org/jira/browse/HDFS-941
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node, hdfs client
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: bc Wong
 Attachments: HDFS-941-1.patch, HDFS-941-2.patch, HDFS-941-3.patch, 
 HDFS-941-3.patch, HDFS-941-4.patch


 Right now each connection into the datanode xceiver only processes one 
 operation.
 In the case that an operation leaves the stream in a well-defined state 
 (e.g. a client reads to the end of a block successfully), the same 
 connection could be reused for a second operation. This should improve 
 random read performance significantly (a sketch of the idea follows).
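 A minimal sketch of the reuse pattern (hypothetical names, not the actual 
 xceiver protocol): keep using the same connection while each operation ends 
 in a clean stream state, and drop it as soon as one does not.
 {noformat}
 import java.io.Closeable;
 import java.io.IOException;

 // Sketch only: a connection that may be reused for further operations as
 // long as each one leaves the stream in a well-defined state.
 interface BlockConnection extends Closeable {
     // Read one block; returns true if the stream ended in a clean state.
     boolean readBlock(long blockId) throws IOException;
 }

 class ReusingReader {
     private final BlockConnection conn;

     ReusingReader(BlockConnection conn) {
         this.conn = conn;
     }

     void readBlocks(long[] blockIds) throws IOException {
         for (long id : blockIds) {
             if (conn.readBlock(id)) {
                 continue; // clean end of op: reuse the same connection
             }
             // Undefined stream state: the connection cannot be trusted,
             // so drop it and make the caller reconnect.
             conn.close();
             throw new IOException("stream in undefined state; reconnect");
         }
     }
 }
 {noformat}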

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-923) libhdfs hdfs_read example uses hdfsRead wrongly

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996837#comment-12996837
 ] 

Hadoop QA commented on HDFS-923:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12440608/hdfs-923.patch
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/201//console

This message is automatically generated.

 libhdfs hdfs_read example uses hdfsRead wrongly
 ---

 Key: HDFS-923
 URL: https://issues.apache.org/jira/browse/HDFS-923
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/libhdfs
Reporter: Ruyue Ma
Assignee: Ruyue Ma
 Attachments: hdfs-923.patch


 In the examples of libhdfs, hdfs_read.c uses hdfsRead wrongly:
 {noformat}
 // read from the file
 tSize curSize = bufferSize;
 for (; curSize == bufferSize;) {
 curSize = hdfsRead(fs, readFile, (void*)buffer, curSize);
 }
 {noformat}
 The loop condition curSize == bufferSize is wrong: like POSIX read(), 
 hdfsRead may return fewer bytes than requested even before end-of-file, so 
 the loop can exit early and silently stop reading. The loop should instead 
 continue until hdfsRead returns 0 (end of file) or -1 (error).
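 The same pitfall in plain Java terms: read() may also return fewer bytes 
 than requested without meaning end-of-file. A minimal sketch of a correct 
 read loop, using java.io.InputStream purely for illustration:
 {noformat}
 import java.io.IOException;
 import java.io.InputStream;

 class ReadLoop {
     // Drain the stream, returning the total number of bytes read.
     static long drain(InputStream in, byte[] buffer) throws IOException {
         long total = 0;
         int n;
         // Correct: a short read just means "try again"; only -1 means EOF.
         while ((n = in.read(buffer, 0, buffer.length)) != -1) {
             total += n;
         }
         return total;
     }
 }
 {noformat}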

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-1312) Re-balance disks within a Datanode

2011-02-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996842#comment-12996842
 ] 

Steve Loughran commented on HDFS-1312:
--

I think having a remote web view is useful in two ways:
- it lets people see the basics of what is going on within the entire cluster 
(yes, that will need some aggregation eventually)
- it lets you write tests that hit the status pages and so verify that the 
rebalancing worked. 

 Re-balance disks within a Datanode
 --

 Key: HDFS-1312
 URL: https://issues.apache.org/jira/browse/HDFS-1312
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: data-node
Reporter: Travis Crawford

 Filing this issue in response to ``full disk woes`` on hdfs-user.
 Datanodes fill their storage directories unevenly, leading to situations 
 where certain disks are full while others are significantly less used. Users 
 at many different sites have experienced this issue, and HDFS administrators 
 are taking steps like:
 - Manually rebalancing blocks in storage directories
 - Decommissioning nodes & later re-adding them
 There's a tradeoff between making use of all available spindles and filling 
 disks at roughly the same rate. Possible solutions include:
 - Weighting less-used disks more heavily when placing new blocks on the 
 datanode. In write-heavy environments this still makes use of all spindles 
 while equalizing disk use over time (see the sketch after this list).
 - Rebalancing blocks locally. This would help equalize disk use as disks are 
 added/replaced in older cluster nodes.
 Datanodes should actively manage their local disk so operator intervention is 
 not needed.
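 A minimal sketch of the weighting idea (illustrative only, with hypothetical 
 names; not an actual datanode placement policy): pick a storage directory 
 with probability proportional to its free space.
 {noformat}
 import java.util.List;
 import java.util.Random;

 // Sketch only: choose a volume weighted by free space, so emptier disks
 // receive proportionally more new blocks. Assumes a non-empty list.
 class FreeSpaceWeightedChooser {
     private final Random random = new Random();

     // volumesFree.get(i) = free bytes on volume i; returns the chosen index.
     int choose(List<Long> volumesFree) {
         long totalFree = 0;
         for (long f : volumesFree) {
             totalFree += f;
         }
         long pick = (long) (random.nextDouble() * totalFree);
         long cumulative = 0;
         for (int i = 0; i < volumesFree.size(); i++) {
             cumulative += volumesFree.get(i);
             if (pick < cumulative) {
                 return i;
             }
         }
         return volumesFree.size() - 1; // rounding edge case fallback
     }
 }
 {noformat}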

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-925) Make it harder to accidentally close a shared DFSClient

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996845#comment-12996845
 ] 

Hadoop QA commented on HDFS-925:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12470272/HDFS-925.patch
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 5 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these core unit tests:
  org.apache.hadoop.hdfs.TestFileConcurrentReader

-1 contrib tests.  The patch failed contrib unit tests.

+1 system test framework.  The patch passed system test framework compile.

Test results: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/200//testReport/
Findbugs warnings: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/200//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/200//console

This message is automatically generated.

 Make it harder to accidentally close a shared DFSClient
 ---

 Key: HDFS-925
 URL: https://issues.apache.org/jira/browse/HDFS-925
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.21.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor
 Attachments: HADOOP-5933.patch, HADOOP-5933.patch, HDFS-925.patch, 
 HDFS-925.patch, HDFS-925.patch, HDFS-925.patch


 Every so often I get stack traces telling me that DFSClient is closed, 
 usually in {{org.apache.hadoop.hdfs.DFSClient.checkOpen()}}. The root cause 
 is usually that one thread has closed a shared fsclient while another 
 thread still has a reference to it. If the other thread then asks for a new 
 client it will get one (and the cache repopulated), but if it already has 
 one, then I get to see a stack trace. 
 It's effectively a race condition between clients in different threads. 
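 One possible mitigation, sketched with hypothetical names (this is not the 
 attached patch): reference-count the shared client so that close() only 
 really closes the underlying client when the last user releases it.
 {noformat}
 import java.io.Closeable;
 import java.io.IOException;
 import java.util.concurrent.atomic.AtomicInteger;

 // Sketch only: a wrapper that makes a cached, shared client safe to
 // "close" from several threads.
 class SharedClient implements Closeable {
     private final Closeable delegate;               // the real client
     private final AtomicInteger refCount = new AtomicInteger(1);

     SharedClient(Closeable delegate) {
         this.delegate = delegate;
     }

     // Each thread taking the client from the cache calls retain() first.
     SharedClient retain() {
         if (refCount.getAndIncrement() <= 0) {
             refCount.decrementAndGet();
             throw new IllegalStateException("client already closed");
         }
         return this;
     }

     // Only the last release actually closes the underlying client.
     @Override
     public void close() throws IOException {
         if (refCount.decrementAndGet() == 0) {
             delegate.close();
         }
     }
 }
 {noformat}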

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-827) Additional unit tests for FSDataset

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996846#comment-12996846
 ] 

Hadoop QA commented on HDFS-827:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12467234/hdfs-827.txt
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 11 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these core unit tests:
  org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
  org.apache.hadoop.hdfs.TestFileConcurrentReader

-1 contrib tests.  The patch failed contrib unit tests.

+1 system test framework.  The patch passed system test framework compile.

Test results: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/202//testReport/
Findbugs warnings: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/202//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/202//console

This message is automatically generated.

 Additional unit tests for FSDataset
 ---

 Key: HDFS-827
 URL: https://issues.apache.org/jira/browse/HDFS-827
 Project: Hadoop HDFS
  Issue Type: Test
  Components: data-node, test
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hdfs-827.txt


 FSDataset doesn't currently have a unit-test that tests it in isolation of 
 the DN or a cluster. A test specifically for this class will be helpful for 
 developing HDFS-788

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-411) parameter dfs.replication is not reflected when put file into hadoop with fuse-dfs

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996849#comment-12996849
 ] 

Hadoop QA commented on HDFS-411:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12397226/HADOOP-4877.txt.trunk
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/206//console

This message is automatically generated.

 parameter dfs.replication is not reflected when put file into hadoop with 
 fuse-dfs
 --

 Key: HDFS-411
 URL: https://issues.apache.org/jira/browse/HDFS-411
 Project: Hadoop HDFS
  Issue Type: Bug
 Environment: os:centos5.2
 cpu:amd64
 hadoop0.19.0
Reporter: zhu weimin
 Attachments: HADOOP-4877.txt.0.19, HADOOP-4877.txt.trunk


 $HADOOP_CONF_DIR exists in the $CLASSPATH, 
 and dfs.replication is set to 1 with the following in hadoop-site.xml:
 <property>
   <name>dfs.replication</name>
   <value>1</value>
 </property>
 Yet the file's replication is 3 when it is put into HDFS.
 I think the reason is that the replication factor is hardcoded in 
 src\contrib\fuse-dfs\src\fuse_dfs.c:
 line 1337
 if ((fh->hdfsFH = (hdfsFile)hdfsOpenFile(fh->fs, path, flags, 0, 3, 0)) == 
 NULL) {
 line 1591
 if ((file = (hdfsFile)hdfsOpenFile(userFS, path, flags, 0, 3, 0)) == NULL) {
 The fifth parameter of hdfsOpenFile is the replication factor; hardcoding it 
 to 3 overrides the configured value. It should be 0, which tells libhdfs to 
 use the configured default, as follows:
 line 1337
 if ((fh->hdfsFH = (hdfsFile)hdfsOpenFile(fh->fs, path, flags, 0, 0, 0)) == 
 NULL) {
 line 1591
 if ((file = (hdfsFile)hdfsOpenFile(userFS, path, flags, 0, 0, 0)) == NULL) {

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-453) XML-based metrics as JSP servlet for NameNode

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996848#comment-12996848
 ] 

Hadoop QA commented on HDFS-453:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12446660/HDFS-453.7.patch
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/205//console

This message is automatically generated.

 XML-based metrics as JSP servlet for NameNode
 -

 Key: HDFS-453
 URL: https://issues.apache.org/jira/browse/HDFS-453
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: name-node
Affects Versions: 0.21.0, 0.22.0
Reporter: Aaron Kimball
Assignee: Aaron Kimball
 Attachments: HDFS-453.2.patch, HDFS-453.3.patch, HDFS-453.4.patch, 
 HDFS-453.5.patch, HDFS-453.6.patch, HDFS-453.7.patch, HDFS-453.patch, 
 dfshealth.xml.jspx, example-dfshealth.xml


 In HADOOP-4559, a general REST API for reporting metrics was proposed but 
 work seems to have stalled. In the interim, we have a simple XML translation 
 of the existing NameNode status page which provides the same metrics as the 
 human-readable page. This is a relatively lightweight addition to provide 
 some machine-understandable metrics reporting.
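 A sketch of the idea (not the attached patch; names are hypothetical): serve 
 the same metrics the human-readable page shows, but as XML, from a small 
 servlet.
 {noformat}
 import java.io.IOException;
 import java.io.PrintWriter;
 import javax.servlet.http.HttpServlet;
 import javax.servlet.http.HttpServletRequest;
 import javax.servlet.http.HttpServletResponse;

 // Sketch only: the real servlet would query FSNamesystem for capacity,
 // used/remaining space, live and dead node counts, and so on.
 public class MetricsXmlServlet extends HttpServlet {
     @Override
     protected void doGet(HttpServletRequest req, HttpServletResponse resp)
             throws IOException {
         resp.setContentType("text/xml; charset=UTF-8");
         PrintWriter out = resp.getWriter();
         out.println("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
         out.println("<namenode-status>");
         out.printf("  <capacity-bytes>%d</capacity-bytes>%n", 0L); // placeholder
         out.printf("  <live-datanodes>%d</live-datanodes>%n", 0);  // placeholder
         out.println("</namenode-status>");
         out.flush();
     }
 }
 {noformat}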

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HDFS-789) Add conf to classpath in start_thrift_server.sh

2011-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12996857#comment-12996857
 ] 

Hadoop QA commented on HDFS-789:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12426385/HDFS_798.patch
  against trunk revision 1072023.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these core unit tests:
  org.apache.hadoop.cli.TestHDFSCLI
  org.apache.hadoop.hdfs.server.balancer.TestBalancer
  org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery

-1 contrib tests.  The patch failed contrib unit tests.

+1 system test framework.  The patch passed system test framework compile.

Test results: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/203//testReport/
Findbugs warnings: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/203//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: 
https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/203//console

This message is automatically generated.

 Add conf to classpath in start_thrift_server.sh
 ---

 Key: HDFS-789
 URL: https://issues.apache.org/jira/browse/HDFS-789
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: contrib/thriftfs
Affects Versions: 0.22.0
Reporter: Jeff Zhang
 Attachments: HDFS_798.patch


 In the current script start_thrift_server.sh, the conf folder is not on 
 the classpath, so when a user starts the thrift server it silently uses the 
 local file system instead of HDFS.
 I am filing this issue to put the HDFS configuration files on the classpath 
 (see the sketch below for why the missing conf dir falls back to the local 
 file system).
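 A quick way to see the effect (a sketch, assuming the Hadoop APIs of this 
 era): run the check below once without conf/ on the classpath and once with 
 it; the first prints a local FileSystem, the second the distributed one.
 {noformat}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;

 // Sketch of why the missing conf dir matters: without hadoop-site.xml /
 // hdfs-site.xml on the classpath, fs.default.name keeps its built-in
 // default (the local file system), so FileSystem.get() returns a local FS.
 public class DefaultFsCheck {
     public static void main(String[] args) throws Exception {
         Configuration conf = new Configuration(); // reads *-site.xml from classpath
         System.out.println("fs.default.name = " + conf.get("fs.default.name"));
         FileSystem fs = FileSystem.get(conf);
         System.out.println("FileSystem impl = " + fs.getClass().getName());
     }
 }
 {noformat}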

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira