[jira] Commented: (HDFS-173) Recursively deleting a directory with millions of files makes NameNode unresponsive for other commands until the deletion completes

2009-09-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12750313#action_12750313
 ] 

Hadoop QA commented on HDFS-173:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12418318/HDFS-173.3.patch
  against trunk revision 810337.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/5/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/5/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/5/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/5/console

This message is automatically generated.

 Recursively deleting a directory with millions of files makes NameNode 
 unresponsive for other commands until the deletion completes
 ---

 Key: HDFS-173
 URL: https://issues.apache.org/jira/browse/HDFS-173
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 0.21.0

 Attachments: HDFS-173.1.patch, HDFS-173.2.patch, HDFS-173.3.patch, 
 HDFS-173.patch


 Delete a directory with millions of files. This could take several minutes 
 (observed: 12 mins for 9 million files). While the operation is in progress, the 
 FSNamesystem lock is held and requests from clients are not handled until the 
 deletion completes.
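One way to avoid the long lock hold is to delete in bounded batches, releasing the namespace lock between batches. The following is a self-contained sketch of that idea only, not the actual patch: the batch size, the `ReentrantLock`, and the flat file list are illustrative stand-ins for the real FSNamesystem structures.

```java
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

public class IncrementalDelete {
    static final int DELETION_BATCH_SIZE = 1000;   // illustrative batch size

    /** Deletes in batches, holding the namespace lock only per batch. */
    static int deleteInBatches(List<String> files, ReentrantLock nsLock) {
        int deleted = 0;
        int i = 0;
        while (i < files.size()) {
            nsLock.lock();
            try {
                // Stand-in for removing one batch of inodes/blocks.
                int end = Math.min(i + DELETION_BATCH_SIZE, files.size());
                deleted += end - i;
                i = end;
            } finally {
                nsLock.unlock();   // other namesystem requests can run here
            }
        }
        return deleted;
    }
}
```

The trade-off is that the deletion is no longer atomic with respect to concurrent readers, which the real fix has to account for.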

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-580) Name node will exit safe mode w/0 blocks even if data nodes are broken

2009-09-02 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12750354#action_12750354
 ] 

Steve Loughran commented on HDFS-580:
-

Maybe the NN should do a health check by trying to write and then read a file 
before declaring itself live.
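A minimal sketch of such a write-then-read probe, shown against the local filesystem via java.nio; an HDFS version would go through FileSystem.create()/open() instead, and the probe file name is made up.

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class WriteReadProbe {
    /** Writes a probe file, reads it back, and reports healthy only if the
     *  round trip succeeds; any failure would keep the node in safe mode. */
    static boolean healthy(Path dir) {
        Path probe = dir.resolve(".health-probe");   // hypothetical probe name
        byte[] payload = "probe".getBytes(StandardCharsets.UTF_8);
        try {
            Files.write(probe, payload);
            byte[] readBack = Files.readAllBytes(probe);
            Files.deleteIfExists(probe);
            return Arrays.equals(payload, readBack);
        } catch (Exception e) {
            return false;
        }
    }
}
```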

 Name node will exit safe mode w/0 blocks even if data nodes are broken
 --

 Key: HDFS-580
 URL: https://issues.apache.org/jira/browse/HDFS-580
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Allen Wittenauer

 If one brings up a freshly formatted name node against older data nodes with 
 an incompatible storage id (such that the datanodes fail with "Directory 
 /mnt/u001/dfs-data is in an inconsistent state: is incompatible with 
 others."), the name node will still come out of safe mode.  Writes will 
 partially succeed--entries are created, but all are zero length.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-456) Problems with dfs.name.edits.dirs as URI

2009-09-02 Thread Luca Telloli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luca Telloli updated HDFS-456:
--

Status: Open  (was: Patch Available)

 Problems with dfs.name.edits.dirs as URI
 

 Key: HDFS-456
 URL: https://issues.apache.org/jira/browse/HDFS-456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.21.0
Reporter: Konstantin Shvachko
Assignee: Luca Telloli
 Fix For: 0.21.0

 Attachments: failing-tests.zip, HDFS-456.patch, HDFS-456.patch, 
 HDFS-456.patch, HDFS-456.patch, HDFS-456.patch, HDFS-456.patch, 
 HDFS-456.patch, HDFS-456.patch


 There are several problems with recent commit of HDFS-396.
 # It does not work with default configuration file:///. Throws 
 {{IllegalArgumentException}}.
 # *ALL* hdfs tests fail on Windows because C:\mypath is treated as an 
 illegal URI. Backward compatibility is not provided.
 # {{IllegalArgumentException}} should not be thrown within hdfs code because 
 it is a {{RuntimeException}}. We should throw {{IOException}} instead. This 
 was recently discussed in another jira.
 # Why do we commit patches without running unit tests and test-patch? This is 
 the minimum requirement for a patch to qualify as committable, right?
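For illustration only (this is not the HDFS-456 patch), one way to accept both real URIs and legacy plain paths is a fallback parser, taking care that a Windows drive letter such as C: is not misread as a URI scheme:

```java
import java.io.File;
import java.net.URI;
import java.net.URISyntaxException;

public class StorageDirUri {
    /** Parses a dfs.name.edits.dirs entry as a URI, falling back to
     *  treating it as a plain local path for backward compatibility. */
    static URI toStorageUri(String value) {
        try {
            URI u = new URI(value);
            // A one-letter "scheme" is almost certainly a Windows drive letter.
            if (u.getScheme() != null && u.getScheme().length() > 1) return u;
        } catch (URISyntaxException e) {
            // Not a URI at all (e.g. contains backslashes); fall through.
        }
        return new File(value).toURI();   // legacy plain-path configuration
    }
}
```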

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-587) Test programs support only default queue.

2009-09-02 Thread Sreekanth Ramakrishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12750379#action_12750379
 ] 

Sreekanth Ramakrishnan commented on HDFS-587:
-

Currently on MAPREDUCE-945 we have made the corresponding classes, which 
{{ProgramDriver}} uses, extend {{Configured}} and implement {{Tool}}. Also, 
the configuration used to launch a job is constructed not by {{new 
Configuration()}} but through the {{getConf()}} method implemented in 
{{Configured}}, and we use the {{ToolRunner.run()}} method to launch the 
program, taking care of generic options.
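The pattern described above looks roughly like the following self-contained sketch. Tool, Configured, and ToolRunner here are tiny stand-ins for the real org.apache.hadoop.conf/util classes, and the queue key is only an example; they are included so the shape of the pattern is runnable on its own.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ToolPattern {
    interface Tool { int run(String[] args); }

    static class Configured {
        private Map<String, String> conf = new HashMap<>();
        Map<String, String> getConf() { return conf; }   // jobs read config here,
                                                         // not via new Configuration()
    }

    static class ToolRunner {
        /** Peels off -D key=value generic options into the config, then runs. */
        static int run(Configured c, Tool t, String[] args) {
            List<String> rest = new ArrayList<>();
            for (int i = 0; i < args.length; i++) {
                if (args[i].equals("-D") && i + 1 < args.length) {
                    String[] kv = args[++i].split("=", 2);
                    c.getConf().put(kv[0], kv[1]);
                } else {
                    rest.add(args[i]);
                }
            }
            return t.run(rest.toArray(new String[0]));
        }
    }

    /** Example test program that honours a queue passed as a generic option. */
    static class Bench extends Configured implements Tool {
        public int run(String[] args) {
            String q = getConf().getOrDefault("mapred.job.queue.name", "default");
            return q.equals("default") ? 1 : 0;   // 0 = non-default queue was used
        }
    }
}
```

With this shape, `bin/hadoop jar ... bench -D mapred.job.queue.name=research` would reach the program through the configuration rather than being silently dropped.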

 Test programs support only default queue.
 -

 Key: HDFS-587
 URL: https://issues.apache.org/jira/browse/HDFS-587
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Sreekanth Ramakrishnan

 The following test programs always run on the default queue even when other 
 queues are passed as a job parameter.
 DFSCIOTest
 DistributedFSCheck
 TestDFSIO
 Filebench
 Loadgen
 Nnbench

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-456) Problems with dfs.name.edits.dirs as URI

2009-09-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12750427#action_12750427
 ] 

Hadoop QA commented on HDFS-456:


+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12417997/HDFS-456.patch
  against trunk revision 810337.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 6 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/6/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/6/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/6/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/6/console

This message is automatically generated.

 Problems with dfs.name.edits.dirs as URI
 

 Key: HDFS-456
 URL: https://issues.apache.org/jira/browse/HDFS-456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.21.0
Reporter: Konstantin Shvachko
Assignee: Luca Telloli
 Fix For: 0.21.0

 Attachments: failing-tests.zip, HDFS-456.patch, HDFS-456.patch, 
 HDFS-456.patch, HDFS-456.patch, HDFS-456.patch, HDFS-456.patch, 
 HDFS-456.patch, HDFS-456.patch


 There are several problems with recent commit of HDFS-396.
 # It does not work with default configuration file:///. Throws 
 {{IllegalArgumentException}}.
 # *ALL* hdfs tests fail on Windows because C:\mypath is treated as an 
 illegal URI. Backward compatibility is not provided.
 # {{IllegalArgumentException}} should not be thrown within hdfs code because 
 it is a {{RuntimeException}}. We should throw {{IOException}} instead. This 
 was recently discussed in another jira.
 # Why do we commit patches without running unit tests and test-patch? This is 
 the minimum requirement for a patch to qualify as committable, right?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-581) Introduce an iterator over blocks in the block report array.

2009-09-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12750460#action_12750460
 ] 

Hudson commented on HDFS-581:
-

Integrated in Hadoop-Hdfs-trunk #70 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/70/])
. Introduce an iterator over blocks in the block report array. Contributed 
by Konstantin Shvachko.


 Introduce an iterator over blocks in the block report array.
 

 Key: HDFS-581
 URL: https://issues.apache.org/jira/browse/HDFS-581
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.20.1
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: 0.21.0

 Attachments: BlockReportIterator.patch, BlockReportIterator.patch


 A block iterator will hide the internal implementation of the block report as 
 an array of three longs per block.
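A self-contained sketch of what such an iterator might look like; the field order (id, length, generation stamp) and the nested Block class are assumptions for illustration, not the actual patch.

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

public class BlockReportIterator implements Iterator<BlockReportIterator.Block> {
    static final int LONGS_PER_BLOCK = 3;   // assumed: id, length, generation stamp

    static class Block {
        final long id, len, genStamp;
        Block(long id, long len, long genStamp) {
            this.id = id; this.len = len; this.genStamp = genStamp;
        }
    }

    private final long[] report;   // the raw block report: 3 longs per block
    private int pos = 0;

    BlockReportIterator(long[] report) {
        if (report.length % LONGS_PER_BLOCK != 0)
            throw new IllegalArgumentException("truncated block report");
        this.report = report;
    }

    @Override public boolean hasNext() { return pos < report.length; }

    @Override public Block next() {
        if (!hasNext()) throw new NoSuchElementException();
        Block b = new Block(report[pos], report[pos + 1], report[pos + 2]);
        pos += LONGS_PER_BLOCK;
        return b;
    }
}
```

Callers then iterate over Block objects and never see the triplet layout, so the wire format can change without touching them.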

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-546) DatanodeDescriptor block iterator should be BlockInfo based rather than Block.

2009-09-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12750459#action_12750459
 ] 

Hudson commented on HDFS-546:
-

Integrated in Hadoop-Hdfs-trunk #70 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/70/])
. Fix a typo in CHANGES.txt


 DatanodeDescriptor block iterator should be BlockInfo based rather than Block.
 --

 Key: HDFS-546
 URL: https://issues.apache.org/jira/browse/HDFS-546
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.21.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: 0.21.0

 Attachments: BlockIterator.patch


 {{DatanodeDescriptor.BlockIterator}} currently implements 
 {{Iterator<Block>}}. I need to change it to implement {{Iterator<BlockInfo>}} 
 instead. Otherwise, I cannot filter out blocks under construction in order to 
 exclude them from balancing, as Hairong suggested in HDFS-517.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-173) Recursively deleting a directory with millions of files makes NameNode unresponsive for other commands until the deletion completes

2009-09-02 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-173:
-

Attachment: HDFS-173.4.patch

In the test, dfs.block.size was not a multiple of io.bytes.per.checksum. 
This resulted in the cluster not starting up. The new patch sets 
io.bytes.per.checksum appropriately.

 Recursively deleting a directory with millions of files makes NameNode 
 unresponsive for other commands until the deletion completes
 ---

 Key: HDFS-173
 URL: https://issues.apache.org/jira/browse/HDFS-173
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 0.21.0

 Attachments: HDFS-173.1.patch, HDFS-173.2.patch, HDFS-173.3.patch, 
 HDFS-173.4.patch, HDFS-173.patch


 Delete a directory with millions of files. This could take several minutes 
 (observed: 12 mins for 9 million files). While the operation is in progress, the 
 FSNamesystem lock is held and requests from clients are not handled until the 
 deletion completes.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-173) Recursively deleting a directory with millions of files makes NameNode unresponsive for other commands until the deletion completes

2009-09-02 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-173:
-

Status: Patch Available  (was: Open)

 Recursively deleting a directory with millions of files makes NameNode 
 unresponsive for other commands until the deletion completes
 ---

 Key: HDFS-173
 URL: https://issues.apache.org/jira/browse/HDFS-173
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 0.21.0

 Attachments: HDFS-173.1.patch, HDFS-173.2.patch, HDFS-173.3.patch, 
 HDFS-173.4.patch, HDFS-173.patch


 Delete a directory with millions of files. This could take several minutes 
 (observed: 12 mins for 9 million files). While the operation is in progress, the 
 FSNamesystem lock is held and requests from clients are not handled until the 
 deletion completes.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-531) Renaming of configuration keys

2009-09-02 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-531:
--

Attachment: changed_config_keys.2.txt

An updated list of changed keys, incorporating the suggestions, is attached.

 Renaming of configuration keys
 --

 Key: HDFS-531
 URL: https://issues.apache.org/jira/browse/HDFS-531
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Fix For: 0.21.0

 Attachments: changed_config_keys.2.txt, changed_config_keys.txt


 Keys in configuration files should be standardized so that key names reflect 
 the components they are used in.
 For example:
dfs.backup.address should be renamed to dfs.namenode.backup.address 
dfs.data.dir   should be renamed to dfs.datanode.data.dir
 This change will impact both hdfs and common sources.
 The following convention is proposed:
 1. Any key related to hdfs should begin with 'dfs'.
 2. Any key related to the namenode, datanode or client should begin with 
 dfs.namenode, dfs.datanode or dfs.client respectively.
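A hypothetical sketch of how the renamed keys could be resolved with backward compatibility: a lookup that prefers the new component-prefixed name but still honours the old one. The helper is illustrative; only the two key pairs are from the examples above.

```java
import java.util.HashMap;
import java.util.Map;

public class KeyRenames {
    // old key -> new component-prefixed key, from the examples above
    static final Map<String, String> RENAMES = new HashMap<>();
    static {
        RENAMES.put("dfs.backup.address", "dfs.namenode.backup.address");
        RENAMES.put("dfs.data.dir", "dfs.datanode.data.dir");
    }

    /** Resolves a value, preferring the new key but honouring the old one. */
    static String get(Map<String, String> conf, String oldKey) {
        String newKey = RENAMES.getOrDefault(oldKey, oldKey);
        String v = conf.get(newKey);
        return v != null ? v : conf.get(oldKey);
    }
}
```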

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-551) Create new functional test for a block report.

2009-09-02 Thread Hairong Kuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12750554#action_12750554
 ] 

Hairong Kuang commented on HDFS-551:


For the messWithBlocksLen() test: if a datanode reports a replica with a 
length different from the complete block's size in the NN, the NN should mark 
this replica as corrupt. The NN should not update the block's length.

Please also remove all the commented statements in the patch.

 Create new functional test for a block report.
 --

 Key: HDFS-551
 URL: https://issues.apache.org/jira/browse/HDFS-551
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Attachments: BlockReportTestPlan.html, BlockReportTestPlan.html, 
 HDFS-551.patch, HDFS-551.patch, HDFS-551.patch, HDFS-551.patch, 
 HDFS-551.patch, HDFS-551.patch


 It turned out that there's no test for block report functionality. One 
 would be extremely valuable.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-551) Create new functional test for a block report.

2009-09-02 Thread Hairong Kuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12750555#action_12750555
 ] 

Hairong Kuang commented on HDFS-551:


DFSTestUtils has a set of utility functions to support file creation etc. You 
may take a look so you do not need to invent your own.

 Create new functional test for a block report.
 --

 Key: HDFS-551
 URL: https://issues.apache.org/jira/browse/HDFS-551
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Attachments: BlockReportTestPlan.html, BlockReportTestPlan.html, 
 HDFS-551.patch, HDFS-551.patch, HDFS-551.patch, HDFS-551.patch, 
 HDFS-551.patch, HDFS-551.patch


 It turned out that there's no test for block report functionality. One 
 would be extremely valuable.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-456) Problems with dfs.name.edits.dirs as URI

2009-09-02 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12750562#action_12750562
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-456:
-

 No more audit warnings I swear!
Well done!  You got the first successful build on the new Hudson machine.

Really hope that this can be committed soon.  Otherwise, we cannot run hdfs on 
Windows.


 Problems with dfs.name.edits.dirs as URI
 

 Key: HDFS-456
 URL: https://issues.apache.org/jira/browse/HDFS-456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.21.0
Reporter: Konstantin Shvachko
Assignee: Luca Telloli
 Fix For: 0.21.0

 Attachments: failing-tests.zip, HDFS-456.patch, HDFS-456.patch, 
 HDFS-456.patch, HDFS-456.patch, HDFS-456.patch, HDFS-456.patch, 
 HDFS-456.patch, HDFS-456.patch


 There are several problems with recent commit of HDFS-396.
 # It does not work with default configuration file:///. Throws 
 {{IllegalArgumentException}}.
 # *ALL* hdfs tests fail on Windows because C:\mypath is treated as an 
 illegal URI. Backward compatibility is not provided.
 # {{IllegalArgumentException}} should not be thrown within hdfs code because 
 it is a {{RuntimeException}}. We should throw {{IOException}} instead. This 
 was recently discussed in another jira.
 # Why do we commit patches without running unit tests and test-patch? This is 
 the minimum requirement for a patch to qualify as committable, right?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HDFS-549) Allow non fault-inject specific tests execution with an explicit -Dtestcase=... setting

2009-09-02 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-549.
-

Resolution: Fixed

Tested it with the following:
{noformat}
ant run-with-fault-inject-testcaseonly
ant run-with-fault-inject-testcaseonly -Dtestcase=TestSimulatedFSDataset
ant test -Dtestcase=TestSimulatedFSDataset
{noformat}
It worked fine.

I have committed this.  Thanks, Cos!

 Allow non fault-inject specific tests execution with an explicit 
 -Dtestcase=... setting
 ---

 Key: HDFS-549
 URL: https://issues.apache.org/jira/browse/HDFS-549
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: build
Affects Versions: 0.21.0
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Fix For: 0.21.0

 Attachments: HDFS-549.patch, HDFS-549.patch, HDFS-549.patch


 It is currently impossible to run non fault-injection tests with 
 fault-injected build. E.g.
 {noformat}
   ant run-test-hdfs-fault-inject -Dtestcase=TestFileCreation
 {noformat}
 because {{macro-test-runner}} looks for a specified test case only under the 
 {{src/test/aop}} folder when fault-injection tests are run. This renders the 
 use of non fault-injection tests impossible in the FI environment.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-565) Introduce block committing logic during new block allocation and file close.

2009-09-02 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12750591#action_12750591
 ] 

Konstantin Shvachko commented on HDFS-565:
--

N Also, did you run ant test-patch?

Sure I did.

H When completing a file, the last block should be committed before the progress 
check, but the penultimate block should not be completed until the progress check 
passes.

I agree. Actually, the same should apply in both cases for completeBlock() and 
for getAdditionalBlock().

Thanks for the patch, Konstantin.
I'll create a new jira to fix the issues.

 Introduce block committing logic during new block allocation and file close.
 

 Key: HDFS-565
 URL: https://issues.apache.org/jira/browse/HDFS-565
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs client, name-node
Affects Versions: Append Branch
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: Append Branch

 Attachments: CommitBlock.patch, CommitBlock.patch, HDFS-565.aj.patch


 {{ClientProtocol}} methods {{addBlock()}} and {{complete()}} need to include an 
 additional parameter: a block, which has been successfully written to 
 data-nodes. By sending this block to the name-node the client confirms the 
 generation stamp of the block and its length. The block on the name-node 
 changes its state to committed and becomes complete as soon as one of the 
 finalized replicas is reported by data-nodes.
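The committed-to-complete transition described above can be sketched as a small state machine; the state names, fields, and matching rule here are illustrative, not the actual NameNode code.

```java
public class BlockState {
    enum State { UNDER_CONSTRUCTION, COMMITTED, COMPLETE }

    State state = State.UNDER_CONSTRUCTION;
    long committedLen = -1, committedGenStamp = -1;

    /** The client confirms length and generation stamp via addBlock()/complete(). */
    void commit(long len, long genStamp) {
        committedLen = len;
        committedGenStamp = genStamp;
        state = State.COMMITTED;
    }

    /** A data-node reports a finalized replica; a matching one completes the block. */
    void finalizedReplicaReported(long len, long genStamp) {
        if (state == State.COMMITTED && len == committedLen && genStamp == committedGenStamp) {
            state = State.COMPLETE;
        }
    }
}
```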

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-588) Fix TestFiDataTransferProtocol and TestAppend2 failures in append branch.

2009-09-02 Thread Konstantin Shvachko (JIRA)
Fix TestFiDataTransferProtocol and TestAppend2 failures in append branch.
-

 Key: HDFS-588
 URL: https://issues.apache.org/jira/browse/HDFS-588
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node, test
Affects Versions: Append Branch
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: Append Branch


This is to fix test failures introduced by HDFS-565.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-581) Introduce an iterator over blocks in the block report array.

2009-09-02 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12750595#action_12750595
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-581:
-

{quote}
ClientDatanodeProtocol
2. JavaDoc for getReplicaVisibleLength() is confusing. Could you please also 
make it 3 lines rather than 1.
{quote}
Why 3 lines?

{quote}
LocatedBlock
3. Does not need any of the changes.
{quote}
Could you explain more?

{quote}
LocatedBlocks
4. Here you do multiple field and method renames, combined with reformatting. I 
am lost.
{quote}
How can I help you?

{quote}
FSDatasetInterface
5. Why do you need to abstract getReplicaInfo()? It does not seem that 
SimulatedFSDataset actually needs it anywhere, at least not yet.
{quote}
FSDatasetInterface is an *interface*.  By definition, all methods in an 
interface must be abstract.

{quote}
BlockManager
6. You factored out a part of the code into a new method. I cannot see what the 
new changes are.
{quote}
The new method is invoked in FSNamesystem.  Could you take a look again?

{quote}
INodeFile
8. It is not necessary to remove public from method declaration and remove 
unused method.
{quote}
I remove the method because I see the following comment in the code.
{code}
// SHV !!! this is not used anywhere - remove
{code}
The comment was introduced by you in HDFS-517.  Could you explain what it 
means?

 Introduce an iterator over blocks in the block report array.
 

 Key: HDFS-581
 URL: https://issues.apache.org/jira/browse/HDFS-581
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.20.1
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: 0.21.0

 Attachments: BlockReportIterator.patch, BlockReportIterator.patch


 A block iterator will hide the internal implementation of the block report as 
 an array of three longs per block.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-570) When opening a file for read, make the file length available to client.

2009-09-02 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12750596#action_12750596
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-570:
-

{quote}
ClientDatanodeProtocol
2. JavaDoc for getReplicaVisibleLength() is confusing. Could you please also 
make it 3 lines rather than 1.
{quote}
Why 3 lines?

{quote}
LocatedBlock
3. Does not need any of the changes.
{quote}
Could you explain more?

{quote}
LocatedBlocks
4. Here you do multiple field and method renames, combined with reformatting. I 
am lost.
{quote}
How can I help you?

{quote}
FSDatasetInterface
5. Why do you need to abstract getReplicaInfo()? It does not seem that 
SimulatedFSDataset actually needs it anywhere, at least not yet.
{quote}
FSDatasetInterface is an *interface*.  By definition, all methods in an 
interface must be abstract.

{quote}
BlockManager
6. You factored out a part of the code into a new method. I cannot see what the 
new changes are.
{quote}
The new method is invoked in FSNamesystem.  Could you take a look again?

{quote}
INodeFile
8. It is not necessary to remove public from method declaration and remove 
unused method.
{quote}
I remove the method because I see the following comment in the code.
{code}
// SHV !!! this is not used anywhere - remove
{code}
The comment was introduced by you in HDFS-517.  Could you explain what it 
means?

 When opening a file for read, make the file length available to client.
 ---

 Key: HDFS-570
 URL: https://issues.apache.org/jira/browse/HDFS-570
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs client
Affects Versions: Append Branch
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: Append Branch

 Attachments: h570_20090828.patch


 In order to support read consistency, DFSClient needs the file length at the 
 file opening time.  In the current implementation, DFSClient obtains the file 
 length at the file opening time but the length is inaccurate if the file is 
 being written.
 For more details, see Section 4 in the [append design 
 doc|https://issues.apache.org/jira/secure/attachment/12415768/appendDesign2.pdf].

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-581) Introduce an iterator over blocks in the block report array.

2009-09-02 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12750597#action_12750597
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-581:
-

I have accidentally posted my HDFS-570 comment here.  Sorry.

 Introduce an iterator over blocks in the block report array.
 

 Key: HDFS-581
 URL: https://issues.apache.org/jira/browse/HDFS-581
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.20.1
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: 0.21.0

 Attachments: BlockReportIterator.patch, BlockReportIterator.patch


 A block iterator will hide the internal implementation of the block report as 
 an array of three longs per block.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-565) Introduce block committing logic during new block allocation and file close.

2009-09-02 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12750600#action_12750600
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-565:
-

{quote}
N Also, did you run ant test-patch?

Sure I did.
{quote}
Then, please post the results next time.  Thanks.

 Introduce block committing logic during new block allocation and file close.
 

 Key: HDFS-565
 URL: https://issues.apache.org/jira/browse/HDFS-565
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs client, name-node
Affects Versions: Append Branch
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: Append Branch

 Attachments: CommitBlock.patch, CommitBlock.patch, HDFS-565.aj.patch


 {{ClientProtocol}} methods {{addBlock()}} and {{complete()}} need to include an 
 additional parameter: a block, which has been successfully written to 
 data-nodes. By sending this block to the name-node the client confirms the 
 generation stamp of the block and its length. The block on the name-node 
 changes its state to committed and becomes complete as soon as one of the 
 finalized replicas is reported by data-nodes.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-589) Change block write protocol to support pipeline recovery

2009-09-02 Thread Hairong Kuang (JIRA)
Change block write protocol to support pipeline recovery


 Key: HDFS-589
 URL: https://issues.apache.org/jira/browse/HDFS-589
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: Append Branch
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: Append Branch


Current block write operation's header has the following fields:
blockId blockGS pipelineSize isRecovery clientName hasSource source 
#datanodesInDownStreamPipeline downstreamDatanodes
I'd like to change the header to be
blockId blockGS pipelineSize clientName  flags blockMinLen blockMaxLen newGS 
hasSource source #datanodesInDownStreamPipeline downstreamDatanodes

With this protocol change, pipeline recovery will be performed when a new 
pipeline is set up.
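For illustration, here is a round-trippable serialization of the proposed header fields in roughly the order listed. The field types, the idea of folding the old isRecovery flag into flag bits, and the omission of the source/downstream-datanode fields are all assumptions of this sketch, not the actual protocol change.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class WriteBlockHeader {
    long blockId, blockGS, newGS;
    int pipelineSize, flags;          // flags could fold in the old isRecovery bit
    long blockMinLen, blockMaxLen;
    String clientName = "";

    byte[] serialize() {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            out.writeLong(blockId);
            out.writeLong(blockGS);
            out.writeInt(pipelineSize);
            out.writeUTF(clientName);
            out.writeInt(flags);
            out.writeLong(blockMinLen);
            out.writeLong(blockMaxLen);
            out.writeLong(newGS);     // new generation stamp for pipeline recovery
            return bos.toByteArray();
        } catch (IOException e) {
            throw new AssertionError(e);   // cannot happen for in-memory streams
        }
    }

    static WriteBlockHeader parse(byte[] bytes) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
            WriteBlockHeader h = new WriteBlockHeader();
            h.blockId = in.readLong();
            h.blockGS = in.readLong();
            h.pipelineSize = in.readInt();
            h.clientName = in.readUTF();
            h.flags = in.readInt();
            h.blockMinLen = in.readLong();
            h.blockMaxLen = in.readLong();
            h.newGS = in.readLong();
            return h;
        } catch (IOException e) {
            throw new AssertionError(e);
        }
    }
}
```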


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-588) Fix TestFiDataTransferProtocol and TestAppend2 failures in append branch.

2009-09-02 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-588:
-

Attachment: HDFS-588.patch

I incorporated Konstantin's patch from HDFS-565 into this one, 
so this patch fixes both problems.

 Fix TestFiDataTransferProtocol and TestAppend2 failures in append branch.
 -

 Key: HDFS-588
 URL: https://issues.apache.org/jira/browse/HDFS-588
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node, test
Affects Versions: Append Branch
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: Append Branch

 Attachments: HDFS-588.patch


 This is to fix test failures introduced by HDFS-565.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-549) Allow non fault-inject specific tests execution with an explicit -Dtestcase=... setting

2009-09-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12750623#action_12750623
 ] 

Hudson commented on HDFS-549:
-

Integrated in Hadoop-Hdfs-trunk-Commit #12 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/12/])
. Add a new target, run-with-fault-inject-testcaseonly, which allows 
execution of non-FI tests in an FI-enabled environment.  Contributed by 
Konstantin Boudnik


 Allow non fault-inject specific tests execution with an explicit 
 -Dtestcase=... setting
 ---

 Key: HDFS-549
 URL: https://issues.apache.org/jira/browse/HDFS-549
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: build
Affects Versions: 0.21.0
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Fix For: 0.21.0

 Attachments: HDFS-549.patch, HDFS-549.patch, HDFS-549.patch


 It is currently impossible to run non fault-injection tests with a 
 fault-injected build, e.g.
 {noformat}
   ant run-test-hdfs-fault-inject -Dtestcase=TestFileCreation
 {noformat}
 because {{macro-test-runner}} looks for a specified test case only under the 
 {{src/test/aop}} folder when fault injection tests are run. This renders the 
 use of non fault-injection tests impossible in the FI environment.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-173) Recursively deleting a directory with millions of files makes NameNode unresponsive for other commands until the deletion completes

2009-09-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12750626#action_12750626
 ] 

Hadoop QA commented on HDFS-173:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12418406/HDFS-173.4.patch
  against trunk revision 810504.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/7/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/7/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/7/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/7/console

This message is automatically generated.

 Recursively deleting a directory with millions of files makes NameNode 
 unresponsive for other commands until the deletion completes
 ---

 Key: HDFS-173
 URL: https://issues.apache.org/jira/browse/HDFS-173
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 0.21.0

 Attachments: HDFS-173.1.patch, HDFS-173.2.patch, HDFS-173.3.patch, 
 HDFS-173.4.patch, HDFS-173.patch


 Delete a directory with millions of files. This could take several minutes 
 (observed 12 mins for 9 million files). While the operation is in progress 
 FSNamesystem lock is held and the requests from clients are not handled until 
 deletion completes.
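
One common mitigation for this kind of lock starvation (a sketch of the general technique, not necessarily the approach taken in the attached patches) is to perform the deletion in bounded batches, releasing the global lock between batches so other requests can be served:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: delete a huge set of entries in bounded batches, releasing a
// global lock between batches so other operations are not starved.
// Illustrative only; FSNamesystem's real locking is more involved.
public class BatchedDelete {
    static final int BATCH = 1000;

    public static int deleteAll(Deque<String> entries, ReentrantLock nsLock) {
        int deleted = 0;
        while (!entries.isEmpty()) {
            nsLock.lock();
            try {
                // Remove at most BATCH entries while holding the lock.
                for (int i = 0; i < BATCH && !entries.isEmpty(); i++) {
                    entries.pop();   // stand-in for removing an inode/block
                    deleted++;
                }
            } finally {
                nsLock.unlock();     // other RPCs can run between batches
            }
        }
        return deleted;
    }

    // Self-check: 5000 entries are fully deleted across multiple batches.
    public static boolean demo() {
        Deque<String> d = new ArrayDeque<>();
        for (int i = 0; i < 5000; i++) d.push("file-" + i);
        int n = deleteAll(d, new ReentrantLock());
        return n == 5000 && d.isEmpty();
    }
}
```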

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-173) Recursively deleting a directory with millions of files makes NameNode unresponsive for other commands until the deletion completes

2009-09-02 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12750631#action_12750631
 ] 

Suresh Srinivas commented on HDFS-173:
--

The failing test TestBlocksWithNotEnoughRacks is not related to this patch.

 Recursively deleting a directory with millions of files makes NameNode 
 unresponsive for other commands until the deletion completes
 ---

 Key: HDFS-173
 URL: https://issues.apache.org/jira/browse/HDFS-173
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 0.21.0

 Attachments: HDFS-173.1.patch, HDFS-173.2.patch, HDFS-173.3.patch, 
 HDFS-173.4.patch, HDFS-173.patch


 Delete a directory with millions of files. This could take several minutes 
 (observed 12 mins for 9 million files). While the operation is in progress 
 FSNamesystem lock is held and the requests from clients are not handled until 
 deletion completes.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-586) TestBlocksWithNotEnoughRacks fails

2009-09-02 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-586:
--

Attachment: HDFS-586.1.patch

 TestBlocksWithNotEnoughRacks fails
 --

 Key: HDFS-586
 URL: https://issues.apache.org/jira/browse/HDFS-586
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.21.0
Reporter: Hairong Kuang
Assignee: Jitendra Nath Pandey
 Fix For: 0.21.0

 Attachments: HDFS-586.1.patch


 TestBlocksWithNotEnoughRacks failed with the following error on my Mac laptop:
 {noformat}
 Testcase: testUnderReplicatedNotEnoughRacks took 33.209 sec
 FAILED
 null
 junit.framework.AssertionFailedError: null
 at 
 org.apache.hadoop.hdfs.server.namenode.TestBlocksWithNotEnoughRacks.testUnderReplicatedNotEnoughRacks(TestBlocksWithNotEnoughRacks.java:127)
 {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-586) TestBlocksWithNotEnoughRacks fails

2009-09-02 Thread Hairong Kuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12750654#action_12750654
 ] 

Hairong Kuang commented on HDFS-586:


1. Add a log message before the test goes to sleep.
2. Instead of using while(true), I would prefer to put a break condition there.
3. Better to use logging instead of System.out.print.
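
The suggestion in point 2 can be sketched as a bounded polling loop (hypothetical helper names, not code from the actual patch):

```java
// Sketch of a bounded polling loop for tests: poll a condition with a
// timeout instead of while(true), and report before each sleep.
public class BoundedWait {
    public interface Condition { boolean ready(); }

    public static boolean waitFor(Condition c, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (c.ready()) return true;
            // In a real test this would go through the logging framework.
            System.out.println("condition not met; sleeping " + pollMs + " ms");
            Thread.sleep(pollMs);
        }
        return c.ready();  // one final check at the deadline
    }

    // Self-check: the condition becomes true on the third poll.
    public static boolean demo() {
        final int[] calls = {0};
        try {
            return waitFor(() -> ++calls[0] >= 3, 2000L, 10L);
        } catch (InterruptedException e) {
            return false;
        }
    }
}
```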

 TestBlocksWithNotEnoughRacks fails
 --

 Key: HDFS-586
 URL: https://issues.apache.org/jira/browse/HDFS-586
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.21.0
Reporter: Hairong Kuang
Assignee: Jitendra Nath Pandey
 Fix For: 0.21.0

 Attachments: HDFS-586.1.patch


 TestBlocksWithNotEnoughRacks failed with the following error on my Mac laptop:
 {noformat}
 Testcase: testUnderReplicatedNotEnoughRacks took 33.209 sec
 FAILED
 null
 junit.framework.AssertionFailedError: null
 at 
 org.apache.hadoop.hdfs.server.namenode.TestBlocksWithNotEnoughRacks.testUnderReplicatedNotEnoughRacks(TestBlocksWithNotEnoughRacks.java:127)
 {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-570) When opening a file for read, make the file length avaliable to client.

2009-09-02 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12750657#action_12750657
 ] 

Konstantin Shvachko commented on HDFS-570:
--

 Could you explain more?

As I said before, you mix in your patch changes related to the current jira 
with unrelated refactoring of the code.
I listed most of the cases (1-8) that do not belong to the functionality you 
are implementing.
And which
# Obscure understanding of the new functionality you actually introduce.
# Make it hard to continue merging the trunk with the branch.

My proposal is to separate the implementation of the visible length from the 
refactoring of the code into 2 separate patches.
The refactoring should be applied then to the trunk and to the branch.
My personal preference is to postpone the refactoring until append is merged to 
the trunk.

 5. Why do you need to abstract getReplicaInfo()? It does not seem that 
 SimulatedFSDataset actually needs it anywhere, at least not yet.
 FSDatasetInterface is an interface. By definition, all methods in an 
 interface must be abstract.

getReplicaInfo() is currently a private method of FSDataset. You are adding it 
to FSDatasetInterface. Based on the usage of the method I don't see a need for 
that.

 When opening a file for read, make the file length avaliable to client.
 ---

 Key: HDFS-570
 URL: https://issues.apache.org/jira/browse/HDFS-570
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs client
Affects Versions: Append Branch
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: Append Branch

 Attachments: h570_20090828.patch


 In order to support read consistency, DFSClient needs the file length at the 
 file opening time.  In the current implementation, DFSClient obtains the file 
 length at the file opening time but the length is inaccurate if the file is 
 being written.
 For more details, see Section 4 in the [append design 
 doc|https://issues.apache.org/jira/secure/attachment/12415768/appendDesign2.pdf].

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-570) When opening a file for read, make the file length avaliable to client.

2009-09-02 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12750663#action_12750663
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-570:
-

{quote}
 Could you explain more?

As I said before, you mix in your patch changes related to the current jira 
with unrelated refactoring of the code.
I listed most of the cases (1-8) that do not belong to the functionality you 
are implementing.
And which

   1. Obscure understanding of the new functionality you actually introduce.
   2. Make it hard to continue merging the trunk with the branch.

My proposal is to separate the implementation of the visible length from the 
refactoring of the code into 2 separate patches.
The refactoring should be applied then to the trunk and to the branch.
My personal preference is to postpone the refactoring until append is merged to 
the trunk.
{quote}
There is no code refactoring in LocatedBlock except for the imports. What do 
you mean by "LocatedBlock: 3. Does not need any of the changes"?

{quote}
getReplicaInfo() is currently a private method of FSDataset. You are adding it 
to FSDatasetInterface. Based on the usage of the method I don't see a need for 
that.
{quote}
If you look at the patch again, you will find that getReplicaInfo(..) is called 
in DataNode.  It cannot be invoked if it is a private method of FSDataset.
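
The visibility point can be illustrated with a simplified sketch (toy types standing in for FSDatasetInterface, FSDataset, and DataNode; not the actual classes):

```java
// Sketch: a method invoked through an interface reference must be declared
// on the interface; a private method of the implementation would not be
// callable from outside it.
public class InterfaceVisibility {
    public interface Dataset {                   // stands in for FSDatasetInterface
        String getReplicaInfo(long blockId);     // must be declared here to be
    }                                            // callable by DataNode code

    public static class DiskDataset implements Dataset {  // stands in for FSDataset
        public String getReplicaInfo(long blockId) {
            return "replica-" + blockId;
        }
    }

    // Stands in for DataNode, which only holds a Dataset reference.
    public static String lookup(Dataset d, long blockId) {
        return d.getReplicaInfo(blockId);
    }
}
```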

 When opening a file for read, make the file length avaliable to client.
 ---

 Key: HDFS-570
 URL: https://issues.apache.org/jira/browse/HDFS-570
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs client
Affects Versions: Append Branch
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: Append Branch

 Attachments: h570_20090828.patch


 In order to support read consistency, DFSClient needs the file length at the 
 file opening time.  In the current implementation, DFSClient obtains the file 
 length at the file opening time but the length is inaccurate if the file is 
 being written.
 For more details, see Section 4 in the [append design 
 doc|https://issues.apache.org/jira/secure/attachment/12415768/appendDesign2.pdf].

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-588) Fix TestFiDataTransferProtocol and TestAppend2 failures in append branch.

2009-09-02 Thread Hairong Kuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12750665#action_12750665
 ] 

Hairong Kuang commented on HDFS-588:


+1 The complete block logic change looks good.

 Fix TestFiDataTransferProtocol and TestAppend2 failures in append branch.
 -

 Key: HDFS-588
 URL: https://issues.apache.org/jira/browse/HDFS-588
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node, test
Affects Versions: Append Branch
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: Append Branch

 Attachments: HDFS-588.patch


 This is to fix test failures introduced by HDFS-565.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-591) Namenode should mark the replica as corrupted if a DN has reported different length of a Complete block

2009-09-02 Thread Konstantin Boudnik (JIRA)
Namenode should mark the replica as corrupted if a DN has reported different 
length of a Complete block
---

 Key: HDFS-591
 URL: https://issues.apache.org/jira/browse/HDFS-591
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Reporter: Konstantin Boudnik


It has been found that the NameNode updates the length of a Complete block 
when it is reported by a DataNode. As demonstrated by the 
BlockReport.messWithBlocksLen() test ([see HDFS-551's 
patch|https://issues.apache.org/jira/secure/attachment/12418437/HDFS-551.patch]),
 when the length of blocks is changed and a block report is forced for the set 
of blocks, the NameNode updates these lengths in memory instead of marking 
such replicas as corrupted.

This seems to be wrong behavior.
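
The behavior this report argues for can be sketched as follows (illustrative names and simplified logic, not the actual NameNode code):

```java
// Sketch: when a block is already Complete, a report with a mismatched
// length should mark the replica corrupt rather than overwrite the
// stored length.
public class ReplicaLengthCheck {
    public enum Verdict { ACCEPTED, MARKED_CORRUPT }

    public static Verdict onBlockReport(boolean blockComplete,
                                        long storedLen, long reportedLen) {
        if (blockComplete && reportedLen != storedLen) {
            return Verdict.MARKED_CORRUPT;  // do NOT update the stored length
        }
        return Verdict.ACCEPTED;
    }
}
```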

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-551) Create new functional test for a block report.

2009-09-02 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-551:


Affects Version/s: Append Branch

 Create new functional test for a block report.
 --

 Key: HDFS-551
 URL: https://issues.apache.org/jira/browse/HDFS-551
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: Append Branch
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Attachments: BlockReportTestPlan.html, BlockReportTestPlan.html, 
 HDFS-551.patch, HDFS-551.patch, HDFS-551.patch, HDFS-551.patch, 
 HDFS-551.patch, HDFS-551.patch, HDFS-551.patch


 It turned out that there's no test for block report functionality. One 
 would be extremely valuable.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (HDFS-590) When trying to rename a non-existent path, LocalFileSystem throws an FileNotFoundException, while HDFS returns false

2009-09-02 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey reassigned HDFS-590:
-

Assignee: Jitendra Nath Pandey

 When trying to rename a non-existent path, LocalFileSystem throws an 
 FileNotFoundException, while HDFS returns false
 

 Key: HDFS-590
 URL: https://issues.apache.org/jira/browse/HDFS-590
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Fix For: 0.21.0


 HDFS should also throw FileNotFoundException instead of returning false.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HDFS-584) Fail the fault-inject build if any advices are mis-bound

2009-09-02 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik resolved HDFS-584.
-

Resolution: Won't Fix

It turns out that mis-bound warnings can't simply be turned into errors, 
because issuing warnings is the default behavior of AJC and I don't see a way 
to configure it differently.

The only possibility is to intercept the compiler output and parse it to find 
out whether any warnings were produced, aborting the build in that case. But 
that seems to be too much trouble.

Another possibility is to extend {{test-patch}} so it verifies that a new 
patch doesn't increase the number of AJC warnings, as it currently does for 
javac.
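
The intercept-and-parse idea could look something like this (the "[warning]" marker is an assumption for illustration, not AJC's exact output format):

```java
// Sketch: count warning lines in captured compiler output and decide
// whether the build should abort.
public class WarningGate {
    public static int countWarnings(String compilerOutput) {
        int n = 0;
        for (String line : compilerOutput.split("\n")) {
            if (line.contains("[warning]")) n++;
        }
        return n;
    }

    public static boolean shouldFailBuild(String compilerOutput) {
        return countWarnings(compilerOutput) > 0;
    }
}
```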

 Fail the fault-inject build if any advices are mis-bound
 

 Key: HDFS-584
 URL: https://issues.apache.org/jira/browse/HDFS-584
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik

 Whenever the AspectJ compiler can't bind an advice, it issues a warning 
 message, but the build process doesn't fail because of it.
 The build has to fail, though, because mis-bound advices lead to failing tests 
 that are hard to explain otherwise.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-590) When trying to rename a non-existent path, LocalFileSystem throws an FileNotFoundException, while HDFS returns false

2009-09-02 Thread Jitendra Nath Pandey (JIRA)
When trying to rename a non-existent path, LocalFileSystem throws an 
FileNotFoundException, while HDFS returns false


 Key: HDFS-590
 URL: https://issues.apache.org/jira/browse/HDFS-590
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jitendra Nath Pandey
 Fix For: 0.21.0


HDFS should also throw FileNotFoundException instead of returning false.
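
The proposed semantics can be sketched with a toy namespace (illustrative types, not the actual FileSystem API):

```java
import java.io.FileNotFoundException;
import java.util.HashSet;
import java.util.Set;

// Sketch: rename of a non-existent source throws FileNotFoundException
// instead of returning false, matching LocalFileSystem's behavior.
public class RenameSemantics {
    private final Set<String> paths = new HashSet<>();

    public void create(String path) { paths.add(path); }

    public void rename(String src, String dst) throws FileNotFoundException {
        if (!paths.remove(src)) {
            throw new FileNotFoundException("rename source not found: " + src);
        }
        paths.add(dst);
    }

    public boolean exists(String path) { return paths.contains(path); }

    // Self-check: missing source throws; existing source renames.
    public static boolean demo() {
        RenameSemantics fs = new RenameSemantics();
        fs.create("/a");
        try {
            fs.rename("/missing", "/b");
            return false;                      // should have thrown
        } catch (FileNotFoundException expected) {
            // expected path: source does not exist
        }
        try {
            fs.rename("/a", "/b");
        } catch (FileNotFoundException e) {
            return false;
        }
        return fs.exists("/b") && !fs.exists("/a");
    }
}
```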

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-551) Create new functional test for a block report.

2009-09-02 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-551:


Attachment: HDFS-551.patch

Hairong's comments are addressed. Thanks for the review.
Also, I'm going to open a new JIRA for the incorrect NN behavior and move this 
one into the append branch.

 Create new functional test for a block report.
 --

 Key: HDFS-551
 URL: https://issues.apache.org/jira/browse/HDFS-551
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Attachments: BlockReportTestPlan.html, BlockReportTestPlan.html, 
 HDFS-551.patch, HDFS-551.patch, HDFS-551.patch, HDFS-551.patch, 
 HDFS-551.patch, HDFS-551.patch, HDFS-551.patch


 It turned out that there's no test for block report functionality. One 
 would be extremely valuable.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-582) Create a fsckraid tool to verify the consistency of erasure codes for HDFS-503

2009-09-02 Thread dhruba borthakur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dhruba borthakur updated HDFS-582:
--

Tags: fb

 Create a fsckraid tool to verify the consistency of erasure codes for HDFS-503
 --

 Key: HDFS-582
 URL: https://issues.apache.org/jira/browse/HDFS-582
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Rodrigo Schmidt
Assignee: Rodrigo Schmidt

 HDFS-503 should also have a tool to test the consistency of the parity files 
 generated, so that data corruption can be detected and treated.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-588) Fix TestFiDataTransferProtocol and TestAppend2 failures in append branch.

2009-09-02 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12750747#action_12750747
 ] 

Konstantin Shvachko commented on HDFS-588:
--

Ran the complete test suite. No failures.
test-patch results are here
{code}
.
 [exec] There appear to be 150 release audit warnings before the patch and 
150 release audit warnings after applying the patch.
 [exec] +1 overall.  
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] +1 tests included.  The patch appears to include 3 new or 
modified tests.
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs 
warnings.
 [exec] +1 release audit.  The applied patch does not increase the 
total number of release audit warnings.
 [exec] 
==
 [exec] 
==
 [exec] Finished build.
 [exec] 
==
 [exec] 
==
BUILD SUCCESSFUL
{code}

 Fix TestFiDataTransferProtocol and TestAppend2 failures in append branch.
 -

 Key: HDFS-588
 URL: https://issues.apache.org/jira/browse/HDFS-588
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node, test
Affects Versions: Append Branch
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: Append Branch

 Attachments: HDFS-588.patch


 This is to fix test failures introduced by HDFS-565.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-586) TestBlocksWithNotEnoughRacks fails

2009-09-02 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12750754#action_12750754
 ] 

Jitendra Nath Pandey commented on HDFS-586:
---

 The granularity of dfs.replication.interval is 1 sec. Therefore, reducing the 
sleep period to a small value doesn't improve the run time of the test.

 TestBlocksWithNotEnoughRacks fails
 --

 Key: HDFS-586
 URL: https://issues.apache.org/jira/browse/HDFS-586
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.21.0
Reporter: Hairong Kuang
Assignee: Jitendra Nath Pandey
 Fix For: 0.21.0

 Attachments: HDFS-586.1.patch, HDFS-586.2.patch


 TestBlocksWithNotEnoughRacks failed with the following error on my Mac laptop:
 {noformat}
 Testcase: testUnderReplicatedNotEnoughRacks took 33.209 sec
 FAILED
 null
 junit.framework.AssertionFailedError: null
 at 
 org.apache.hadoop.hdfs.server.namenode.TestBlocksWithNotEnoughRacks.testUnderReplicatedNotEnoughRacks(TestBlocksWithNotEnoughRacks.java:127)
 {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-586) TestBlocksWithNotEnoughRacks fails

2009-09-02 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-586:
--

Status: Patch Available  (was: Open)

 TestBlocksWithNotEnoughRacks fails
 --

 Key: HDFS-586
 URL: https://issues.apache.org/jira/browse/HDFS-586
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.21.0
Reporter: Hairong Kuang
Assignee: Jitendra Nath Pandey
 Fix For: 0.21.0

 Attachments: HDFS-586.1.patch, HDFS-586.2.patch


 TestBlocksWithNotEnoughRacks failed with the following error on my Mac laptop:
 {noformat}
 Testcase: testUnderReplicatedNotEnoughRacks took 33.209 sec
 FAILED
 null
 junit.framework.AssertionFailedError: null
 at 
 org.apache.hadoop.hdfs.server.namenode.TestBlocksWithNotEnoughRacks.testUnderReplicatedNotEnoughRacks(TestBlocksWithNotEnoughRacks.java:127)
 {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-551) Create new functional test for a block report.

2009-09-02 Thread Hairong Kuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12750760#action_12750760
 ] 

Hairong Kuang commented on HDFS-551:


+1

 Create new functional test for a block report.
 --

 Key: HDFS-551
 URL: https://issues.apache.org/jira/browse/HDFS-551
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: Append Branch
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Attachments: BlockReportTestPlan.html, BlockReportTestPlan.html, 
 HDFS-551.patch, HDFS-551.patch, HDFS-551.patch, HDFS-551.patch, 
 HDFS-551.patch, HDFS-551.patch, HDFS-551.patch


 It turned out that there's no test for block report functionality. One 
 would be extremely valuable.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-551) Create new functional test for a block report.

2009-09-02 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-551:


Issue Type: Sub-task  (was: Test)
Parent: HDFS-265

 Create new functional test for a block report.
 --

 Key: HDFS-551
 URL: https://issues.apache.org/jira/browse/HDFS-551
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: Append Branch
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Attachments: BlockReportTestPlan.html, BlockReportTestPlan.html, 
 HDFS-551.patch, HDFS-551.patch, HDFS-551.patch, HDFS-551.patch, 
 HDFS-551.patch, HDFS-551.patch, HDFS-551.patch


 It turned out that there's no test for block report functionality. One 
 would be extremely valuable.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-578) Support for using server default values for blockSize and replication when creating a file

2009-09-02 Thread Kan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kan Zhang updated HDFS-578:
---

Status: Open  (was: Patch Available)

 Support for using server default values for blockSize and replication when 
 creating a file
 --

 Key: HDFS-578
 URL: https://issues.apache.org/jira/browse/HDFS-578
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client, name-node
Reporter: Kan Zhang
Assignee: Kan Zhang
 Attachments: h578-13.patch


 This is a sub-task of HADOOP-4952. This improvement makes it possible for a 
 client to specify that it wants to use the server default values for 
 blockSize and replication params when creating a file.
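
The improvement can be sketched with sentinel-based resolution on the server side (the sentinel value 0 and the default values here are assumptions for illustration):

```java
// Sketch: a sentinel value (0) in a create() request means "use the
// server's configured default" for block size or replication.
public class CreateDefaults {
    // Server-side configured defaults (illustrative values).
    static final long SERVER_BLOCK_SIZE = 64L * 1024 * 1024;
    static final short SERVER_REPLICATION = 3;

    public static long resolveBlockSize(long requested) {
        return requested == 0 ? SERVER_BLOCK_SIZE : requested;
    }

    public static short resolveReplication(short requested) {
        return requested == 0 ? SERVER_REPLICATION : requested;
    }
}
```

A client that passes 0 for both parameters gets whatever the cluster administrator configured, instead of hard-coding client-side defaults.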

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-578) Support for using server default values for blockSize and replication when creating a file

2009-09-02 Thread Kan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kan Zhang updated HDFS-578:
---

Attachment: h578-13.patch

 Support for using server default values for blockSize and replication when 
 creating a file
 --

 Key: HDFS-578
 URL: https://issues.apache.org/jira/browse/HDFS-578
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client, name-node
Reporter: Kan Zhang
Assignee: Kan Zhang
 Attachments: h578-13.patch


 This is a sub-task of HADOOP-4952. This improvement makes it possible for a 
 client to specify that it wants to use the server default values for 
 blockSize and replication params when creating a file.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-578) Support for using server default values for blockSize and replication when creating a file

2009-09-02 Thread Kan Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12750776#action_12750776
 ] 

Kan Zhang commented on HDFS-578:


I meant cannot.

 Support for using server default values for blockSize and replication when 
 creating a file
 --

 Key: HDFS-578
 URL: https://issues.apache.org/jira/browse/HDFS-578
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client, name-node
Reporter: Kan Zhang
Assignee: Kan Zhang
 Attachments: h578-13.patch


 This is a sub-task of HADOOP-4952. This improvement makes it possible for a 
 client to specify that it wants to use the server default values for 
 blockSize and replication params when creating a file.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-551) Create new functional test for a block report.

2009-09-02 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-551:


Attachment: HDFS-551.patch

A minor change to improve the code's readability.

 Create new functional test for a block report.
 --

 Key: HDFS-551
 URL: https://issues.apache.org/jira/browse/HDFS-551
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: Append Branch
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Attachments: BlockReportTestPlan.html, BlockReportTestPlan.html, 
 HDFS-551.patch, HDFS-551.patch, HDFS-551.patch, HDFS-551.patch, 
 HDFS-551.patch, HDFS-551.patch, HDFS-551.patch, HDFS-551.patch


 It turned out that there's no test for block report functionality. One 
 would be extremely valuable.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-235) Add support for byte-ranges to hftp

2009-09-02 Thread Bill Zeller (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Zeller updated HDFS-235:
-

Fix Version/s: 0.21.0
Affects Version/s: 0.21.0
 Release Note: Added support for byte ranges in HftpFileSystem and 
support for serving byte ranges of files in StreamFile.
   Status: Patch Available  (was: Open)

HsftpFileSystem has not been modified. 
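
Byte-range serving of the kind described here turns an HTTP `Range: bytes=start-end` header into file offsets. A minimal parsing sketch (single-range form only; the real StreamFile logic is more thorough):

```java
// Sketch: parse an HTTP byte-range header value into inclusive offsets,
// handling "bytes=start-end", "bytes=start-", and the suffix "bytes=-N".
public class ByteRange {
    public final long start, end;   // inclusive offsets

    ByteRange(long start, long end) { this.start = start; this.end = end; }

    public static ByteRange parse(String header, long fileLen) {
        if (header == null || !header.startsWith("bytes=")) return null;
        String[] parts = header.substring("bytes=".length()).split("-", -1);
        if (parts.length != 2) return null;
        if (parts[0].isEmpty()) {               // suffix form: bytes=-N
            long n = Long.parseLong(parts[1]);
            return new ByteRange(Math.max(0, fileLen - n), fileLen - 1);
        }
        long start = Long.parseLong(parts[0]);
        long end = parts[1].isEmpty() ? fileLen - 1 : Long.parseLong(parts[1]);
        return (start <= end && start < fileLen) ? new ByteRange(start, end) : null;
    }
}
```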

 Add support for byte-ranges to hftp
 ---

 Key: HDFS-235
 URL: https://issues.apache.org/jira/browse/HDFS-235
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 0.21.0
Reporter: Venkatesh S
Assignee: Bill Zeller
 Fix For: 0.21.0

 Attachments: hdfs-235-1.patch


 Support should be similar to http byte-serving.
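
HTTP byte-serving revolves around the {{Range}} request header (e.g.
{{Range: bytes=0-499}}). A minimal Python sketch, independent of Hadoop and
with an assumed helper name {{parse_byte_range}}, illustrates the range
semantics such support would need to honor:

```python
import re
from typing import Optional, Tuple

def parse_byte_range(header: str, length: int) -> Optional[Tuple[int, int]]:
    """Parse a single-range 'bytes=start-end' header against a resource of
    `length` bytes. Returns an inclusive (start, end) pair, or None if the
    header is malformed or unsatisfiable."""
    m = re.fullmatch(r"bytes=(\d*)-(\d*)", header.strip())
    if not m:
        return None
    start_s, end_s = m.groups()
    if start_s:
        start = int(start_s)
        # An open-ended range ("bytes=500-") runs to the last byte.
        end = int(end_s) if end_s else length - 1
    elif end_s:
        # A suffix range ("bytes=-200") means the last N bytes.
        start = max(0, length - int(end_s))
        end = length - 1
    else:
        return None
    if start > end or start >= length:
        return None
    # Clamp the end to the resource size, per HTTP range semantics.
    return start, min(end, length - 1)
```

A server would then respond with status 206 (Partial Content) and a
{{Content-Range}} header describing the returned slice.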

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-586) TestBlocksWithNotEnoughRacks fails

2009-09-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12750799#action_12750799
 ] 

Hadoop QA commented on HDFS-586:


+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12418451/HDFS-586.2.patch
  against trunk revision 810631.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/8/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/8/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/8/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/8/console

This message is automatically generated.

 TestBlocksWithNotEnoughRacks fails
 --

 Key: HDFS-586
 URL: https://issues.apache.org/jira/browse/HDFS-586
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.21.0
Reporter: Hairong Kuang
Assignee: Jitendra Nath Pandey
 Fix For: 0.21.0

 Attachments: HDFS-586.1.patch, HDFS-586.2.patch


 TestBlocksWithNotEnoughRacks failed with the following error on my Mac laptop:
 {noformat}
 Testcase: testUnderReplicatedNotEnoughRacks took 33.209 sec
 FAILED
 null
 junit.framework.AssertionFailedError: null
 at 
 org.apache.hadoop.hdfs.server.namenode.TestBlocksWithNotEnoughRacks.testUnderReplicatedNotEnoughRacks(TestBlocksWithNotEnoughRacks.java:127)
 {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-549) Allow non fault-inject specific tests execution with an explicit -Dtestcase=... setting

2009-09-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12750800#action_12750800
 ] 

Hudson commented on HDFS-549:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #8 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/8/])
. Add a new target, run-with-fault-inject-testcaseonly, which allows 
execution of non-FI tests in an FI-enabled environment.  Contributed by Konstantin 
Boudnik


 Allow non fault-inject specific tests execution with an explicit 
 -Dtestcase=... setting
 ---

 Key: HDFS-549
 URL: https://issues.apache.org/jira/browse/HDFS-549
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: build
Affects Versions: 0.21.0
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Fix For: 0.21.0

 Attachments: HDFS-549.patch, HDFS-549.patch, HDFS-549.patch


 It is currently impossible to run non-fault-injection tests with a 
 fault-injected build. E.g.
 {noformat}
   ant run-test-hdfs-fault-inject -Dtestcase=TestFileCreation
 {noformat}
 fails because {{macro-test-runner}} looks for the specified test case only 
 under the {{src/test/aop}} folder when fault-injection tests are run. This 
 renders non-fault-injection tests unusable in the FI environment.
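
Per the commit message integrated above, the fix adds a new Ant target for this
case. A hypothetical invocation (the target name is taken from the commit
message; the test case name is illustrative) would look like:

```shell
# Run a single non-fault-injection test case inside the
# fault-injection-enabled build (target added by HDFS-549):
ant run-with-fault-inject-testcaseonly -Dtestcase=TestFileCreation
```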

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HDFS-588) Fix TestFiDataTransferProtocol and TestAppend2 failures in append branch.

2009-09-02 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HDFS-588.
--

  Resolution: Fixed
Hadoop Flags: [Reviewed]

I just committed this.

 Fix TestFiDataTransferProtocol and TestAppend2 failures in append branch.
 -

 Key: HDFS-588
 URL: https://issues.apache.org/jira/browse/HDFS-588
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node, test
Affects Versions: Append Branch
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: Append Branch

 Attachments: HDFS-588.patch


 This is to fix the test failures introduced by HDFS-565.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-235) Add support for byte-ranges to hftp

2009-09-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12750821#action_12750821
 ] 

Hadoop QA commented on HDFS-235:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12418461/hdfs-235-1.patch
  against trunk revision 810631.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 7 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

-1 javac.  The applied patch generated 98 javac compiler warnings (more 
than the trunk's current 95 warnings).

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/2/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/2/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/2/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/2/console

This message is automatically generated.

 Add support for byte-ranges to hftp
 ---

 Key: HDFS-235
 URL: https://issues.apache.org/jira/browse/HDFS-235
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 0.21.0
Reporter: Venkatesh S
Assignee: Bill Zeller
 Fix For: 0.21.0

 Attachments: hdfs-235-1.patch


 Support should be similar to http byte-serving.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-551) Create new functional test for a block report.

2009-09-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12750832#action_12750832
 ] 

Hadoop QA commented on HDFS-551:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12418462/HDFS-551.patch
  against trunk revision 810631.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/9/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/9/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/9/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/9/console

This message is automatically generated.

 Create new functional test for a block report.
 --

 Key: HDFS-551
 URL: https://issues.apache.org/jira/browse/HDFS-551
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 0.21.0
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik
 Attachments: BlockReportTestPlan.html, BlockReportTestPlan.html, 
 HDFS-551.patch, HDFS-551.patch, HDFS-551.patch, HDFS-551.patch, 
 HDFS-551.patch, HDFS-551.patch, HDFS-551.patch, HDFS-551.patch


 It turned out that there is no test for the block report functionality. One 
 would be extremely valuable.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.