[jira] Commented: (HDFS-669) Add unit tests

2009-11-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12776847#action_12776847
 ] 

Hadoop QA commented on HDFS-669:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12424661/HDFS-669.patch
  against trunk revision 835179.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 6 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

-1 javac.  The applied patch generated 22 javac compiler warnings (more 
than the trunk's current 20 warnings).

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/71/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/71/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/71/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/71/console

This message is automatically generated.

> Add unit tests 
> ---
>
> Key: HDFS-669
> URL: https://issues.apache.org/jira/browse/HDFS-669
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.21.0, 0.22.0
>Reporter: Eli Collins
>Assignee: Konstantin Boudnik
> Attachments: HDFS-669.patch, HDFS-669.patch, HDFS-669.patch, 
> HDFS-669.patch, HDFS-669.patch, HDFS-669.patch, HDFS669.patch
>
>
> Most HDFS tests are functional tests that exercise a feature end to end by 
> running a mini cluster. We should add more tests like TestReplication that 
> stress individual classes in isolation, i.e., by stubbing out dependencies 
> without running a mini cluster. This allows for finer-grained testing and 
> makes tests run much more quickly, because they avoid the cost of cluster 
> setup and teardown. If it makes sense to use another framework besides 
> JUnit, we should standardize with MAPREDUCE-1050.
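
As a concrete illustration of the kind of test being proposed, here is a 
minimal sketch of a JUnit/Mockito test that stubs a dependency instead of 
starting a mini cluster. The ClusterView and ReplicationChooser names are 
invented for the example and are not HDFS classes.

{code}
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class TestPlacementInIsolation {
  /** Hypothetical collaborator that would normally require a running cluster. */
  interface ClusterView {
    int liveDataNodes();
  }

  /** Hypothetical class under test: caps replication by the number of live nodes. */
  static class ReplicationChooser {
    private final ClusterView view;
    ReplicationChooser(ClusterView view) { this.view = view; }
    int chooseReplication(int requested) {
      return Math.min(requested, view.liveDataNodes());
    }
  }

  @Test
  public void replicationIsCappedByLiveNodes() {
    // Stub the cluster view instead of spinning up a mini cluster.
    ClusterView view = mock(ClusterView.class);
    when(view.liveDataNodes()).thenReturn(2);

    assertEquals(2, new ReplicationChooser(view).chooseReplication(3));
  }
}
{code}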

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-755) Read multiple checksum chunks at once in DFSInputStream

2009-11-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12776826#action_12776826
 ] 

Hadoop QA commented on HDFS-755:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12424688/hdfs-755.txt
  against trunk revision 835179.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/105/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/105/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/105/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/105/console

This message is automatically generated.

> Read multiple checksum chunks at once in DFSInputStream
> ---
>
> Key: HDFS-755
> URL: https://issues.apache.org/jira/browse/HDFS-755
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-755.txt, hdfs-755.txt, hdfs-755.txt
>
>
> HADOOP-3205 adds the ability for FSInputChecker subclasses to read multiple 
> checksum chunks in a single call to readChunk. This is the HDFS-side use of 
> that new feature.
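
For readers unfamiliar with checksum chunks, the sketch below shows, in 
isolation, what verifying a buffer that spans several chunks in one pass looks 
like. The 512-byte chunk size and the use of CRC32 are assumptions for 
illustration; this is not the FSInputChecker/DFSInputStream code itself.

{code}
import java.util.zip.CRC32;

public class MultiChunkVerifier {
  private static final int BYTES_PER_CHUNK = 512;   // assumed checksum chunk size

  /**
   * Verify a buffer covering several whole checksum chunks in one pass,
   * instead of issuing a separate verification call per 512-byte chunk.
   */
  static void verifyChunks(byte[] data, int len, long[] expectedCrcs) {
    int chunk = 0;
    for (int off = 0; off < len; off += BYTES_PER_CHUNK, chunk++) {
      int n = Math.min(BYTES_PER_CHUNK, len - off);
      CRC32 crc = new CRC32();
      crc.update(data, off, n);
      if (crc.getValue() != expectedCrcs[chunk]) {
        throw new IllegalStateException("Checksum error in chunk " + chunk);
      }
    }
  }
}
{code}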

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-669) Add unit tests

2009-11-11 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12776820#action_12776820
 ] 

Jakob Homan commented on HDFS-669:
--

Cool, Cos.  I'll look at it first thing in the morning.

> Add unit tests 
> ---
>
> Key: HDFS-669
> URL: https://issues.apache.org/jira/browse/HDFS-669
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.21.0, 0.22.0
>Reporter: Eli Collins
>Assignee: Konstantin Boudnik
> Attachments: HDFS-669.patch, HDFS-669.patch, HDFS-669.patch, 
> HDFS-669.patch, HDFS-669.patch, HDFS-669.patch, HDFS669.patch
>
>
> Most HDFS tests are functional tests that exercise a feature end to end by 
> running a mini cluster. We should add more tests like TestReplication that 
> stress individual classes in isolation, i.e., by stubbing out dependencies 
> without running a mini cluster. This allows for finer-grained testing and 
> makes tests run much more quickly, because they avoid the cost of cluster 
> setup and teardown. If it makes sense to use another framework besides 
> JUnit, we should standardize with MAPREDUCE-1050.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-669) Add unit tests

2009-11-11 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-669:


Affects Version/s: 0.22.0
   0.21.0
   Status: Patch Available  (was: Open)

Ready for verification.

> Add unit tests 
> ---
>
> Key: HDFS-669
> URL: https://issues.apache.org/jira/browse/HDFS-669
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.21.0, 0.22.0
>Reporter: Eli Collins
>Assignee: Konstantin Boudnik
> Attachments: HDFS-669.patch, HDFS-669.patch, HDFS-669.patch, 
> HDFS-669.patch, HDFS-669.patch, HDFS-669.patch, HDFS669.patch
>
>
> Most HDFS tests are functional tests that exercise a feature end to end by 
> running a mini cluster. We should add more tests like TestReplication that 
> stress individual classes in isolation, i.e., by stubbing out dependencies 
> without running a mini cluster. This allows for finer-grained testing and 
> makes tests run much more quickly, because they avoid the cost of cluster 
> setup and teardown. If it makes sense to use another framework besides 
> JUnit, we should standardize with MAPREDUCE-1050.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-630) In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes when locating the next block.

2009-11-11 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12776791#action_12776791
 ] 

stack commented on HDFS-630:


Cosmin:  I applied your patch but it seems to bring on an issue where I get 
"java.io.IOException: Cannot complete block: block has not been COMMITTED by 
the client" closing a log file. See the hdfs-user mailing list.  Grep for 
message subject: "Cannot complete block: block has not been COMMITTED by the 
client".  Do you see this?  Thanks.

> In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific 
> datanodes when locating the next block.
> ---
>
> Key: HDFS-630
> URL: https://issues.apache.org/jira/browse/HDFS-630
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs client
>Affects Versions: 0.21.0
>Reporter: Ruyue Ma
>Assignee: Ruyue Ma
>Priority: Minor
> Fix For: 0.21.0
>
> Attachments: 0001-Fix-HDFS-630-for-0.21.patch, HDFS-630.patch
>
>
> Created from HDFS-200.
> If, during a write, the DFSClient sees that a block replica location for a 
> newly allocated block is not connectable, it re-requests the NN to get a 
> fresh set of replica locations for the block. It tries this 
> dfs.client.block.write.retries times (default 3), sleeping 6 seconds between 
> each retry (see DFSClient.nextBlockOutputStream).
> This setting works well when you have a reasonably sized cluster; if you have 
> only a few datanodes in the cluster, every retry may pick the dead datanode 
> and the above logic bails out.
> Our solution: when getting block locations from the namenode, we give the NN 
> the excluded datanodes. The list of dead datanodes applies only to one block 
> allocation.
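
A rough sketch of the proposed retry loop follows. The Namenode and 
PipelineFactory interfaces are hypothetical stand-ins (the real logic lives in 
DFSClient.nextBlockOutputStream, and the actual addBlock signature is not 
reproduced here); the point is that the excluded list is scoped to a single 
block allocation.

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class BlockAllocationRetry {
  interface Namenode {                    // hypothetical stand-in for ClientProtocol
    String[] addBlock(String file, List<String> excludedNodes) throws IOException;
  }
  interface PipelineFactory {             // hypothetical: tries to connect to the targets
    boolean connect(String[] targets, List<String> badNodes);
  }

  static String[] allocate(Namenode nn, PipelineFactory pipeline, String file,
      int retries) throws IOException {
    List<String> excluded = new ArrayList<>();   // dead nodes, scoped to this block only
    for (int attempt = 0; attempt <= retries; attempt++) {
      String[] targets = nn.addBlock(file, excluded);
      List<String> bad = new ArrayList<>();
      if (pipeline.connect(targets, bad)) {
        return targets;                          // pipeline established
      }
      excluded.addAll(bad);                      // tell the NN what to avoid next time
    }
    throw new IOException("Unable to build a block pipeline after " + retries + " retries");
  }
}
{code}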

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-755) Read multiple checksum chunks at once in DFSInputStream

2009-11-11 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-755:
-

Status: Patch Available  (was: Open)

> Read multiple checksum chunks at once in DFSInputStream
> ---
>
> Key: HDFS-755
> URL: https://issues.apache.org/jira/browse/HDFS-755
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-755.txt, hdfs-755.txt, hdfs-755.txt
>
>
> HADOOP-3205 adds the ability for FSInputChecker subclasses to read multiple 
> checksum chunks in a single call to readChunk. This is the HDFS-side use of 
> that new feature.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-755) Read multiple checksum chunks at once in DFSInputStream

2009-11-11 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-755:
-

Status: Open  (was: Patch Available)

Evidently I was compiling against a patched jar before (still getting used to 
the new mvn-based jar rigmarole)... uploading new patch momentarily to break 
the compile-time dependency on HADOOP-3205.

> Read multiple checksum chunks at once in DFSInputStream
> ---
>
> Key: HDFS-755
> URL: https://issues.apache.org/jira/browse/HDFS-755
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-755.txt, hdfs-755.txt, hdfs-755.txt
>
>
> HADOOP-3205 adds the ability for FSInputChecker subclasses to read multiple 
> checksum chunks in a single call to readChunk. This is the HDFS-side use of 
> that new feature.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-755) Read multiple checksum chunks at once in DFSInputStream

2009-11-11 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-755:
-

Attachment: hdfs-755.txt

> Read multiple checksum chunks at once in DFSInputStream
> ---
>
> Key: HDFS-755
> URL: https://issues.apache.org/jira/browse/HDFS-755
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-755.txt, hdfs-755.txt, hdfs-755.txt
>
>
> HADOOP-3205 adds the ability for FSInputChecker subclasses to read multiple 
> checksum chunks in a single call to readChunk. This is the HDFS-side use of 
> that new feature.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-761) Failure to process rename operation from edits log due to quota verification

2009-11-11 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12776779#action_12776779
 ] 

Suresh Srinivas commented on HDFS-761:
--

committed the patch to branch 0.20.

> Failure to process rename operation from edits log due to quota verification
> 
>
> Key: HDFS-761
> URL: https://issues.apache.org/jira/browse/HDFS-761
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.20.2, 0.21.0, 0.22.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Fix For: 0.20.2, 0.21.0, 0.22.0
>
> Attachments: hdfs-761.1.patch, hdfs-761.1.patch, 
> hdfs-761.1.rel20.patch, hdfs-761.patch, hdfs-761.rel20.patch, 
> hdfs-761.rel21.patch
>
>
> When processing the edits log, quota verification is not done and the used 
> quota for directories is not updated; the update is done at the end of 
> processing the edits log. This rule is broken by a change introduced in 
> HDFS-677, which prevents the namenode from handling a rename operation from 
> the edits log because of a quota verification failure. Once this happens, the 
> namenode does not process the edits log any further. This results in 
> checkpoint failure on the backup node or secondary namenode, and it also 
> prevents the namenode from coming up.
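
To make the invariant described above concrete, here is a small hypothetical 
sketch of the pattern: quota limits are not enforced while edits are being 
replayed, and usage is simply tracked so it can be settled once replay 
finishes. None of these names correspond to the actual FSDirectory/FSEditLog 
code.

{code}
public class EditsReplaySketch {
  static class Dir {
    final long quota;
    long used;
    Dir(long quota) { this.quota = quota; }
  }

  /** Apply one logged namespace change; quotas are enforced only outside replay. */
  static void addSpace(Dir dir, long delta, boolean loadingEdits) {
    if (!loadingEdits && dir.used + delta > dir.quota) {
      throw new IllegalStateException("quota exceeded");
    }
    dir.used += delta;        // during replay, just track the change
  }

  public static void main(String[] args) {
    Dir d = new Dir(10);
    addSpace(d, 15, true);    // accepted during replay even though it exceeds the quota
    System.out.println("used after replay = " + d.used);
  }
}
{code}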

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-754) Reduce ivy console output to observable level

2009-11-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12776778#action_12776778
 ] 

Hudson commented on HDFS-754:
-

Integrated in Hadoop-Hdfs-trunk-Commit #106 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/106/])
. Reduce ivy console output to observable level. Contributed by Konstantin 
Boudnik


> Reduce ivy console output to observable level
> -
>
> Key: HDFS-754
> URL: https://issues.apache.org/jira/browse/HDFS-754
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: HDFS-754.patch
>
>
> It is very hard to see what's going on in the build because Ivy literally 
> floods the console with nonsensical messages...

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-724) Pipeline close hangs if one of the datanodes is not responsive.

2009-11-11 Thread Hairong Kuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12776777#action_12776777
 ] 

Hairong Kuang commented on HDFS-724:


I am quite torn about whether a heartbeat should be 
1. a regular empty packet, handled exactly the same as a regular data 
packet; or
2. a special empty packet with a seq# of -1, treated differently from a 
regular packet. For example, it does not get added to the packet queue on 
either the client or the datanode side.

Solution 1 is much simpler than solution 2. But is there any side effect?
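
To make the two options concrete, here is a hypothetical sketch of option 2: 
the heartbeat carries a sentinel sequence number and is filtered out before it 
reaches the ack queue, whereas option 1 would enqueue it like any other packet. 
The constant and class names are assumptions, not the actual DFSClient/DataNode 
code.

{code}
import java.util.ArrayDeque;
import java.util.Deque;

public class HeartbeatPacketSketch {
  static final long HEARTBEAT_SEQNO = -1L;     // assumed sentinel for option 2

  static class Packet {
    final long seqno;
    Packet(long seqno) { this.seqno = seqno; }
    boolean isHeartbeat() { return seqno == HEARTBEAT_SEQNO; }
  }

  private final Deque<Packet> ackQueue = new ArrayDeque<>();

  /** Option 2: heartbeats are written to the pipeline but never queued for acks. */
  void send(Packet p) {
    if (!p.isHeartbeat()) {
      ackQueue.addLast(p);    // option 1 would enqueue heartbeats here as well
    }
    // ... write the packet to the pipeline in both cases ...
  }

  int pendingAcks() { return ackQueue.size(); }
}
{code}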

> Pipeline close hangs if one of the datanodes is not responsive.
> --
>
> Key: HDFS-724
> URL: https://issues.apache.org/jira/browse/HDFS-724
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node, hdfs client
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Hairong Kuang
> Attachments: h724_20091021.patch
>
>
> In the new pipeline design, pipeline close is implemented by sending an 
> additional empty packet.  If one of the datanodes does not respond to this 
> empty packet, the pipeline hangs.  It seems that there is no timeout.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-754) Reduce ivy console output to observable level

2009-11-11 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-754:


   Resolution: Fixed
Fix Version/s: 0.22.0
 Hadoop Flags: [Reviewed]
   Status: Resolved  (was: Patch Available)

I've just committed it.

> Reduce ivy console output to observable level
> -
>
> Key: HDFS-754
> URL: https://issues.apache.org/jira/browse/HDFS-754
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: HDFS-754.patch
>
>
> It is very hard to see what's going on in the build because Ivy literally 
> floods the console with nonsensical messages...

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-761) Failure to process rename operation from edits log due to quota verification

2009-11-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-761:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

The test failure related to TestBlockReport is unrelated to this change. I 
committed this change to trunk and branch 0.21.

> Failure to process rename operation from edits log due to quota verification
> 
>
> Key: HDFS-761
> URL: https://issues.apache.org/jira/browse/HDFS-761
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.20.2, 0.21.0, 0.22.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Fix For: 0.20.2, 0.21.0, 0.22.0
>
> Attachments: hdfs-761.1.patch, hdfs-761.1.patch, 
> hdfs-761.1.rel20.patch, hdfs-761.patch, hdfs-761.rel20.patch, 
> hdfs-761.rel21.patch
>
>
> When processing the edits log, quota verification is not done and the used 
> quota for directories is not updated; the update is done at the end of 
> processing the edits log. This rule is broken by a change introduced in 
> HDFS-677, which prevents the namenode from handling a rename operation from 
> the edits log because of a quota verification failure. Once this happens, the 
> namenode does not process the edits log any further. This results in 
> checkpoint failure on the backup node or secondary namenode, and it also 
> prevents the namenode from coming up.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-761) Failure to process rename operation from edits log due to quota verification

2009-11-11 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12776755#action_12776755
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-761:
-

+1 the 0.21 patch looks good.

> Failure to process rename operation from edits log due to quota verification
> 
>
> Key: HDFS-761
> URL: https://issues.apache.org/jira/browse/HDFS-761
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.20.2, 0.21.0, 0.22.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Fix For: 0.20.2, 0.21.0, 0.22.0
>
> Attachments: hdfs-761.1.patch, hdfs-761.1.patch, 
> hdfs-761.1.rel20.patch, hdfs-761.patch, hdfs-761.rel20.patch, 
> hdfs-761.rel21.patch
>
>
> When processing the edits log, quota verification is not done and the used 
> quota for directories is not updated; the update is done at the end of 
> processing the edits log. This rule is broken by a change introduced in 
> HDFS-677, which prevents the namenode from handling a rename operation from 
> the edits log because of a quota verification failure. Once this happens, the 
> namenode does not process the edits log any further. This results in 
> checkpoint failure on the backup node or secondary namenode, and it also 
> prevents the namenode from coming up.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-761) Failure to process rename operation from edits log due to quota verification

2009-11-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-761:
-

Attachment: hdfs-761.rel21.patch

Attaching a patch for branch 0.21

> Failure to process rename operation from edits log due to quota verification
> 
>
> Key: HDFS-761
> URL: https://issues.apache.org/jira/browse/HDFS-761
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.20.2, 0.21.0, 0.22.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Fix For: 0.20.2, 0.21.0, 0.22.0
>
> Attachments: hdfs-761.1.patch, hdfs-761.1.patch, 
> hdfs-761.1.rel20.patch, hdfs-761.patch, hdfs-761.rel20.patch, 
> hdfs-761.rel21.patch
>
>
> When processing the edits log, quota verification is not done and the used 
> quota for directories is not updated; the update is done at the end of 
> processing the edits log. This rule is broken by a change introduced in 
> HDFS-677, which prevents the namenode from handling a rename operation from 
> the edits log because of a quota verification failure. Once this happens, the 
> namenode does not process the edits log any further. This results in 
> checkpoint failure on the backup node or secondary namenode, and it also 
> prevents the namenode from coming up.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-457) better handling of volume failure in Data Node storage

2009-11-11 Thread Erik Steffl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Steffl updated HDFS-457:
-

Attachment: jira.HDFS-457.branch-0.20-internal.patch

> better handling of volume failure in Data Node storage
> --
>
> Key: HDFS-457
> URL: https://issues.apache.org/jira/browse/HDFS-457
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Boris Shkolnik
>Assignee: Boris Shkolnik
> Fix For: 0.21.0
>
> Attachments: HDFS-457-1.patch, HDFS-457-2.patch, HDFS-457-2.patch, 
> HDFS-457-2.patch, HDFS-457-3.patch, HDFS-457.patch, 
> jira.HDFS-457.branch-0.20-internal.patch, TestFsck.zip
>
>
> The current implementation shuts the DataNode down completely when one of the 
> configured storage volumes fails.
> This is rather wasteful behavior, because it decreases utilization (good 
> storage becomes unavailable) and imposes extra load on the system 
> (replication of the blocks from the good volumes). These problems will become 
> even more prominent when we move to mixed (heterogeneous) clusters with many 
> more volumes per Data Node.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-755) Read multiple checksum chunks at once in DFSInputStream

2009-11-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12776730#action_12776730
 ] 

Hadoop QA commented on HDFS-755:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12424669/hdfs-755.txt
  against trunk revision 835110.

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

-1 javac.  The patch appears to cause tar ant target to fail.

-1 findbugs.  The patch appears to cause Findbugs to fail.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

-1 contrib tests.  The patch failed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/70/testReport/
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/70/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/70/console

This message is automatically generated.

> Read multiple checksum chunks at once in DFSInputStream
> ---
>
> Key: HDFS-755
> URL: https://issues.apache.org/jira/browse/HDFS-755
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-755.txt, hdfs-755.txt
>
>
> HADOOP-3205 adds the ability for FSInputChecker subclasses to read multiple 
> checksum chunks in a single call to readChunk. This is the HDFS-side use of 
> that new feature.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-761) Failure to process rename operation from edits log due to quota verification

2009-11-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12776728#action_12776728
 ] 

Hudson commented on HDFS-761:
-

Integrated in Hadoop-Hdfs-trunk-Commit #105 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/105/])
. Fix failure to process rename operation from edits log due to quota 
verification. Contributed by Suresh Srinivas.


> Failure to process rename operation from edits log due to quota verification
> 
>
> Key: HDFS-761
> URL: https://issues.apache.org/jira/browse/HDFS-761
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.20.2, 0.21.0, 0.22.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Fix For: 0.20.2, 0.21.0, 0.22.0
>
> Attachments: hdfs-761.1.patch, hdfs-761.1.patch, 
> hdfs-761.1.rel20.patch, hdfs-761.patch, hdfs-761.rel20.patch
>
>
> When processing the edits log, quota verification is not done and the used 
> quota for directories is not updated; the update is done at the end of 
> processing the edits log. This rule is broken by a change introduced in 
> HDFS-677, which prevents the namenode from handling a rename operation from 
> the edits log because of a quota verification failure. Once this happens, the 
> namenode does not process the edits log any further. This results in 
> checkpoint failure on the backup node or secondary namenode, and it also 
> prevents the namenode from coming up.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-757) Unit tests failure for RAID

2009-11-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12776725#action_12776725
 ] 

Hudson commented on HDFS-757:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #104 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/104/])


> Unit tests failure for RAID
> ---
>
> Key: HDFS-757
> URL: https://issues.apache.org/jira/browse/HDFS-757
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: contrib/raid
>Reporter: dhruba borthakur
>Assignee: dhruba borthakur
> Fix For: 0.22.0
>
> Attachments: compilationraid.txt, compilationraid.txt
>
>
> The unit tests for RaidNode were broken  after the patch for HADOOP-5107 was 
> checked in. I will provide a patch shortly.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-611) Heartbeat times from Datanodes increase when there are plenty of blocks to delete

2009-11-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12776726#action_12776726
 ] 

Hudson commented on HDFS-611:
-

Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #104 (See 
[http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/104/])


> Heartbeat times from Datanodes increase when there are plenty of blocks to 
> delete
> --
>
> Key: HDFS-611
> URL: https://issues.apache.org/jira/browse/HDFS-611
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.20.1, 0.21.0, 0.22.0
>Reporter: dhruba borthakur
>Assignee: Zheng Shao
> Fix For: 0.22.0
>
> Attachments: HDFS-611.branch-19.patch, HDFS-611.branch-19.v2.patch, 
> HDFS-611.branch-20.patch, HDFS-611.branch-20.v2.patch, 
> HDFS-611.branch-20.v6.patch, HDFS-611.trunk.patch, HDFS-611.trunk.v2.patch, 
> HDFS-611.trunk.v3.patch, HDFS-611.trunk.v4.patch, HDFS-611.trunk.v5.patch, 
> HDFS-611.trunk.v6.patch
>
>
> I am seeing that when we delete a large directory that has plenty of blocks, 
> the heartbeat times from datanodes increase significantly, from the normal 
> value of 3 seconds to as much as 50 seconds or so. The heartbeat thread in 
> the Datanode deletes a bunch of blocks sequentially, which causes the 
> heartbeat times to increase.
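
Since the description pins the delay on deleting blocks inline in the heartbeat 
path, here is a hedged sketch of the obvious alternative: hand the deletions to 
a background executor so the heartbeat returns quickly. The executor 
arrangement is illustrative only and is not the patch's actual implementation.

{code}
import java.io.File;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncBlockDeleter {
  // A single background thread is enough for the sketch; a real fix could size this per volume.
  private final ExecutorService deleter = Executors.newSingleThreadExecutor();

  /** Called from the heartbeat path: schedule the deletion and return immediately. */
  public void deleteAsync(File blockFile, File metaFile) {
    deleter.execute(() -> {
      blockFile.delete();
      metaFile.delete();
    });
  }

  public void shutdown() {
    deleter.shutdown();
  }
}
{code}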

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-764) Moving Access Token implementation from Common to HDFS

2009-11-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12776724#action_12776724
 ] 

Hadoop QA commented on HDFS-764:


+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12424648/764-01.patch
  against trunk revision 834377.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 21 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/104/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/104/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/104/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/104/console

This message is automatically generated.

> Moving Access Token implementation from Common to HDFS
> --
>
> Key: HDFS-764
> URL: https://issues.apache.org/jira/browse/HDFS-764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.21.0
>Reporter: Kan Zhang
>Assignee: Kan Zhang
> Fix For: 0.21.0
>
> Attachments: 764-01.patch
>
>
> This is the HDFS changes of HADOOP-6367.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-755) Read multiple checksum chunks at once in DFSInputStream

2009-11-11 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-755:
-

Status: Patch Available  (was: Open)

> Read multiple checksum chunks at once in DFSInputStream
> ---
>
> Key: HDFS-755
> URL: https://issues.apache.org/jira/browse/HDFS-755
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-755.txt, hdfs-755.txt
>
>
> HADOOP-3205 adds the ability for FSInputChecker subclasses to read multiple 
> checksum chunks in a single call to readChunk. This is the HDFS-side use of 
> that new feature.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-755) Read multiple checksum chunks at once in DFSInputStream

2009-11-11 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-755:
-

Attachment: hdfs-755.txt

Here's an updated patch which fixes some behavior when running against an 
unpatched Common. If Common includes HADOOP-3205, it will be faster, and if it 
doesn't include HADOOP-3205, it should still work at the old speed.

I also ran some more benchmarks over lunch, running "fs -cat bigfile bigfile 
bigfile ...20 times..." repeatedly with and without the patch. This differs 
from my previous benchmark in that each JVM runs for a good 40-50 seconds - 
enough time to fully JIT the code, etc. The patch is about a 3.4% speedup 
compared to trunk for these long reads as well (at 95% significance level).

> Read multiple checksum chunks at once in DFSInputStream
> ---
>
> Key: HDFS-755
> URL: https://issues.apache.org/jira/browse/HDFS-755
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-755.txt, hdfs-755.txt
>
>
> HADOOP-3205 adds the ability for FSInputChecker subclasses to read multiple 
> checksum chunks in a single call to readChunk. This is the HDFS-side use of 
> that new feature.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-94) The "Heap Size" in HDFS web ui may not be accurate

2009-11-11 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12776711#action_12776711
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-94:


> I wonder why it shows a total of 17.78GB instead of 20GB

Would it be the case that you have hit some limit?  The following is quoted 
from [java man 
page|http://java.sun.com/javase/6/docs/technotes/tools/solaris/java.html]:

{quote}
On Solaris 7 and Solaris 8 SPARC platforms, the upper limit for this value is 
approximately 4000m minus overhead amounts. On Solaris 2.6 and x86 platforms, 
the upper limit is approximately 2000m minus overhead amounts. On Linux 
platforms, the upper limit is approximately 2000m minus overhead amounts. 
{quote}
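
One way to see where numbers like 17.78 GB versus 20 GB can come from (a guess 
at the kind of values a heap report draws on, not a reading of the web UI 
code): Runtime.maxMemory() usually sits somewhat below the configured -Xmx 
because of overhead, while totalMemory() is only the currently committed heap, 
so a "used / total" display can show 100% while the heap is still below -Xmx.

{code}
public class HeapReportSketch {
  public static void main(String[] args) {
    Runtime rt = Runtime.getRuntime();
    long used = rt.totalMemory() - rt.freeMemory();  // bytes currently in use
    long committed = rt.totalMemory();               // heap committed by the JVM so far
    long max = rt.maxMemory();                       // upper bound, roughly -Xmx minus overhead

    System.out.printf("Heap Size is %.2f GB / %.2f GB (%.0f%%), max %.2f GB%n",
        used / 1e9, committed / 1e9, 100.0 * used / committed, max / 1e9);
  }
}
{code}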

> The "Heap Size" in HDFS web ui may not be accurate
> --
>
> Key: HDFS-94
> URL: https://issues.apache.org/jira/browse/HDFS-94
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Tsz Wo (Nicholas), SZE
>
> It seems that the Heap Size shown in HDFS web UI is not accurate.  It keeps 
> showing 100% of usage.  e.g.
> {noformat}
> Heap Size is 10.01 GB / 10.01 GB (100%) 
> {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-669) Add unit tests

2009-11-11 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-669:


Attachment: HDFS-669.patch

After another conversation with Jakob, I've bought his point that while the 
second test case is an OK example of a true unit test, it isn't a good example 
of how to use the Mockito framework.

I've removed it completely.

> Add unit tests 
> ---
>
> Key: HDFS-669
> URL: https://issues.apache.org/jira/browse/HDFS-669
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Eli Collins
>Assignee: Konstantin Boudnik
> Attachments: HDFS-669.patch, HDFS-669.patch, HDFS-669.patch, 
> HDFS-669.patch, HDFS-669.patch, HDFS-669.patch, HDFS669.patch
>
>
> Most HDFS tests are functional tests that exercise a feature end to end by 
> running a mini cluster. We should add more tests like TestReplication that 
> stress individual classes in isolation, i.e., by stubbing out dependencies 
> without running a mini cluster. This allows for finer-grained testing and 
> makes tests run much more quickly, because they avoid the cost of cluster 
> setup and teardown. If it makes sense to use another framework besides 
> JUnit, we should standardize with MAPREDUCE-1050.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-669) Add unit tests

2009-11-11 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-669:


Attachment: HDFS-669.patch

Duplication of testcase execution is fixed. Now JUnit will look for the 
specified testcase under the appropriate {{suite.type}} directory; e.g., if 
'run-test-unit' is executed, then only {{src/test/unit}} will be explored, etc.

A comment has been added to the questionable testcase explaining why it is 
worth including.

> Add unit tests 
> ---
>
> Key: HDFS-669
> URL: https://issues.apache.org/jira/browse/HDFS-669
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Eli Collins
>Assignee: Konstantin Boudnik
> Attachments: HDFS-669.patch, HDFS-669.patch, HDFS-669.patch, 
> HDFS-669.patch, HDFS-669.patch, HDFS669.patch
>
>
> Most HDFS tests are functional tests that exercise a feature end to end by 
> running a mini cluster. We should add more tests like TestReplication that 
> stress individual classes in isolation, i.e., by stubbing out dependencies 
> without running a mini cluster. This allows for finer-grained testing and 
> makes tests run much more quickly, because they avoid the cost of cluster 
> setup and teardown. If it makes sense to use another framework besides 
> JUnit, we should standardize with MAPREDUCE-1050.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-763) DataBlockScanner reporting of bad blocks is slightly misleading

2009-11-11 Thread Raghu Angadi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12776689#action_12776689
 ] 

Raghu Angadi commented on HDFS-763:
---

+1. yes. it should be incremented only for real errors.

> DataBlockScanner reporting of bad blocks is slightly misleading
> ---
>
> Key: HDFS-763
> URL: https://issues.apache.org/jira/browse/HDFS-763
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.20.1
>Reporter: dhruba borthakur
>Assignee: dhruba borthakur
>
> The Datanode generates a report of the periodic block scanning that verifies 
> CRCs. It reports something like the following:
> Scans since restart : 192266
> Scan errors since restart : 33
> Transient scan errors : 0
> The statement saying that there were 33 errors is slightly misleading, 
> because these are not CRC mismatches; rather, the block was being deleted 
> when the CRC verification was about to happen. 
> I propose that DataBlockScanner.totalScanErrors not be updated if 
> dataset.getFile(block) is null, i.e., the block has already been deleted from 
> the datanode. 
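
A minimal sketch of the proposed guard, with the scanner loop omitted; the 
Dataset interface is a stand-in for the datanode's block map, and the names are 
taken from the description above rather than from the actual DataBlockScanner 
source.

{code}
import java.io.File;

public class ScanErrorAccounting {
  interface Dataset {                    // stand-in for the datanode's block-to-file map
    File getFile(String blockId);
  }

  private long totalScanErrors = 0;

  /** Count a verification failure only if the block still exists on disk. */
  void handleScanFailure(Dataset dataset, String blockId) {
    if (dataset.getFile(blockId) == null) {
      // The block was deleted while verification was pending: not a real error.
      return;
    }
    totalScanErrors++;
  }

  long getTotalScanErrors() { return totalScanErrors; }
}
{code}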

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-669) Add unit tests

2009-11-11 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12776681#action_12776681
 ] 

Jakob Homan commented on HDFS-669:
--

* I ran with the patch using -Dtestcase=TestFSNamesystem, and the test was 
executed twice.  
* The testDirNQuota() test concerns me, as it is not readily apparent what role 
the spied-on instance is playing, and thus it may not be a good example of how 
to use Mockito.  At the very least, an explanation of the role the spy plays 
would be good.  This is in comparison to the other test, where it is clear 
that the isInSafeMode() call is intercepted and re-defined.
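
For readers unfamiliar with the distinction being drawn here, a hypothetical 
sketch of the clearer pattern: a Mockito spy wraps a real object and only the 
one call is re-defined. The Namesystem class below is invented for the example 
and is not the real FSNamesystem.

{code}
import static org.junit.Assert.assertFalse;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.spy;

import org.junit.Test;

public class TestSpyInterception {
  /** Hypothetical stand-in for the class whose safe-mode check is intercepted. */
  static class Namesystem {
    boolean isInSafeMode() { return false; }         // real implementation
    boolean canWrite() { return !isInSafeMode(); }   // behavior under test
  }

  @Test
  public void writesAreRejectedInSafeMode() {
    Namesystem ns = spy(new Namesystem());
    // Re-define just this one call; everything else runs the real code.
    doReturn(true).when(ns).isInSafeMode();
    assertFalse(ns.canWrite());
  }
}
{code}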

> Add unit tests 
> ---
>
> Key: HDFS-669
> URL: https://issues.apache.org/jira/browse/HDFS-669
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Eli Collins
>Assignee: Konstantin Boudnik
> Attachments: HDFS-669.patch, HDFS-669.patch, HDFS-669.patch, 
> HDFS-669.patch, HDFS669.patch
>
>
> Most HDFS tests are functional tests that exercise a feature end to end by 
> running a mini cluster. We should add more tests like TestReplication that 
> stress individual classes in isolation, i.e., by stubbing out dependencies 
> without running a mini cluster. This allows for finer-grained testing and 
> makes tests run much more quickly, because they avoid the cost of cluster 
> setup and teardown. If it makes sense to use another framework besides 
> JUnit, we should standardize with MAPREDUCE-1050.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-764) Moving Access Token implementation from Common to HDFS

2009-11-11 Thread Kan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kan Zhang updated HDFS-764:
---

Fix Version/s: 0.21.0
Affects Version/s: 0.21.0
 Hadoop Flags: [Incompatible change]
   Status: Patch Available  (was: Open)

> Moving Access Token implementation from Common to HDFS
> --
>
> Key: HDFS-764
> URL: https://issues.apache.org/jira/browse/HDFS-764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.21.0
>Reporter: Kan Zhang
>Assignee: Kan Zhang
> Fix For: 0.21.0
>
> Attachments: 764-01.patch
>
>
> This is the HDFS changes of HADOOP-6367.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-764) Moving Access Token implementation from Common to HDFS

2009-11-11 Thread Kan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kan Zhang updated HDFS-764:
---

Attachment: 764-01.patch

Attaching a patch for the changes in HDFS. No new test is added, since there is 
no functional change (only moving existing code around and renaming).

> Moving Access Token implementation from Common to HDFS
> --
>
> Key: HDFS-764
> URL: https://issues.apache.org/jira/browse/HDFS-764
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.21.0
>Reporter: Kan Zhang
>Assignee: Kan Zhang
> Fix For: 0.21.0
>
> Attachments: 764-01.patch
>
>
> This is the HDFS changes of HADOOP-6367.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-669) Add unit tests

2009-11-11 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-669:


Attachment: HDFS-669.patch

Missing JavaDocs are also added

> Add unit tests 
> ---
>
> Key: HDFS-669
> URL: https://issues.apache.org/jira/browse/HDFS-669
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Eli Collins
>Assignee: Konstantin Boudnik
> Attachments: HDFS-669.patch, HDFS-669.patch, HDFS-669.patch, 
> HDFS-669.patch, HDFS669.patch
>
>
> Most HDFS tests are functional tests that exercise a feature end to end by 
> running a mini cluster. We should add more tests like TestReplication that 
> stress individual classes in isolation, i.e., by stubbing out dependencies 
> without running a mini cluster. This allows for finer-grained testing and 
> makes tests run much more quickly, because they avoid the cost of cluster 
> setup and teardown. If it makes sense to use another framework besides 
> JUnit, we should standardize with MAPREDUCE-1050.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-764) Moving Access Token implementation from Common to HDFS

2009-11-11 Thread Kan Zhang (JIRA)
Moving Access Token implementation from Common to HDFS
--

 Key: HDFS-764
 URL: https://issues.apache.org/jira/browse/HDFS-764
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Kan Zhang
Assignee: Kan Zhang


This is the HDFS changes of HADOOP-6367.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-718) configuration parameter to prevent accidental formatting of HDFS filesystem

2009-11-11 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12776570#action_12776570
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-718:
-

If we are going to do this, the new configuration key should be listed in 
org.apache.hadoop.hdfs.DFSConfigKeys.

> configuration parameter to prevent accidental formatting of HDFS filesystem
> ---
>
> Key: HDFS-718
> URL: https://issues.apache.org/jira/browse/HDFS-718
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.22.0
> Environment: Any
>Reporter: Andrew Ryan
>Assignee: Andrew Ryan
>Priority: Minor
> Attachments: HDFS-718.patch.txt
>
>
> Currently, any time the NameNode is not running, an HDFS filesystem will 
> accept the 'format' command, and will duly format itself. There are those of 
> us who have multi-PB HDFS filesystems who are really quite uncomfortable with 
> this behavior. There is "Y/N" confirmation in the format command, but if the 
> formatter genuinely believes themselves to be doing the right thing, the 
> filesystem will be formatted.
> This patch adds a configuration parameter to the namenode, 
> dfs.namenode.support.allowformat, which defaults to "true," the current 
> behavior: always allow formatting if the NameNode is down or some other 
> process is not holding the namenode lock. But if 
> dfs.namenode.support.allowformat is set to "false," the NameNode will not 
> allow itself to be formatted until this config parameter is changed to "true".
> The general idea is that for production HDFS filesystems, the user would 
> format the HDFS once, then set dfs.namenode.support.allowformat to "false" 
> for all time.
> The attached patch was generated against trunk and +1's on my test machine. 
> We have a 0.20 version that we are using in our cluster as well.
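
A hedged sketch of how the proposed key could be consulted at format time: 
Configuration.getBoolean is the standard Hadoop accessor, but the surrounding 
guard method is illustrative and is not the attached patch itself.

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;

public class FormatGuard {
  static final String ALLOW_FORMAT_KEY = "dfs.namenode.support.allowformat";

  /** Refuse to format unless the operator has left formatting explicitly enabled. */
  static void checkFormatAllowed(Configuration conf) throws IOException {
    boolean allowFormat = conf.getBoolean(ALLOW_FORMAT_KEY, true);  // default keeps today's behavior
    if (!allowFormat) {
      throw new IOException("Formatting is disabled because " + ALLOW_FORMAT_KEY
          + " is set to false; change it to true before re-formatting this filesystem.");
    }
  }
}
{code}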

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-669) Add unit tests

2009-11-11 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-669:


Attachment: HDFS-669.patch

Slight modification to use JUnit-provided setup and teardown facilities instead 
of manually setting up and cleaning up the resources before and after each test 
case.

> Add unit tests 
> ---
>
> Key: HDFS-669
> URL: https://issues.apache.org/jira/browse/HDFS-669
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Eli Collins
>Assignee: Konstantin Boudnik
> Attachments: HDFS-669.patch, HDFS-669.patch, HDFS-669.patch, 
> HDFS669.patch
>
>
> Most HDFS tests are functional tests that exercise a feature end to end by 
> running a mini cluster. We should add more tests like TestReplication that 
> stress individual classes in isolation, i.e., by stubbing out dependencies 
> without running a mini cluster. This allows for finer-grained testing and 
> makes tests run much more quickly, because they avoid the cost of cluster 
> setup and teardown. If it makes sense to use another framework besides 
> JUnit, we should standardize with MAPREDUCE-1050.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HDFS-138) data node process should not die if one dir goes bad

2009-11-11 Thread Robert Chansler (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Chansler resolved HDFS-138.
--

Resolution: Duplicate

HDFS-457 is a close approximation.

> data node process should not die if one dir goes bad
> 
>
> Key: HDFS-138
> URL: https://issues.apache.org/jira/browse/HDFS-138
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>
> When multiple directories are configured for the data node process to use to 
> store blocks, it currently exits when one of them is not writable. Instead, 
> it should either completely ignore that directory or attempt to continue 
> reading and then mark it unusable if reads fail.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-761) Failure to process rename operation from edits log due to quota verification

2009-11-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12776387#action_12776387
 ] 

Hadoop QA commented on HDFS-761:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12424571/hdfs-761.1.patch
  against trunk revision 834377.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/103/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/103/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/103/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/103/console

This message is automatically generated.

> Failure to process rename operation from edits log due to quota verification
> 
>
> Key: HDFS-761
> URL: https://issues.apache.org/jira/browse/HDFS-761
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.20.2, 0.21.0, 0.22.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Fix For: 0.20.2, 0.21.0, 0.22.0
>
> Attachments: hdfs-761.1.patch, hdfs-761.1.patch, 
> hdfs-761.1.rel20.patch, hdfs-761.patch, hdfs-761.rel20.patch
>
>
> When processing the edits log, quota verification is not done and the used 
> quota for directories is not updated; the update is done at the end of 
> processing the edits log. This rule is broken by a change introduced in 
> HDFS-677, which prevents the namenode from handling a rename operation from 
> the edits log because of a quota verification failure. Once this happens, the 
> namenode does not process the edits log any further. This results in 
> checkpoint failure on the backup node or secondary namenode, and it also 
> prevents the namenode from coming up.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.