[jira] Assigned: (HDFS-96) HDFS does not support blocks greater than 2GB

2010-09-22 Thread dhruba borthakur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-96?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dhruba borthakur reassigned HDFS-96:


Assignee: Patrick Kling  (was: dhruba borthakur)

 HDFS does not support blocks greater than 2GB
 -

 Key: HDFS-96
 URL: https://issues.apache.org/jira/browse/HDFS-96
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: dhruba borthakur
Assignee: Patrick Kling
 Attachments: hdfslargeblkcrash.tar.gz, largeBlockSize1.txt


 HDFS currently does not support blocks greater than 2GB in size.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-96) HDFS does not support blocks greater than 2GB

2010-09-22 Thread Patrick Kling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-96?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Kling updated HDFS-96:
--

Attachment: HDFS-96.patch

This patch fixes an integer overflow problem that causes a failure with block 
sizes of more than 2GB.
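As a minimal, hypothetical illustration of this class of bug (the names below are illustrative, not the actual HDFS code or this patch): a byte offset computed from a chunk index in int arithmetic wraps as soon as the result exceeds Integer.MAX_VALUE (2^31 - 1), which is exactly the 2 GB boundary.

```java
// Hypothetical sketch of the overflow class fixed here, not the actual
// HDFS-96 patch: multiplying two ints wraps once the product exceeds
// Integer.MAX_VALUE (2^31 - 1), i.e. at the 2 GB boundary.
public class BlockOffsetOverflow {
    static final int BYTES_PER_CHECKSUM = 512;

    // Buggy variant: the multiplication happens in int arithmetic and
    // wraps before the result is widened to long.
    static long chunkStartBroken(int chunkIndex) {
        return chunkIndex * BYTES_PER_CHECKSUM;
    }

    // Fixed variant: widen to long first so the product cannot wrap.
    static long chunkStartFixed(int chunkIndex) {
        return (long) chunkIndex * BYTES_PER_CHECKSUM;
    }

    public static void main(String[] args) {
        int chunkIndex = 5_000_000; // an offset past 2 GB into a large block
        System.out.println(chunkStartBroken(chunkIndex)); // negative: wrapped
        System.out.println(chunkStartFixed(chunkIndex));  // 2560000000
    }
}
```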

The added test in org.apache.hadoop.hdfs.TestLargeBlock verifies that we can 
create, write to, and correctly read from a file when the block size is > 2 GB.

The following test cases fail on trunk both before and after applying this 
patch (i.e., no new test failures were introduced by this patch):

[junit] Test org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery FAILED
[junit] Test org.apache.hadoop.hdfs.TestFileStatus FAILED
[junit] Test org.apache.hadoop.hdfs.TestHDFSServerPorts FAILED
[junit] Test org.apache.hadoop.hdfs.TestHDFSTrash FAILED (timeout)
[junit] Test org.apache.hadoop.hdfs.server.namenode.TestBackupNode FAILED
[junit] Test org.apache.hadoop.fs.TestHDFSFileContextMainOperations FAILED
[junit] Test org.apache.hadoop.hdfs.TestFileConcurrentReader FAILED 
(timeout)
[junit] Test org.apache.hadoop.hdfs.server.datanode.TestBlockReport FAILED
[junit] Test org.apache.hadoop.hdfs.server.namenode.TestBlockTokenWithDFS 
FAILED
[junit] Test org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery FAILED
[junit] Test org.apache.hadoop.hdfs.TestFiHFlush FAILED


 HDFS does not support blocks greater than 2GB
 -

 Key: HDFS-96
 URL: https://issues.apache.org/jira/browse/HDFS-96
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: dhruba borthakur
Assignee: Patrick Kling
 Attachments: HDFS-96.patch, hdfslargeblkcrash.tar.gz, 
 largeBlockSize1.txt


 HDFS currently does not support blocks greater than 2GB in size.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-96) HDFS does not support blocks greater than 2GB

2010-09-22 Thread Patrick Kling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-96?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Kling updated HDFS-96:
--

Status: Patch Available  (was: Open)

 HDFS does not support blocks greater than 2GB
 -

 Key: HDFS-96
 URL: https://issues.apache.org/jira/browse/HDFS-96
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: dhruba borthakur
Assignee: Patrick Kling
 Attachments: HDFS-96.patch, hdfslargeblkcrash.tar.gz, 
 largeBlockSize1.txt


 HDFS currently does not support blocks greater than 2GB in size.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1400) HDFS federation: Introduced block pool ID into DataTransferProtocol

2010-09-22 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-1400:
--

Attachment: HDFS-1400.1.patch

New patch with a part of the previous change made in HDFS-1407 merged from 
trunk.

 HDFS federation: Introduced block pool ID into DataTransferProtocol
 ---

 Key: HDFS-1400
 URL: https://issues.apache.org/jira/browse/HDFS-1400
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: Federation Branch
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: Federation Branch

 Attachments: HDFS-1400.1.patch, HDFS-1400.patch


 Block Pool ID needs to be introduced into DataTransferProtocol
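As a hedged sketch of what introducing the block pool ID means on the wire (the field layout and names below are assumptions for illustration, not the actual HDFS-1400 format): under federation a block is identified by (blockPoolId, blockId) rather than blockId alone, so the transfer header must carry the pool id for the datanode to route the request to the right namespace.

```java
import java.io.*;

// Illustrative sketch only -- not the actual HDFS-1400 wire format.
public class TransferHeaderSketch {
    static void writeBlockHeader(DataOutputStream out, String blockPoolId,
                                 long blockId, long genStamp) throws IOException {
        out.writeUTF(blockPoolId); // the new federation field
        out.writeLong(blockId);
        out.writeLong(genStamp);
    }

    static String[] readBlockHeader(DataInputStream in) throws IOException {
        String poolId = in.readUTF();
        long blockId = in.readLong();
        long genStamp = in.readLong();
        return new String[] { poolId, Long.toString(blockId),
                              Long.toString(genStamp) };
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        writeBlockHeader(new DataOutputStream(buf), "BP-1", 42L, 1001L);
        String[] hdr = readBlockHeader(
                new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(hdr[0] + " " + hdr[1] + " " + hdr[2]); // BP-1 42 1001
    }
}
```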

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1400) HDFS federation: Introduced block pool ID into DataTransferProtocol

2010-09-22 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12913808#action_12913808
 ] 

Suresh Srinivas commented on HDFS-1400:
---

I ran unit tests. There are some known failures:
TestStartup
TestDFSUpgradeFromImage
TestCheckpoint

Here is the testpatch result:
 [exec] +1 overall.  
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] +1 tests included.  The patch appears to include 18 new or 
modified tests.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs 
warnings.
 [exec] 
 [exec] +1 release audit.  The applied patch does not increase the 
total number of release audit warnings.
 [exec] 
 [exec] +1 system tests framework.  The patch passed system tests 
framework compile.


 HDFS federation: Introduced block pool ID into DataTransferProtocol
 ---

 Key: HDFS-1400
 URL: https://issues.apache.org/jira/browse/HDFS-1400
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: Federation Branch
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: Federation Branch

 Attachments: HDFS-1400.1.patch, HDFS-1400.patch


 Block Pool ID needs to be introduced into DataTransferProtocol

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1400) HDFS federation: Introduce block pool ID into DataTransferProtocol

2010-09-22 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-1400:
--

Summary: HDFS federation: Introduce block pool ID into DataTransferProtocol 
 (was: HDFS federation: Introduced block pool ID into DataTransferProtocol)

 HDFS federation: Introduce block pool ID into DataTransferProtocol
 --

 Key: HDFS-1400
 URL: https://issues.apache.org/jira/browse/HDFS-1400
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: Federation Branch
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: Federation Branch

 Attachments: HDFS-1400.1.patch, HDFS-1400.patch


 Block Pool ID needs to be introduced into DataTransferProtocol

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1304) There is no unit test for HftpFileSystem.open(..)

2010-09-22 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-1304:
-

Attachment: h1304_20100922.patch

Thanks Cos for reviewing it.

h1304_20100922.patch: 

bq. please have messages for the asserts
The assert messages will be printed only when the test fails.  In that case, 
we can check the stack trace.

bq. remove commented code
Moved the commented-out code to a method.

bq. these imports aren't used anywhere and can be removed as well ...
Removed.



 There is no unit test for HftpFileSystem.open(..)
 -

 Key: HDFS-1304
 URL: https://issues.apache.org/jira/browse/HDFS-1304
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h1304_20100921.patch, h1304_20100922.patch


 HftpFileSystem.open(..) first opens a URL connection to the namenode's 
 FileDataServlet and is then redirected to the datanode's StreamFile servlet.  
 Such redirection does not work in the unit test environment because the 
 redirect URL uses the real hostname instead of localhost.
 One way to get around this is to use fault injection to replace the 
 real hostname with localhost.
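The fault-injection idea from the description can be sketched as follows (a hypothetical helper, not the code in the attached patches): before the test follows the redirect from FileDataServlet, rewrite the URL's host to localhost.

```java
import java.net.MalformedURLException;
import java.net.URL;

// Hypothetical sketch of the fault-injection idea from the description
// (not the code in the attached h1304 patches): rewrite the redirect
// URL's host to localhost so the unit test can follow it.
public class RedirectToLocalhost {
    static URL toLocalhost(URL redirect) throws MalformedURLException {
        // getFile() preserves both the path and the query string.
        return new URL(redirect.getProtocol(), "localhost",
                       redirect.getPort(), redirect.getFile());
    }

    public static void main(String[] args) throws MalformedURLException {
        URL redirect =
            new URL("http://datanode-7.example.com:50075/streamFile?filename=/foo");
        System.out.println(toLocalhost(redirect));
        // http://localhost:50075/streamFile?filename=/foo
    }
}
```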

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1304) There is no unit test for HftpFileSystem.open(..)

2010-09-22 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12913831#action_12913831
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-1304:
--

{noformat}
 [exec] +1 overall.  
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] +1 tests included.  The patch appears to include 9 new or 
modified tests.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs 
warnings.
 [exec] 
 [exec] +1 release audit.  The applied patch does not increase the 
total number of release audit warnings.
 [exec] 
 [exec] +1 system tests framework.  The patch passed system tests 
framework compile.
 [exec] 
{noformat}
I also have run the new unit test.  It works fine.

 There is no unit test for HftpFileSystem.open(..)
 -

 Key: HDFS-1304
 URL: https://issues.apache.org/jira/browse/HDFS-1304
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h1304_20100921.patch, h1304_20100922.patch


 HftpFileSystem.open(..) first opens a URL connection to the namenode's 
 FileDataServlet and is then redirected to the datanode's StreamFile servlet.  
 Such redirection does not work in the unit test environment because the 
 redirect URL uses the real hostname instead of localhost.
 One way to get around this is to use fault injection to replace the 
 real hostname with localhost.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1304) There is no unit test for HftpFileSystem.open(..)

2010-09-22 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-1304:
-

Hadoop Flags: [Reviewed]

+1 patch looks good.

 There is no unit test for HftpFileSystem.open(..)
 -

 Key: HDFS-1304
 URL: https://issues.apache.org/jira/browse/HDFS-1304
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h1304_20100921.patch, h1304_20100922.patch


 HftpFileSystem.open(..) first opens a URL connection to the namenode's 
 FileDataServlet and is then redirected to the datanode's StreamFile servlet.  
 Such redirection does not work in the unit test environment because the 
 redirect URL uses the real hostname instead of localhost.
 One way to get around this is to use fault injection to replace the 
 real hostname with localhost.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-1414) HDFS federation : fix unit test cases

2010-09-22 Thread Tanping Wang (JIRA)
HDFS federation : fix unit test cases
-

 Key: HDFS-1414
 URL: https://issues.apache.org/jira/browse/HDFS-1414
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tanping Wang




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1414) HDFS federation : fix unit test cases

2010-09-22 Thread Tanping Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12913837#action_12913837
 ] 

Tanping Wang commented on HDFS-1414:


This JIRA is used to track the fixes to unit test cases. 

 HDFS federation : fix unit test cases
 -

 Key: HDFS-1414
 URL: https://issues.apache.org/jira/browse/HDFS-1414
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tanping Wang



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1304) There is no unit test for HftpFileSystem.open(..)

2010-09-22 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-1304:
-

   Status: Resolved  (was: Patch Available)
Fix Version/s: 0.22.0
   Resolution: Fixed

I have committed this.

 There is no unit test for HftpFileSystem.open(..)
 -

 Key: HDFS-1304
 URL: https://issues.apache.org/jira/browse/HDFS-1304
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: 0.22.0

 Attachments: h1304_20100921.patch, h1304_20100922.patch


 HftpFileSystem.open(..) first opens a URL connection to the namenode's 
 FileDataServlet and is then redirected to the datanode's StreamFile servlet.  
 Such redirection does not work in the unit test environment because the 
 redirect URL uses the real hostname instead of localhost.
 One way to get around this is to use fault injection to replace the 
 real hostname with localhost.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1414) HDFS federation : fix unit test cases

2010-09-22 Thread Tanping Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanping Wang updated HDFS-1414:
---

Attachment: hadoop-14-dfs-dir.tgz

== fix TestDFSUpgradeFromImage

This test case generates its test data by untarring hadoop-14-dfs-dir.tgz.  
The tar file provides ten storage directories, each with a VERSION file 
underneath it:

./name2/current/VERSION
./data/data3/current/VERSION
./data/data4/current/VERSION
./data/data8/current/VERSION
./data/data7/current/VERSION
./data/data6/current/VERSION
./data/data2/current/VERSION
./data/data5/current/VERSION
./data/data1/current/VERSION
./name1/current/VERSION

Since we introduced clusterID and blockpoolID, we need to add two new entries:

clusterID=test-cid
blockpoolID=test-bpid

into each of the ten VERSION files listed above.  The tgz file is 
re-generated accordingly and uploaded.  It needs to be checked in at

 src/test/hdfs/org/apache/hadoop/hdfs/hadoop-14-dfs-dir.tgz

to replace the current one.
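The regeneration described above can be sketched as a short shell loop. The directory layout below is a minimal mock standing in for the unpacked tgz (the real archive has the ten directories listed above):

```shell
# Mock layout standing in for the unpacked hadoop-14-dfs-dir.tgz.
mkdir -p work/name1/current work/name2/current work/data/data1/current
for d in work/name1 work/name2 work/data/data1; do
    printf 'layoutVersion=-18\n' > "$d/current/VERSION"
done

# Append the two new federation entries to every VERSION file,
# as described in the comment above.
find work -name VERSION | while read -r v; do
    printf 'clusterID=test-cid\nblockpoolID=test-bpid\n' >> "$v"
done

# Re-create the archive; the regenerated one replaces
# src/test/hdfs/org/apache/hadoop/hdfs/hadoop-14-dfs-dir.tgz
tar czf hadoop-14-dfs-dir.tgz -C work .
```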


 HDFS federation : fix unit test cases
 -

 Key: HDFS-1414
 URL: https://issues.apache.org/jira/browse/HDFS-1414
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tanping Wang
 Attachments: hadoop-14-dfs-dir.tgz




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-1415) Group name is not properly set in inodes

2010-09-22 Thread Scott Chen (JIRA)
Group name is not properly set in inodes


 Key: HDFS-1415
 URL: https://issues.apache.org/jira/browse/HDFS-1415
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Scott Chen
Assignee: Scott Chen


NameNode.create() and NameNode.mkdirs() do not pass the group name, so it is 
not properly set.
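The bug pattern can be shown with a tiny standalone model (all names here are illustrative, not the actual NameNode code): when the caller's group is not threaded through to the inode's permission status, the group ends up unset.

```java
// Standalone model of the bug pattern, with illustrative names only.
public class GroupNameModel {
    static class PermissionStatus {
        final String user;
        final String group;
        PermissionStatus(String user, String group) {
            this.user = user;
            this.group = group;
        }
    }

    // Broken: the caller's groups are silently dropped.
    static PermissionStatus createInodeBroken(String user, String[] groups) {
        return new PermissionStatus(user, "");
    }

    // Fixed: propagate the caller's primary (first) group.
    static PermissionStatus createInodeFixed(String user, String[] groups) {
        return new PermissionStatus(user, groups.length > 0 ? groups[0] : "");
    }

    public static void main(String[] args) {
        String[] groups = { "hadoop", "users" };
        System.out.println(createInodeBroken("scott", groups).group); // empty
        System.out.println(createInodeFixed("scott", groups).group);  // hadoop
    }
}
```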

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1415) Group name is not properly set in inodes

2010-09-22 Thread Scott Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Chen updated HDFS-1415:
-

Attachment: HDFS-1415.txt

 Group name is not properly set in inodes
 

 Key: HDFS-1415
 URL: https://issues.apache.org/jira/browse/HDFS-1415
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: HDFS-1415.txt


 NameNode.create() and NameNode.mkdirs() do not pass the group name, so it 
 is not properly set.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1415) Group name is not properly set in inodes

2010-09-22 Thread dhruba borthakur (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12913884#action_12913884
 ] 

dhruba borthakur commented on HDFS-1415:


Code looks good to me. I wonder why the HDFS namenode was not using the group 
information in the first place; it seems like a pretty reasonable thing to do. 
Maybe Owen has some historical perspective on this?

Do we need to add some javadoc to state that if there are multiple groups in 
the ugi, the first one takes effect as far as file permissions are 
concerned?

Also, this looks like an incompatible change, especially because files will now 
have the appropriate group information.

 Group name is not properly set in inodes
 

 Key: HDFS-1415
 URL: https://issues.apache.org/jira/browse/HDFS-1415
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Scott Chen
Assignee: Scott Chen
 Attachments: HDFS-1415.txt


 NameNode.create() and NameNode.mkdirs() do not pass the group name, so it 
 is not properly set.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-1408) Herriot NN and DN clients should vend statistics

2010-09-22 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-1408:
-

Attachment: jmx.patch

Here's a preliminary and very rough JMX-based patch. Please be advised that 
this patch is for y20s and thus mixes what needs to go into Common and 
HDFS. The actual patch for 0.22 will have three different parts for all 
components.


 Herriot NN and DN clients should vend statistics
 

 Key: HDFS-1408
 URL: https://issues.apache.org/jira/browse/HDFS-1408
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 0.22.0
Reporter: Al Thompson
Assignee: Konstantin Boudnik
 Attachments: HADOOP-6927.y20s.patch, HDFS-1408.patch, jmx.patch


 The HDFS web user interface serves useful information through dfshealth.jsp 
 and dfsnodelist.jsp.
 The Herriot interface to the namenode and datanode (as implemented in 
 NNClient and DNClient, respectively) would benefit from the addition of some 
 way to channel this information. In the case of DNClient this can be an 
 injected method that returns a DatanodeDescriptor relevant to the underlying 
 datanode.
 There seems to be no analogous NamenodeDescriptor. It may be useful to add 
 this as a facade to a visitor that aggregates values across the filesystem 
 datanodes. These values are (from dfshealth JSP):
 Configured Capacity
 DFS Used
 Non DFS Used
 DFS Remaining
 DFS Used%
 DFS Remaining%
 Live Nodes
 Dead Nodes
 Decommissioning Nodes
 Number of Under-Replicated Blocks
 Attributes reflecting the web user interface header may also be useful such 
 as When-Started, Version, When-Compiled, and Upgrade-Status.
 A NamenodeDescriptor would essentially push down the code in dfshealth web 
 UI behind a more general abstraction. If it is objectionable to make this 
 class available in HDFS, perhaps this could be packaged in a Herriot specific 
 way.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1403) add -truncate option to fsck

2010-09-22 Thread dhruba borthakur (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12913899#action_12913899
 ] 

dhruba borthakur commented on HDFS-1403:


It is possible that we make fsck -truncate clean up only those files whose 
last modtime is within the last hour or so. I am a little worried that 
otherwise it might erroneously clean up files that do not need fixing. 

 add -truncate option to fsck
 

 Key: HDFS-1403
 URL: https://issues.apache.org/jira/browse/HDFS-1403
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs client, name-node
Reporter: sam rash

 When running fsck, it would be useful to be able to tell hdfs to truncate any 
 corrupt file to the last valid position in the latest block.  Then, when 
 running hadoop fsck, an admin can cleanup the filesystem.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1403) add -truncate option to fsck

2010-09-22 Thread sam rash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12913908#action_12913908
 ] 

sam rash commented on HDFS-1403:


Can you elaborate?

Also, this truncate option will have to work on open files.  I think 
-list-corruptfiles only works on closed ones.  We have to handle the missing 
last block problem (the main reason I filed this).


 add -truncate option to fsck
 

 Key: HDFS-1403
 URL: https://issues.apache.org/jira/browse/HDFS-1403
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs client, name-node
Reporter: sam rash

 When running fsck, it would be useful to be able to tell hdfs to truncate any 
 corrupt file to the last valid position in the latest block.  Then, when 
 running hadoop fsck, an admin can cleanup the filesystem.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-1403) add -truncate option to fsck

2010-09-22 Thread dhruba borthakur (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12913910#action_12913910
 ] 

dhruba borthakur commented on HDFS-1403:


Ok, if this fix works only on currently-open files, then that will be fine. It 
will automatically disallow fixing of files that are closed... sounds good.

 add -truncate option to fsck
 

 Key: HDFS-1403
 URL: https://issues.apache.org/jira/browse/HDFS-1403
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs client, name-node
Reporter: sam rash

 When running fsck, it would be useful to be able to tell hdfs to truncate any 
 corrupt file to the last valid position in the latest block.  Then, when 
 running hadoop fsck, an admin can cleanup the filesystem.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.