[jira] [Commented] (HDFS-3089) Move FSDatasetInterface and other related classes/interfaces to a package

2012-03-27 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239424#comment-13239424
 ] 

Hudson commented on HDFS-3089:
--

Integrated in Hadoop-Hdfs-trunk #997 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/997/])
(recommit) HDFS-3089. Move FSDatasetInterface and the related classes to a 
package. (Revision 1305603)
Revert 1305590 for HDFS-3089. (Revision 1305598)
HDFS-3089. Move FSDatasetInterface and the related classes to a package. 
(Revision 1305590)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1305603
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockVolumeChoosingPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataBlockScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDatasetInterface.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipelineInterface.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/RollingLogs.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/RoundRobinVolumesPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/RollingLogs.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/RoundRobinVolumeChoosingPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/VolumeChoosingPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DataNodeCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestRoundRobinVolumesPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/TestRoundRobinVolumeChoosingPolicy.java

szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1305598
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockVolumeChoosingPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataBlockScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 

[jira] [Commented] (HDFS-2413) Add public APIs for safemode

2012-03-27 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239425#comment-13239425
 ] 

Hudson commented on HDFS-2413:
--

Integrated in Hadoop-Hdfs-trunk #997 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/997/])
HDFS-2413. Add an API DistributedFileSystem.isInSafeMode() and change 
DistributedFileSystem to @InterfaceAudience.LimitedPrivate.  Contributed by 
harsh (Revision 1305632)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1305632
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java


 Add public APIs for safemode
 

 Key: HDFS-2413
 URL: https://issues.apache.org/jira/browse/HDFS-2413
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Harsh J
 Fix For: 0.24.0, 0.23.3

 Attachments: HDFS-2413.patch, HDFS-2413.patch, HDFS-2413.patch, 
 HDFS-2413.patch


 Currently the APIs for safe-mode are part of DistributedFileSystem, which is 
 supposed to be a private interface. However, dependent software often wants 
 to wait until the NN is out of safemode. Though it could poll trying to 
 create a file and catching SafeModeException, we should consider making some 
 of these APIs public.
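
As a rough illustration of how dependent software might use such an API, here is a minimal sketch built on the DistributedFileSystem.isInSafeMode() method added by this JIRA (the cast, polling interval, and paths are illustrative assumptions, not part of the patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class WaitForSafeModeExit {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    if (fs instanceof DistributedFileSystem) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      // Poll until the NameNode has left safemode before attempting writes.
      while (dfs.isInSafeMode()) {
        Thread.sleep(5000);
      }
    }
    fs.mkdirs(new Path("/tmp/ready"));
  }
}
{code}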

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3129) NetworkTopology: add test that getLeaf should check for invalid topologies

2012-03-27 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239421#comment-13239421
 ] 

Hudson commented on HDFS-3129:
--

Integrated in Hadoop-Hdfs-trunk #997 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/997/])
Fix CHANGES.txt for HDFS-3129 (Revision 1305631)
HDFS-3129. NetworkTopology: add test that getLeaf should check for invalid 
topologies. Contributed by Colin Patrick McCabe (Revision 1305628)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1305631
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1305628
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/net/TestNetworkTopology.java


 NetworkTopology: add test that getLeaf should check for invalid topologies
 --

 Key: HDFS-3129
 URL: https://issues.apache.org/jira/browse/HDFS-3129
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 1.1.0, 0.23.3

 Attachments: HDFS-3129-b1.001.patch, HDFS-3129.001.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2941) Add an administrative command to download a copy of the fsimage from the NN

2012-03-27 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239426#comment-13239426
 ] 

Hudson commented on HDFS-2941:
--

Integrated in Hadoop-Hdfs-trunk #997 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/997/])
Move CHANGES.txt entry for HDFS-2941 to the right version. (Revision 
1305453)
HDFS-2941. Add an administrative command to download a copy of the fsimage from 
the NN. Contributed by Aaron T. Myers. (Revision 1305447)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1305453
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1305447
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFetchImage.java


 Add an administrative command to download a copy of the fsimage from the NN
 ---

 Key: HDFS-2941
 URL: https://issues.apache.org/jira/browse/HDFS-2941
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs client, name-node
Affects Versions: 0.24.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 0.23.3

 Attachments: HDFS-2941.patch, HDFS-2941.patch, HDFS-2941.patch, 
 HDFS-2941.patch


 It would be nice to be able to download a copy of the fsimage from the NN, 
 e.g. for backup purposes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3125) Add a service that enables JournalDaemon

2012-03-27 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239430#comment-13239430
 ] 

Hudson commented on HDFS-3125:
--

Integrated in Hadoop-Hdfs-trunk #997 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/997/])
HDFS-3125. Add JournalService to enable Journal Daemon. Contributed by 
Suresh Srinivas. (Revision 1305726)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1305726
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/JournalService.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestJournalService.java


 Add a service that enables JournalDaemon
 

 Key: HDFS-3125
 URL: https://issues.apache.org/jira/browse/HDFS-3125
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-3125.patch, HDFS-3125.patch, HDFS-3125.patch, 
 HDFS-3125.patch


 In this subtask, I plan to add JournalService. It will provide the following 
 functionality:
 # Starts an RPC server with JournalProtocolService, or uses the RPC server 
 provided and adds the JournalProtocol service.
 # Registers with the namenode.
 # Receives JournalProtocol-related requests and hands them over to a 
 listener (see the sketch below).
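
Purely to illustrate the listener idea, a hypothetical callback interface (the interface and method names are assumptions for illustration, not the actual patch):

{code}
// Hypothetical sketch: JournalService would translate incoming JournalProtocol
// RPCs from the active NameNode into calls on a listener such as this.
public interface JournalListener {

  /** Called for each batch of edit-log records received from the active NN. */
  void journal(long firstTxnId, int numTxns, byte[] records);

  /** Called when the active NN rolls its edit log and starts a new segment. */
  void startLogSegment(long txid);
}
{code}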
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3131) Improve TestStorageRestore

2012-03-27 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239432#comment-13239432
 ] 

Hudson commented on HDFS-3131:
--

Integrated in Hadoop-Hdfs-trunk #997 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/997/])
HDFS-3131. Improve TestStorageRestore. Contributed by Brandon Li. (Revision 
1305688)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1305688
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStorageRestore.java


 Improve TestStorageRestore
 --

 Key: HDFS-3131
 URL: https://issues.apache.org/jira/browse/HDFS-3131
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.24.0, 1.1.0
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Brandon Li
Priority: Minor
  Labels: newbie
 Fix For: 0.24.0, 1.1.0

 Attachments: HDFS-3131.branch-1.patch, HDFS-3131.patch, 
 HDFS-3131.patch


 Aaron has the following comments on TestStorageRestore in HDFS-3127.
 # removeStorageAccess, restoreAccess, and numStorageDirs can all be made 
 private
 # numStorageDirs can be made static
 # Rather than do set(Readable/Executable/Writable), use FileUtil.chmod(...).
 # Please put the contents of the test in a try/finally, with the calls to 
 shutdown the cluster and the 2NN in the finally block.
 # Some lines are over 80 chars.
 # No need for the numDatanodes variable - it's only used in one place.
 # Instead of xwr use rwx, which I think is a more common way of 
 describing permissions.
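
A minimal sketch of the try/finally pattern suggested in item 4, assuming the usual MiniDFSCluster and SecondaryNameNode setup from this test (the builder options shown are illustrative):

{code}
MiniDFSCluster cluster = null;
SecondaryNameNode secondary = null;
try {
  cluster = new MiniDFSCluster.Builder(config).numDataNodes(0)
      .manageNameDfsDirs(false).build();
  cluster.waitActive();
  secondary = new SecondaryNameNode(config);
  // ... test body: remove access to a storage dir, checkpoint, restore, assert ...
} finally {
  // Always shut down the 2NN and the cluster, even if an assertion fails.
  if (secondary != null) {
    secondary.shutdown();
  }
  if (cluster != null) {
    cluster.shutdown();
  }
}
{code}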

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3129) NetworkTopology: add test that getLeaf should check for invalid topologies

2012-03-27 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239468#comment-13239468
 ] 

Hudson commented on HDFS-3129:
--

Integrated in Hadoop-Mapreduce-trunk #1032 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1032/])
Fix CHANGES.txt for HDFS-3129 (Revision 1305631)
HDFS-3129. NetworkTopology: add test that getLeaf should check for invalid 
topologies. Contributed by Colin Patrick McCabe (Revision 1305628)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1305631
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1305628
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/net/TestNetworkTopology.java


 NetworkTopology: add test that getLeaf should check for invalid topologies
 --

 Key: HDFS-3129
 URL: https://issues.apache.org/jira/browse/HDFS-3129
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 1.1.0, 0.23.3

 Attachments: HDFS-3129-b1.001.patch, HDFS-3129.001.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3089) Move FSDatasetInterface and other related classes/interfaces to a package

2012-03-27 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239471#comment-13239471
 ] 

Hudson commented on HDFS-3089:
--

Integrated in Hadoop-Mapreduce-trunk #1032 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1032/])
(recommit) HDFS-3089. Move FSDatasetInterface and the related classes to a 
package. (Revision 1305603)
Revert 1305590 for HDFS-3089. (Revision 1305598)
HDFS-3089. Move FSDatasetInterface and the related classes to a package. 
(Revision 1305590)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1305603
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockVolumeChoosingPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataBlockScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDatasetInterface.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipelineInterface.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/RollingLogs.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/RoundRobinVolumesPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/RollingLogs.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/RoundRobinVolumeChoosingPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/VolumeChoosingPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DataNodeCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestRoundRobinVolumesPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/TestRoundRobinVolumeChoosingPolicy.java

szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1305598
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockVolumeChoosingPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataBlockScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 

[jira] [Commented] (HDFS-2941) Add an administrative command to download a copy of the fsimage from the NN

2012-03-27 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239473#comment-13239473
 ] 

Hudson commented on HDFS-2941:
--

Integrated in Hadoop-Mapreduce-trunk #1032 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1032/])
Move CHANGES.txt entry for HDFS-2941 to the right version. (Revision 
1305453)
HDFS-2941. Add an administrative command to download a copy of the fsimage from 
the NN. Contributed by Aaron T. Myers. (Revision 1305447)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1305453
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1305447
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFetchImage.java


 Add an administrative command to download a copy of the fsimage from the NN
 ---

 Key: HDFS-2941
 URL: https://issues.apache.org/jira/browse/HDFS-2941
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs client, name-node
Affects Versions: 0.24.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 0.23.3

 Attachments: HDFS-2941.patch, HDFS-2941.patch, HDFS-2941.patch, 
 HDFS-2941.patch


 It would be nice to be able to download a copy of the fsimage from the NN, 
 e.g. for backup purposes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3125) Add a service that enables JournalDaemon

2012-03-27 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239477#comment-13239477
 ] 

Hudson commented on HDFS-3125:
--

Integrated in Hadoop-Mapreduce-trunk #1032 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1032/])
HDFS-3125. Add JournalService to enable Journal Daemon. Contributed by 
Suresh Srinivas. (Revision 1305726)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1305726
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/JournalService.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestJournalService.java


 Add a service that enables JournalDaemon
 

 Key: HDFS-3125
 URL: https://issues.apache.org/jira/browse/HDFS-3125
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-3125.patch, HDFS-3125.patch, HDFS-3125.patch, 
 HDFS-3125.patch


 In this subtask, I plan to add JournalService. It will provide the following 
 functionality:
 # Starts an RPC server with JournalProtocolService, or uses the RPC server 
 provided and adds the JournalProtocol service.
 # Registers with the namenode.
 # Receives JournalProtocol-related requests and hands them over to a 
 listener.
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3131) Improve TestStorageRestore

2012-03-27 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239479#comment-13239479
 ] 

Hudson commented on HDFS-3131:
--

Integrated in Hadoop-Mapreduce-trunk #1032 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1032/])
HDFS-3131. Improve TestStorageRestore. Contributed by Brandon Li. (Revision 
1305688)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1305688
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStorageRestore.java


 Improve TestStorageRestore
 --

 Key: HDFS-3131
 URL: https://issues.apache.org/jira/browse/HDFS-3131
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.24.0, 1.1.0
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Brandon Li
Priority: Minor
  Labels: newbie
 Fix For: 0.24.0, 1.1.0

 Attachments: HDFS-3131.branch-1.patch, HDFS-3131.patch, 
 HDFS-3131.patch


 Aaron has the following comments on TestStorageRestore in HDFS-3127.
 # removeStorageAccess, restoreAccess, and numStorageDirs can all be made 
 private
 # numStorageDirs can be made static
 # Rather than do set(Readable/Executable/Writable), use FileUtil.chmod(...).
 # Please put the contents of the test in a try/finally, with the calls to 
 shutdown the cluster and the 2NN in the finally block.
 # Some lines are over 80 chars.
 # No need for the numDatanodes variable - it's only used in one place.
 # Instead of xwr use rwx, which I think is a more common way of 
 describing permissions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2413) Add public APIs for safemode

2012-03-27 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2413:
--

Target Version/s:   (was: 0.23.2, 0.24.0)

 Add public APIs for safemode
 

 Key: HDFS-2413
 URL: https://issues.apache.org/jira/browse/HDFS-2413
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Harsh J
 Fix For: 0.24.0, 0.23.3

 Attachments: HDFS-2413.patch, HDFS-2413.patch, HDFS-2413.patch, 
 HDFS-2413.patch


 Currently the APIs for safe-mode are part of DistributedFileSystem, which is 
 supposed to be a private interface. However, dependent software often wants 
 to wait until the NN is out of safemode. Though it could poll trying to 
 create a file and catching SafeModeException, we should consider making some 
 of these APIs public.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3154) Add a notion of immutable/mutable files

2012-03-27 Thread Daryn Sharp (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239539#comment-13239539
 ] 

Daryn Sharp commented on HDFS-3154:
---

bq. Users have to pass immutable/mutable as a flag in file creation. This is an 
unmodifiable property of the created file.
I think the mutability of a file should be changeable at any time, not just 
during creation; Unix, for instance, has a chflags command for this.

bq. You should be able to make a copy of an immutable file that takes up no 
extra space, but can be appended to or truncated. The immutable file and the 
copy would share immutable blocks.
I really like this idea!  A while back I was proposing (in an offline discussion) COW 
copies, but there were questions about whether there is a valid use case.  
Modifying an immutable file would be such a use case.  It probably doesn't make 
sense to copy a very large file (the client has to stream the data down and back 
up) just because the user wants to append a little bit to an immutable file.

Given the lack of random writes, it should be relatively easy to handle append 
to the final block.  Either the final block could be re-replicated for an 
append, or the original file can remember the length of its last block so the 
block will continue to be shared between the original file and its copy.  The 
original file would need to re-replicate the block if it needs appending and 
the block is larger than what it thinks it should be -- ie. it's already been 
appended.  That gets tricky, so simply re-replicating the final COW block when 
appended would be the easiest.

Currently the block manager requires a 1-to-1 block to inode association.  That 
would have to be changed to 1-to-many, or a COW block would provide indirection 
to the real block.  I think the latter would be tricky unless real blocks 
contain a reference count.


 Add a notion of immutable/mutable files
 ---

 Key: HDFS-3154
 URL: https://issues.apache.org/jira/browse/HDFS-3154
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE

 The notion of immutable file is useful since it lets the system and tools 
 optimize certain things as discussed in [this email 
 thread|http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201203.mbox/%3CCAPn_vTuZomPmBTypP8_1xTr49Sj0fy7Mjhik4DbcAA+BLH53=g...@mail.gmail.com%3E].
   Also, many applications require only immutable files.  Here is a proposal:
 - Immutable files means that the file content is immutable.  Operations such 
 as append and truncate that change the file content are not allowed to act on 
 immutable files.  However, the meta data such as replication and permission 
 of an immutable file can be updated.  Immutable files can also be deleted or 
 renamed.
 - Users have to pass immutable/mutable as a flag in file creation.  This is 
 an unmodifiable property of the created file.
 - If users want to change the data in an immutable file, the file could be 
 copied to another file which is created as mutable.
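
As a purely hypothetical sketch of the creation-time flag in this proposal (CreateFlag.IMMUTABLE does not exist; the flag and its effect are illustrative assumptions only):

{code}
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class ImmutableCreateSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path path = new Path("/data/events.log");
    // Hypothetical: the proposal would add something like an IMMUTABLE create
    // flag; it is commented out here because no such flag exists today.
    EnumSet<CreateFlag> flags =
        EnumSet.of(CreateFlag.CREATE /*, CreateFlag.IMMUTABLE */);
    FSDataOutputStream out = fs.create(path, FsPermission.getDefault(), flags,
        4096, (short) 3, 128 * 1024 * 1024, null);
    out.write("immutable content".getBytes("UTF-8"));
    // After close, append/truncate on an immutable file would be rejected,
    // while metadata operations (replication, permission) would still work.
    out.close();
  }
}
{code}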

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3153) For HA, a logical name is visible in URIs - add an explicit logical name

2012-03-27 Thread Sanjay Radia (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239575#comment-13239575
 ] 

Sanjay Radia commented on HDFS-3153:


See [comment | 
https://issues.apache.org/jira/browse/HDFS-2839?focusedCommentId=13227729&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13227729
 that discusses the logical name.

I would like to call this a namespace volume. As part of federation, the 
plan was to treat namespaces (or namespace volumes) in a generic way. The namespace 
volume name is the logical name that is registered with DNS for accessing that 
namespace.

 For HA, a logical name is visible in URIs - add an explicit logical name
 

 Key: HDFS-3153
 URL: https://issues.apache.org/jira/browse/HDFS-3153
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Sanjay Radia



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3121) hdfs tests for HADOOP-8014

2012-03-27 Thread John George (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HDFS-3121:
--

Status: Open  (was: Patch Available)

 hdfs tests for HADOOP-8014
 --

 Key: HDFS-3121
 URL: https://issues.apache.org/jira/browse/HDFS-3121
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.2, 0.23.3
Reporter: John George
Assignee: John George
 Attachments: hdfs-3121.patch


 This JIRA is to write tests for viewing quota using viewfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3121) hdfs tests for HADOOP-8014

2012-03-27 Thread John George (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HDFS-3121:
--

Summary: hdfs tests for HADOOP-8014  (was: test for HADOOP-8194 (quota 
using viewfs))

 hdfs tests for HADOOP-8014
 --

 Key: HDFS-3121
 URL: https://issues.apache.org/jira/browse/HDFS-3121
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.2, 0.23.3
Reporter: John George
Assignee: John George
 Attachments: hdfs-3121.patch


 This JIRA is to write tests for viewing quota using viewfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3154) Add a notion of immutable/mutable files

2012-03-27 Thread Colin Patrick McCabe (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239728#comment-13239728
 ] 

Colin Patrick McCabe commented on HDFS-3154:


bq. The main benefit is caching.

Caching has nothing to do with whether files are immutable.  For example, Ceph 
has extensive client-side caching, but not immutable files.

HDFS could actually implement client-side caching very easily, because we don't 
make the consistency guarantees that filesystems with stronger semantics do.  
It is those semantic guarantees that make caching difficult and complex to 
implement, as well as often inefficient.

bq. Another benefit is to protect the files. It avoid accidentally 
append/truncate on immutable files.

You can already do this.  Create two users both in the users group.  Have the 
files owned by user #1 and put them in the users group.  Then use mode 0640.  
Then user #2 can read the files, but not write them.
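
A minimal sketch of that setup using the FileSystem API (the user, group, and path names are illustrative, and setOwner requires superuser privileges):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class ReadOnlyForGroup {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/data/important.dat");
    // Owned by user1, group "users", mode 0640: owner rw-, group r--, other ---.
    // Other members of "users" can read the file but cannot modify it.
    fs.setOwner(file, "user1", "users");
    fs.setPermission(file, new FsPermission((short) 0640));
  }
}
{code}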

Let's not reinvent the wheel.  Reinvented wheels tend to come out square, in my 
experience.

 Add a notion of immutable/mutable files
 ---

 Key: HDFS-3154
 URL: https://issues.apache.org/jira/browse/HDFS-3154
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE

 The notion of immutable file is useful since it lets the system and tools 
 optimize certain things as discussed in [this email 
 thread|http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201203.mbox/%3CCAPn_vTuZomPmBTypP8_1xTr49Sj0fy7Mjhik4DbcAA+BLH53=g...@mail.gmail.com%3E].
   Also, many applications require only immutable files.  Here is a proposal:
 - Immutable files means that the file content is immutable.  Operations such 
 as append and truncate that change the file content are not allowed to act on 
 immutable files.  However, the meta data such as replication and permission 
 of an immutable file can be updated.  Immutable files can also be deleted or 
 renamed.
 - Users have to pass immutable/mutable as a flag in file creation.  This is 
 an unmodifiable property of the created file.
 - If users want to change the data in an immutable file, the file could be 
 copied to another file which is created as mutable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync followed by closing that file

2012-03-27 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239737#comment-13239737
 ] 

Uma Maheswara Rao G commented on HDFS-3119:
---

The actual problem is that we set the replication factor down from 2 to 1 and then 
close the file.

If the complete call succeeds with a minimum replication factor of 1, and the 
other DN's addStoredBlock request arrives only after that, then that later call 
can process the over-replicated blocks, because the file may already have moved 
from under-construction to finalized.

The other case is that the complete call succeeds with 2 addStoredBlock requests 
arriving immediately, before the file inode under construction is moved to a 
finalized one; then nothing is left to process the over-replicated blocks.

I feel the solution for this problem is to add an over-replication check in the 
BlockManager#checkReplication method, which is called when a file is completed.

The current code checks only neededReplications:
{code}
public void checkReplication(Block block, int numExpectedReplicas) {
  // filter out containingNodes that are marked for decommission.
  NumberReplicas number = countNodes(block);
  if (isNeededReplication(block, numExpectedReplicas, number.liveReplicas())) {
    neededReplications.add(block,
        number.liveReplicas(),
        number.decommissionedReplicas(),
        numExpectedReplicas);
  }
}
{code}
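
A rough sketch of what that addition might look like (the over-replication call shown is an assumption for illustration, not the actual patch):

{code}
// Sketch only -- assumes an over-replication handler such as
// processOverReplicatedBlock(...) is reachable here; not the actual fix.
public void checkReplication(Block block, int numExpectedReplicas) {
  NumberReplicas number = countNodes(block);
  if (isNeededReplication(block, numExpectedReplicas, number.liveReplicas())) {
    neededReplications.add(block,
        number.liveReplicas(),
        number.decommissionedReplicas(),
        numExpectedReplicas);
  } else if (number.liveReplicas() > numExpectedReplicas) {
    // Proposed addition: if the completed file ended up with more replicas
    // than its (reduced) replication factor, queue the excess for deletion.
    processOverReplicatedBlock(block, (short) numExpectedReplicas, null, null);
  }
}
{code}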

 Overreplicated block is not deleted even after the replication factor is 
 reduced after sync followed by closing that file
 

 Key: HDFS-3119
 URL: https://issues.apache.org/jira/browse/HDFS-3119
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.24.0
Reporter: J.Andreina
Priority: Minor
 Fix For: 0.24.0, 0.23.2


 cluster setup:
 --
 1 NN, 2 DN, replication factor 2, block report interval 3 sec, block size 256 MB
 step1: write a file filewrite.txt of size 90 bytes with sync (not closed) 
 step2: change the replication factor to 1 using the command: ./hdfs dfs 
 -setrep 1 /filewrite.txt
 step3: close the file
 * At the NN side, the "Decreasing replication from 2 to 1 for 
 /filewrite.txt" log has occurred, but the overreplicated blocks are not 
 deleted even after the block report is sent from the DN
 * While listing the file in the console using ./hdfs dfs -ls, the 
 replication factor for that file is shown as 1
 * The fsck report for that file displays that the file is replicated to 2 
 datanodes

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3154) Add a notion of immutable/mutable files

2012-03-27 Thread Hari Mankude (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239790#comment-13239790
 ] 

Hari Mankude commented on HDFS-3154:



bq. You can already do this. Create two users both in the users group. Have 
the files owned by user #1 and put them in the users group. Then use mode 
0640. Then user #2 can read the files, but not write them.

Immutable implies that a file cannot be appended to or truncated by anybody after the 
file is finalized. I don't think permissions would be enough to guarantee 
immutability. Generally, guarantees of immutability have use cases in 
legal/SEC environments.



 Add a notion of immutable/mutable files
 ---

 Key: HDFS-3154
 URL: https://issues.apache.org/jira/browse/HDFS-3154
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE

 The notion of immutable file is useful since it lets the system and tools 
 optimize certain things as discussed in [this email 
 thread|http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201203.mbox/%3CCAPn_vTuZomPmBTypP8_1xTr49Sj0fy7Mjhik4DbcAA+BLH53=g...@mail.gmail.com%3E].
   Also, many applications require only immutable files.  Here is a proposal:
 - Immutable files means that the file content is immutable.  Operations such 
 as append and truncate that change the file content are not allowed to act on 
 immutable files.  However, the meta data such as replication and permission 
 of an immutable file can be updated.  Immutable files can also be deleted or 
 renamed.
 - Users have to pass immutable/mutable as a flag in file creation.  This is 
 an unmodifiable property of the created file.
 - If users want to change the data in an immutable file, the file could be 
 copied to another file which is created as mutable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-234) Integration with BookKeeper logging system

2012-03-27 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13239791#comment-13239791
 ] 

Uma Maheswara Rao G commented on HDFS-234:
--

It looks like this particular issue was not merged to the HA branch (but its fix 
versions are marked for the HA branch as well). 
It was committed only to trunk, so the 23 branch is missing this code.

We have to merge this to the 23 branch for the HA release, right (as this is also one 
option for shared storage)?

@Aaron, are we planning to merge this issue to 23?

 Integration with BookKeeper logging system
 --

 Key: HDFS-234
 URL: https://issues.apache.org/jira/browse/HDFS-234
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Luca Telloli
Assignee: Ivan Kelly
 Fix For: HA branch (HDFS-1623), 0.24.0

 Attachments: HADOOP-5189-trunk-preview.patch, 
 HADOOP-5189-trunk-preview.patch, HADOOP-5189-trunk-preview.patch, 
 HADOOP-5189-v.19.patch, HADOOP-5189.patch, HDFS-234.diff, HDFS-234.diff, 
 HDFS-234.diff, HDFS-234.diff, HDFS-234.diff, HDFS-234.patch, create.png, 
 hdfs_tpt_lat.pdf, zookeeper-dev-bookkeeper.jar, zookeeper-dev.jar


 BookKeeper is a system to reliably log streams of records 
 (https://issues.apache.org/jira/browse/ZOOKEEPER-276). The NameNode is a 
 natural target for such a system for being the metadata repository of the 
 entire file system for HDFS. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3121) hdfs tests for HADOOP-8014

2012-03-27 Thread John George (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HDFS-3121:
--

Attachment: hdfs-3121.patch

 hdfs tests for HADOOP-8014
 --

 Key: HDFS-3121
 URL: https://issues.apache.org/jira/browse/HDFS-3121
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.2, 0.23.3
Reporter: John George
Assignee: John George
 Attachments: hdfs-3121.patch, hdfs-3121.patch


 This JIRA is to write tests for viewing quota using viewfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3154) Add a notion of immutable/mutable files

2012-03-27 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239847#comment-13239847
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3154:
--

bq. Caching has nothing to do with whether files are immutable. ...

Do you think that caching would be more efficient if the files are immutable?

 Add a notion of immutable/mutable files
 ---

 Key: HDFS-3154
 URL: https://issues.apache.org/jira/browse/HDFS-3154
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE

 The notion of immutable file is useful since it lets the system and tools 
 optimize certain things as discussed in [this email 
 thread|http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201203.mbox/%3CCAPn_vTuZomPmBTypP8_1xTr49Sj0fy7Mjhik4DbcAA+BLH53=g...@mail.gmail.com%3E].
   Also, many applications require only immutable files.  Here is a proposal:
 - Immutable files means that the file content is immutable.  Operations such 
 as append and truncate that change the file content are not allowed to act on 
 immutable files.  However, the meta data such as replication and permission 
 of an immutable file can be updated.  Immutable files can also be deleted or 
 renamed.
 - Users have to pass immutable/mutable as a flag in file creation.  This is 
 an unmodifiable property of the created file.
 - If users want to change the data in an immutable file, the file could be 
 copied to another file which is created as mutable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3121) hdfs tests for HADOOP-8014

2012-03-27 Thread John George (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HDFS-3121:
--

Status: Patch Available  (was: Open)

 hdfs tests for HADOOP-8014
 --

 Key: HDFS-3121
 URL: https://issues.apache.org/jira/browse/HDFS-3121
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.2, 0.23.3
Reporter: John George
Assignee: John George
 Attachments: hdfs-3121.patch, hdfs-3121.patch


 This JIRA is to write tests for viewing quota using viewfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync followed by closing that file

2012-03-27 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239862#comment-13239862
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3119:
--

Hi J.Andreina,

Have you waited some time between step 2 and fsck?  I want to ask whether the block 
remains over-replicated forever.  Note that deleting replicas from over-replicated 
blocks is a low-priority task in the namenode.

 Overreplicated block is not deleted even after the replication factor is 
 reduced after sync followed by closing that file
 

 Key: HDFS-3119
 URL: https://issues.apache.org/jira/browse/HDFS-3119
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.24.0
Reporter: J.Andreina
Priority: Minor
 Fix For: 0.24.0, 0.23.2


 cluster setup:
 --
 1 NN, 2 DN, replication factor 2, block report interval 3 sec, block size 256 MB
 step1: write a file filewrite.txt of size 90 bytes with sync (not closed) 
 step2: change the replication factor to 1 using the command: ./hdfs dfs 
 -setrep 1 /filewrite.txt
 step3: close the file
 * At the NN side, the "Decreasing replication from 2 to 1 for 
 /filewrite.txt" log has occurred, but the overreplicated blocks are not 
 deleted even after the block report is sent from the DN
 * While listing the file in the console using ./hdfs dfs -ls, the 
 replication factor for that file is shown as 1
 * The fsck report for that file displays that the file is replicated to 2 
 datanodes

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3142) TestHDFSCLI.testAll is failing

2012-03-27 Thread Tsz Wo (Nicholas), SZE (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3142:
-

Component/s: test
   Assignee: Brandon Li

 TestHDFSCLI.testAll is failing
 --

 Key: HDFS-3142
 URL: https://issues.apache.org/jira/browse/HDFS-3142
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Eli Collins
Assignee: Brandon Li
Priority: Blocker

 TestHDFSCLI.testAll is failing in the latest trunk/23 builds. Last good build 
 was Mar 23rd.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3121) hdfs tests for HADOOP-8014

2012-03-27 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239934#comment-13239934
 ] 

Hadoop QA commented on HDFS-3121:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520178/hdfs-3121.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 4 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.cli.TestHDFSCLI
  org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
  org.apache.hadoop.hdfs.TestGetBlocks

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2105//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2105//console

This message is automatically generated.

 hdfs tests for HADOOP-8014
 --

 Key: HDFS-3121
 URL: https://issues.apache.org/jira/browse/HDFS-3121
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.2, 0.23.3
Reporter: John George
Assignee: John George
 Attachments: hdfs-3121.patch, hdfs-3121.patch


 This JIRA is to write tests for viewing quota using viewfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3143) TestGetBlocks.testGetBlocks is failing

2012-03-27 Thread Tsz Wo (Nicholas), SZE (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3143:
-

Component/s: test
   Assignee: Arpit Gupta

 TestGetBlocks.testGetBlocks is failing
 --

 Key: HDFS-3143
 URL: https://issues.apache.org/jira/browse/HDFS-3143
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Eli Collins
Assignee: Arpit Gupta

 TestGetBlocks.testGetBlocks is failing in the latest trunk/23 builds. Last 
 good build was Mar 23rd.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2185) HA: HDFS portion of ZK-based FailoverController

2012-03-27 Thread Sanjay Radia (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239954#comment-13239954
 ] 

Sanjay Radia commented on HDFS-2185:


Todd, I am even okay with a scanned diagram of a hand-drawn state machine. Right 
now I can't get a full understanding of the design without more details.

 HA: HDFS portion of ZK-based FailoverController
 ---

 Key: HDFS-2185
 URL: https://issues.apache.org/jira/browse/HDFS-2185
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Affects Versions: 0.24.0, 0.23.3
Reporter: Eli Collins
Assignee: Todd Lipcon
 Attachments: Failover_Controller.jpg, hdfs-2185.txt, hdfs-2185.txt, 
 hdfs-2185.txt, hdfs-2185.txt, zkfc-design.pdf, zkfc-design.pdf, 
 zkfc-design.tex


 This jira is for a ZK-based FailoverController daemon. The FailoverController 
 is a separate daemon from the NN that does the following:
 * Initiates leader election (via ZK) when necessary
 * Performs health monitoring (aka failure detection)
 * Performs fail-over (standby to active and active to standby transitions)
 * Heartbeats to ensure the liveness
 It should have the same/similar interface as the Linux HA RM to aid 
 pluggability.
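
For illustration, here is a minimal sketch of leader election via a ZooKeeper 
ephemeral znode, the core mechanism such a FailoverController would build on. 
The znode path, session timeout, and class name below are illustrative 
assumptions, not the actual ZKFC code.

{code}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

// Illustrative sketch only, not the actual ZKFC implementation.
// Assumes the parent znode /hdfs-ha already exists.
public class SimpleLeaderElector implements Watcher {
  private static final String LOCK_PATH = "/hdfs-ha/active-lock"; // example path
  private final ZooKeeper zk;

  public SimpleLeaderElector(String zkQuorum) throws Exception {
    // 5000 ms session timeout is an arbitrary example value.
    zk = new ZooKeeper(zkQuorum, 5000, this);
  }

  /** Try to become active by creating an ephemeral lock znode. */
  public boolean tryBecomeActive(byte[] myId) throws Exception {
    try {
      zk.create(LOCK_PATH, myId, Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
      return true;   // we hold the lock: transition the local NN to active
    } catch (KeeperException.NodeExistsException e) {
      // Another FC is active; watch the lock so we learn when it disappears.
      zk.exists(LOCK_PATH, this);
      return false;  // stay in standby
    }
  }

  @Override
  public void process(WatchedEvent event) {
    // On lock deletion (the active FC died or lost its session), a real
    // controller would re-run the election, fence the old active if needed,
    // and only then transition the local NN to active.
  }
}
{code}

A real controller would combine this election loop with the health-monitoring 
and fencing steps listed in the description above.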

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2185) HA: HDFS portion of ZK-based FailoverController

2012-03-27 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13239958#comment-13239958
 ] 

Todd Lipcon commented on HDFS-2185:
---

Hi Sanjay. There is a state machine in the PDF I uploaded yesterday. Please see 
section 2.6 of the latest attached PDF.

 HA: HDFS portion of ZK-based FailoverController
 ---

 Key: HDFS-2185
 URL: https://issues.apache.org/jira/browse/HDFS-2185
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Affects Versions: 0.24.0, 0.23.3
Reporter: Eli Collins
Assignee: Todd Lipcon
 Attachments: Failover_Controller.jpg, hdfs-2185.txt, hdfs-2185.txt, 
 hdfs-2185.txt, hdfs-2185.txt, zkfc-design.pdf, zkfc-design.pdf, 
 zkfc-design.tex


 This jira is for a ZK-based FailoverController daemon. The FailoverController 
 is a separate daemon from the NN that does the following:
 * Initiates leader election (via ZK) when necessary
 * Performs health monitoring (aka failure detection)
 * Performs fail-over (standby to active and active to standby transitions)
 * Heartbeats to ensure the liveness
 It should have the same/similar interface as the Linux HA RM to aid 
 pluggability.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3094) add -nonInteractive and -force option to namenode -format command

2012-03-27 Thread Arpit Gupta (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13239963#comment-13239963
 ] 

Arpit Gupta commented on HDFS-3094:
---

Hey Todd, the latest patch addresses all your earlier comments; could you please 
review it?

Thanks

 add -nonInteractive and -force option to namenode -format command
 -

 Key: HDFS-3094
 URL: https://issues.apache.org/jira/browse/HDFS-3094
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.24.0, 1.0.2
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HDFS-3094.branch-1.0.patch, HDFS-3094.branch-1.0.patch, 
 HDFS-3094.branch-1.0.patch, HDFS-3094.branch-1.0.patch, 
 HDFS-3094.branch-1.0.patch, HDFS-3094.branch-1.0.patch, HDFS-3094.patch, 
 HDFS-3094.patch, HDFS-3094.patch, HDFS-3094.patch


 Currently the bin/hadoop namenode -format command prompts the user for a Y/N to 
 set up the directories in the local file system.
 -force : the namenode formats the directories without prompting
 -nonInterActive : namenode -format will return with an exit code of 1 if the 
 dir already exists (see the sketch below).
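
A hypothetical sketch of how the two flags could gate the confirmation prompt; 
the class, method, and flag names here are illustrative, not the actual NameNode 
format code.

{code}
import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;

// Hypothetical sketch only; class, method, and flag names are illustrative,
// not the actual NameNode format code.
public class FormatPrompt {
  /** @return true if the storage directory may be (re)formatted. */
  static boolean confirmFormat(File dir, boolean force, boolean nonInteractive)
      throws IOException {
    String[] contents = dir.list();
    if (!dir.exists() || contents == null || contents.length == 0) {
      return true;   // nothing there yet, just format
    }
    if (force) {
      return true;   // -force: format the existing directory without prompting
    }
    if (nonInteractive) {
      return false;  // -nonInteractive: caller maps this to exit code 1
    }
    // Default behaviour: ask the user for Y/N on the console.
    System.err.print("Re-format filesystem in " + dir + " ? (Y or N) ");
    String line =
        new BufferedReader(new InputStreamReader(System.in)).readLine();
    return line != null && line.trim().equalsIgnoreCase("y");
  }
}
{code}

With -nonInteractive, the caller would translate a false return value into an 
exit code of 1 as described above.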

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2617) Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution

2012-03-27 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240010#comment-13240010
 ] 

Eli Collins commented on HDFS-2617:
---

In the meantime, can you answer Alejandro's question wrt whether we need to keep 
the SSL HTTP configuration?

 Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution
 --

 Key: HDFS-2617
 URL: https://issues.apache.org/jira/browse/HDFS-2617
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-2617-a.patch


 The current approach to secure and authenticate nn web services is based on 
 Kerberized SSL and was developed when a SPNEGO solution wasn't available. Now 
 that we have one, we can get rid of the non-standard KSSL and use SPNEGO 
 throughout.  This will simplify setup and configuration.  Also, Kerberized 
 SSL is a non-standard approach with its own quirks and dark corners 
 (HDFS-2386).
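
As a rough illustration of the SPNEGO-over-plain-HTTP approach, the hadoop-auth 
client can be used as sketched below; the URL is only an example, and this is 
not the patch's actual image-transfer code.

{code}
import java.net.HttpURLConnection;
import java.net.URL;

import org.apache.hadoop.security.authentication.client.AuthenticatedURL;

// Illustrative sketch of an HTTP request authenticated via SPNEGO using the
// hadoop-auth client; the URL is an example, not the patch's actual code.
public class SpnegoFetchExample {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://namenode.example.com:50070/getimage?getimage=1");
    AuthenticatedURL.Token token = new AuthenticatedURL.Token();
    // Negotiates Kerberos over plain HTTP; no Kerberized SSL keystore needed.
    HttpURLConnection conn = new AuthenticatedURL().openConnection(url, token);
    System.out.println("HTTP response: " + conn.getResponseCode());
  }
}
{code}

The Kerberos credentials are negotiated over ordinary HTTP, which is what lets 
the KSSL-specific setup described above be dropped.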

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync follwed by closing that file

2012-03-27 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240101#comment-13240101
 ] 

Uma Maheswara Rao G commented on HDFS-3119:
---

Nicholas, thanks a lot for taking a look. I tried this case in my cluster with 
only one block; in that case this block itself should have priority.

For your question: I don't see any over-replication processing from the 
neededReplications priority queues. We just remove the block from the 
needed-replication queues. Am I missing something?

{code}
if (numEffectiveReplicas >= requiredReplication) {
  if ( (pendingReplications.getNumReplicas(block) > 0) ||
       (blockHasEnoughRacks(block)) ) {
    neededReplications.remove(block, priority); // remove from neededReplications
    neededReplications.decrementReplicationIndex(priority);
    NameNode.stateChangeLog.info("BLOCK* "
        + "Removing block " + block
        + " from neededReplications as it has enough replicas.");
    continue;
  }
{code}

Processing of over-replicated blocks happens straight away from the 
addStoredBlock and setReplication calls.


Anyway let's see what happened in Andreina's cluster.

 Overreplicated block is not deleted even after the replication factor is 
 reduced after sync follwed by closing that file
 

 Key: HDFS-3119
 URL: https://issues.apache.org/jira/browse/HDFS-3119
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.24.0
Reporter: J.Andreina
Priority: Minor
 Fix For: 0.24.0, 0.23.2


 cluster setup:
 --
 1 NN, 2 DN, replication factor 2, block report interval 3 sec, block size 256 MB
 step1: write a file filewrite.txt of size 90 bytes with sync (not closed)
 step2: change the replication factor to 1 using the command: ./hdfs dfs -setrep 1 /filewrite.txt
 step3: close the file
 * At the NN side the "Decreasing replication from 2 to 1 for /filewrite.txt" log has 
 occurred, but the overreplicated block is not deleted even after the block report is 
 sent from the DN
 * While listing the file in the console using ./hdfs dfs -ls, the replication factor 
 for that file is shown as 1
 * The fsck report for that file shows that the file is replicated to 2 
 datanodes
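
A rough reproduction sketch of the steps above using the public HDFS client API; 
the path and sizes are illustrative, and hflush stands in for the sync in step1.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Rough reproduction sketch, not a committed test; assumes the default
// FileSystem in the Configuration points at the 1-NN/2-DN cluster above.
public class SetrepAfterSyncRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/filewrite.txt");

    // step1: write ~90 bytes with replication 2 and sync, but do not close yet
    FSDataOutputStream out = fs.create(file, (short) 2);
    out.write(new byte[90]);
    out.hflush();                      // the "sync" in the scenario above

    // step2: reduce the replication factor while the file is still open
    fs.setReplication(file, (short) 1);

    // step3: close the file; the extra replica should eventually be deleted
    out.close();
  }
}
{code}

After the close, one would expect the NN to schedule deletion of the extra 
replica once block reports arrive, which is what the report says does not happen.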

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3155) Clean up FSDataset implemenation related code.

2012-03-27 Thread Tsz Wo (Nicholas), SZE (Created) (JIRA)
Clean up FSDataset implemenation related code.
--

 Key: HDFS-3155
 URL: https://issues.apache.org/jira/browse/HDFS-3155
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3155) Clean up FSDataset implemenation related code.

2012-03-27 Thread Tsz Wo (Nicholas), SZE (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3155:
-

Attachment: h3155_20120327.patch

h3155_20120327.patch
- combine DataNodeTestUtils and DataNodeAdapter;
- remove "on file creation" from DatanodeUtil.DISK_ERROR;
- remove "throws IOException" from some methods in DataStorage and BlockReceiver;
- change replica and replicaVisibleLength in BlockSender to local variables;
- remove ReplicaUnderRecovery.getOrignalReplicaState().

 Clean up FSDataset implemenation related code.
 --

 Key: HDFS-3155
 URL: https://issues.apache.org/jira/browse/HDFS-3155
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h3155_20120327.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2617) Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution

2012-03-27 Thread Jakob Homan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240104#comment-13240104
 ] 

Jakob Homan commented on HDFS-2617:
---

bq. In the meantime, can you answer Alejandro's question wrt whether we need to 
keep the SSL HTTP configuration?
We don't.  As described in the comments above, the posted patch is what we've 
deployed here, not the final version to be committed.  Removing SSL is a fine 
thing to do once I finish the main patch.

 Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution
 --

 Key: HDFS-2617
 URL: https://issues.apache.org/jira/browse/HDFS-2617
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-2617-a.patch


 The current approach to secure and authenticate nn web services is based on 
 Kerberized SSL and was developed when a SPNEGO solution wasn't available. Now 
 that we have one, we can get rid of the non-standard KSSL and use SPNEGO 
 throughout.  This will simplify setup and configuration.  Also, Kerberized 
 SSL is a non-standard approach with its own quirks and dark corners 
 (HDFS-2386).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3155) Clean up FSDataset implemenation related code.

2012-03-27 Thread Tsz Wo (Nicholas), SZE (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3155:
-

Status: Patch Available  (was: Open)

 Clean up FSDataset implemenation related code.
 --

 Key: HDFS-3155
 URL: https://issues.apache.org/jira/browse/HDFS-3155
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h3155_20120327.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync follwed by closing that file

2012-03-27 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240107#comment-13240107
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3119:
--

Hi Uma, I see your point.  I guess the replicas of the over-replicated block will 
be deleted after the next block reports.  If that is the case, then we don't have 
to do anything.

 Overreplicated block is not deleted even after the replication factor is 
 reduced after sync follwed by closing that file
 

 Key: HDFS-3119
 URL: https://issues.apache.org/jira/browse/HDFS-3119
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.24.0
Reporter: J.Andreina
Priority: Minor
 Fix For: 0.24.0, 0.23.2


 cluster setup:
 --
 1 NN, 2 DN, replication factor 2, block report interval 3 sec, block size 256 MB
 step1: write a file filewrite.txt of size 90 bytes with sync (not closed)
 step2: change the replication factor to 1 using the command: ./hdfs dfs -setrep 1 /filewrite.txt
 step3: close the file
 * At the NN side the "Decreasing replication from 2 to 1 for /filewrite.txt" log has 
 occurred, but the overreplicated block is not deleted even after the block report is 
 sent from the DN
 * While listing the file in the console using ./hdfs dfs -ls, the replication factor 
 for that file is shown as 1
 * The fsck report for that file shows that the file is replicated to 2 
 datanodes

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync follwed by closing that file

2012-03-27 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240113#comment-13240113
 ] 

Uma Maheswara Rao G commented on HDFS-3119:
---

But if the block is already updated with the target datanodes, then we need not 
re-add the block from the block report, right?
In this case the block is already in the blocks map with its locations, so we 
need not select this block for the toAdd list, and the call will not go to 
addStoredBlock again.

{code}
// add replica if appropriate
if (reportedState == ReplicaState.FINALIZED
    && storedBlock.findDatanode(dn) < 0) {
  toAdd.add(storedBlock);
}
{code}

I am just referring to the reportDiff flow here. Please correct me if I am 
missing something.

 Overreplicated block is not deleted even after the replication factor is 
 reduced after sync follwed by closing that file
 

 Key: HDFS-3119
 URL: https://issues.apache.org/jira/browse/HDFS-3119
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.24.0
Reporter: J.Andreina
Priority: Minor
 Fix For: 0.24.0, 0.23.2


 cluster setup:
 --
 1 NN, 2 DN, replication factor 2, block report interval 3 sec, block size 256 MB
 step1: write a file filewrite.txt of size 90 bytes with sync (not closed)
 step2: change the replication factor to 1 using the command: ./hdfs dfs -setrep 1 /filewrite.txt
 step3: close the file
 * At the NN side the "Decreasing replication from 2 to 1 for /filewrite.txt" log has 
 occurred, but the overreplicated block is not deleted even after the block report is 
 sent from the DN
 * While listing the file in the console using ./hdfs dfs -ls, the replication factor 
 for that file is shown as 1
 * The fsck report for that file shows that the file is replicated to 2 
 datanodes

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2617) Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution

2012-03-27 Thread Jakob Homan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240127#comment-13240127
 ] 

Jakob Homan commented on HDFS-2617:
---

In terms of when I can get the 23 patch done, I've scheduled time next week.  
This isn't worth holding up any point release of 23, as it's an improvement 
rather than a bug fix, so please don't hold one up for it.

 Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution
 --

 Key: HDFS-2617
 URL: https://issues.apache.org/jira/browse/HDFS-2617
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Jakob Homan
 Attachments: HDFS-2617-a.patch


 The current approach to secure and authenticate nn web services is based on 
 Kerberized SSL and was developed when a SPNEGO solution wasn't available. Now 
 that we have one, we can get rid of the non-standard KSSL and use SPNEGO 
 throughout.  This will simplify setup and configuration.  Also, Kerberized 
 SSL is a non-standard approach with its own quirks and dark corners 
 (HDFS-2386).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3155) Clean up FSDataset implemenation related code.

2012-03-27 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240134#comment-13240134
 ] 

Hadoop QA commented on HDFS-3155:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520222/h3155_20120327.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 37 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
  org.apache.hadoop.hdfs.TestGetBlocks
  org.apache.hadoop.cli.TestHDFSCLI

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2106//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2106//console

This message is automatically generated.

 Clean up FSDataset implemenation related code.
 --

 Key: HDFS-3155
 URL: https://issues.apache.org/jira/browse/HDFS-3155
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h3155_20120327.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3143) TestGetBlocks.testGetBlocks is failing

2012-03-27 Thread Arpit Gupta (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HDFS-3143:
--

Attachment: HDFS-3143.patch

Changed the assert to compare the class name rather than the message.
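
A minimal sketch of that kind of change; the method under test and the exception 
type below are hypothetical, not the actual TestGetBlocks code.

{code}
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;

import org.junit.Test;

// Hypothetical sketch only; the method under test and the expected exception
// type are illustrative, not the actual TestGetBlocks code.
public class AssertOnClassNameExample {

  private void doSomethingThatShouldFail() {
    throw new IllegalArgumentException("wording that may change between builds");
  }

  @Test
  public void testFailureIsReportedWithExpectedType() {
    try {
      doSomethingThatShouldFail();
      fail("expected an exception");
    } catch (Exception e) {
      // Comparing the class name is less brittle than matching message text,
      // which can change wording between builds.
      assertEquals(IllegalArgumentException.class.getName(),
          e.getClass().getName());
    }
  }
}
{code}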

 TestGetBlocks.testGetBlocks is failing
 --

 Key: HDFS-3143
 URL: https://issues.apache.org/jira/browse/HDFS-3143
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Eli Collins
Assignee: Arpit Gupta
 Attachments: HDFS-3143.patch


 TestGetBlocks.testGetBlocks is failing in the latest trunk/23 builds. Last 
 good build was Mar 23rd.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3143) TestGetBlocks.testGetBlocks is failing

2012-03-27 Thread Arpit Gupta (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HDFS-3143:
--

Target Version/s: 0.24.0, 0.23.3  (was: 0.23.3)
  Status: Patch Available  (was: Open)

 TestGetBlocks.testGetBlocks is failing
 --

 Key: HDFS-3143
 URL: https://issues.apache.org/jira/browse/HDFS-3143
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Eli Collins
Assignee: Arpit Gupta
 Attachments: HDFS-3143.patch


 TestGetBlocks.testGetBlocks is failing in the latest trunk/23 builds. Last 
 good build was Mar 23rd.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3155) Clean up FSDataset implemenation related code.

2012-03-27 Thread Suresh Srinivas (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240167#comment-13240167
 ] 

Suresh Srinivas commented on HDFS-3155:
---

Minor comment - weird formatting in DataStorage.java. Other than that, the patch 
looks good. +1.

 Clean up FSDataset implemenation related code.
 --

 Key: HDFS-3155
 URL: https://issues.apache.org/jira/browse/HDFS-3155
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h3155_20120327.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync follwed by closing that file

2012-03-27 Thread J.Andreina (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240174#comment-13240174
 ] 

J.Andreina commented on HDFS-3119:
--

Hi,
  Even after the datanode has sent the block report many times, the 
overreplicated block is not deleted.
Even if I execute fsck for that particular file after some time, the block 
remains overreplicated.

 Overreplicated block is not deleted even after the replication factor is 
 reduced after sync follwed by closing that file
 

 Key: HDFS-3119
 URL: https://issues.apache.org/jira/browse/HDFS-3119
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.24.0
Reporter: J.Andreina
Priority: Minor
 Fix For: 0.24.0, 0.23.2


 cluster setup:
 --
 1 NN, 2 DN, replication factor 2, block report interval 3 sec, block size 256 MB
 step1: write a file filewrite.txt of size 90 bytes with sync (not closed)
 step2: change the replication factor to 1 using the command: ./hdfs dfs -setrep 1 /filewrite.txt
 step3: close the file
 * At the NN side the "Decreasing replication from 2 to 1 for /filewrite.txt" log has 
 occurred, but the overreplicated block is not deleted even after the block report is 
 sent from the DN
 * While listing the file in the console using ./hdfs dfs -ls, the replication factor 
 for that file is shown as 1
 * The fsck report for that file shows that the file is replicated to 2 
 datanodes

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3134) harden edit log loader against malformed or malicious input

2012-03-27 Thread Suresh Srinivas (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240181#comment-13240181
 ] 

Suresh Srinivas commented on HDFS-3134:
---

bq. It's clear that we want these exceptions to be thrown as IOException 
instead of as unchecked exceptions. We also want to avoid out of memory 
situations.
From which methods?

Unchecked exceptions indicate programming errors. Blindly turning them into 
checked exceptions is not a good idea (as you say in some of your comments). 
I am not sure which part of the code you are talking about.

 harden edit log loader against malformed or malicious input
 ---

 Key: HDFS-3134
 URL: https://issues.apache.org/jira/browse/HDFS-3134
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe

 Currently, the edit log loader does not handle bad or malicious input 
 sensibly.
 We can often cause OutOfMemory exceptions, null pointer exceptions, or other 
 unchecked exceptions to be thrown by feeding the edit log loader bad input.  
 In some environments, an out of memory error can cause the JVM process to be 
 terminated.
 It's clear that we want these exceptions to be thrown as IOException instead 
 of as unchecked exceptions.  We also want to avoid out of memory situations.
 The main task here is to put a sensible upper limit on the lengths of arrays 
 and strings we allocate on command.  The other task is to try to avoid 
 creating unchecked exceptions (by dereferencing potentially-NULL pointers, 
 for example).  Instead, we should verify ahead of time and give a more 
 sensible error message that reflects the problem with the input.
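
A minimal sketch of the bounded-allocation idea; the limit constant and the 
method name are illustrative, not the actual edit log loader code.

{code}
import java.io.DataInput;
import java.io.IOException;

// Hypothetical sketch only; MAX_OP_FIELD_LENGTH and readBoundedBytes are
// illustrative names, not the actual edit log loader code.
public class BoundedRead {
  // Upper bound on any single length field read from the edit log.
  private static final int MAX_OP_FIELD_LENGTH = 1 << 20; // 1 MB, for example

  static byte[] readBoundedBytes(DataInput in) throws IOException {
    int len = in.readInt();
    if (len < 0 || len > MAX_OP_FIELD_LENGTH) {
      // Malformed or malicious input: fail with IOException instead of
      // attempting a huge allocation that could trigger an OutOfMemoryError.
      throw new IOException("Invalid field length " + len
          + " in edit log (max " + MAX_OP_FIELD_LENGTH + ")");
    }
    byte[] data = new byte[len];
    in.readFully(data);
    return data;
  }
}
{code}

Validating the length before the new byte[len] allocation turns malformed input 
into an IOException rather than an OutOfMemoryError or an unchecked exception.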

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3143) TestGetBlocks.testGetBlocks is failing

2012-03-27 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240214#comment-13240214
 ] 

Hadoop QA commented on HDFS-3143:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520229/HDFS-3143.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.hdfs.server.common.TestDistributedUpgrade
  org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
  org.apache.hadoop.cli.TestHDFSCLI

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2107//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2107//console

This message is automatically generated.

 TestGetBlocks.testGetBlocks is failing
 --

 Key: HDFS-3143
 URL: https://issues.apache.org/jira/browse/HDFS-3143
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Eli Collins
Assignee: Arpit Gupta
 Attachments: HDFS-3143.patch


 TestGetBlocks.testGetBlocks is failing in the latest trunk/23 builds. Last 
 good build was Mar 23rd.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira