[jira] [Commented] (HDFS-2994) If lease is recovered successfully inline with create, create can fail

2012-03-31 Thread VinayaKumar B (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243669#comment-13243669
 ] 

VinayaKumar B commented on HDFS-2994:
-

I am able to reproduce up to the point where recoverLease releases the lease 
because all blocks are COMPLETE, but I am not able to reproduce the 
*replaceNode* failure.

The scenario may be as follows:

1. The client finished writing the last packet to the pipeline and received 
the ack.
2. Before the DNs reported the finalized block, the client's first 
*completeFile* call reached the NN and marked the block as COMPLETE, but the 
lease was not removed since minReplication was not satisfied. Say the client 
then died.
3. The DNs then reported the blocks, and the BlockMap was updated accordingly.
4. recoverLease was then called on the same file. As part of this, the file 
was finalized and the lease was removed because all blocks were COMPLETE.
5. append was then called on the same file.

In the reported issue, append fails because of the *replaceNode* failure. But 
when I tried to reproduce this (see the sketch below), append successfully 
reopened the stream.
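
A minimal sketch of the reproduction attempt, assuming a running MiniDFSCluster 
(the path and sizes are illustrative, not the exact test code):

{code}
// Steps 1-5 from above; the client "dies" by never calling out.close().
DistributedFileSystem fs = (DistributedFileSystem) cluster.getFileSystem();
Path path = new Path("/test/lease-recovery");
FSDataOutputStream out = fs.create(path);
out.write(new byte[1024]);   // step 1: last packet written
out.hflush();                // and acked by the pipeline
// step 2: client dies here, stream is left open
fs.recoverLease(path);       // step 4: all blocks COMPLETE, lease released
FSDataOutputStream append = fs.append(path);  // step 5
append.close();
{code}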

> If lease is recovered successfully inline with create, create can fail
> --
>
> Key: HDFS-2994
> URL: https://issues.apache.org/jira/browse/HDFS-2994
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.24.0
>Reporter: Todd Lipcon
>
> I saw the following logs on my test cluster:
> {code}
> 2012-02-22 14:35:22,887 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: startFile: recover lease 
> [Lease.  Holder: DFSClient_attempt_1329943893604_0007_m_000376_0_453973131_1, 
> pendingcreates: 1], src=/benchmarks/TestDFSIO/io_data/test_io_6 from client 
> DFSClient_attempt_1329943893604_0007_m_000376_0_453973131_1
> 2012-02-22 14:35:22,887 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering lease=[Lease. 
>  Holder: DFSClient_attempt_1329943893604_0007_m_000376_0_453973131_1, 
> pendingcreates: 1], src=/benchmarks/TestDFSIO/io_data/test_io_6
> 2012-02-22 14:35:22,888 WARN org.apache.hadoop.hdfs.StateChange: BLOCK* 
> internalReleaseLease: All existing blocks are COMPLETE, lease removed, file 
> closed.
> 2012-02-22 14:35:22,888 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> FSDirectory.replaceNode: failed to remove 
> /benchmarks/TestDFSIO/io_data/test_io_6
> 2012-02-22 14:35:22,888 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.startFile: FSDirectory.replaceNode: failed to remove 
> /benchmarks/TestDFSIO/io_data/test_io_6
> {code}
> It seems like, if {{recoverLeaseInternal}} succeeds in {{startFileInternal}}, 
> then the INode will be replaced with a new one, meaning the later 
> {{replaceNode}} call can fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3130) Move FSDataset implementation to a package

2012-03-31 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243666#comment-13243666
 ] 

Uma Maheswara Rao G commented on HDFS-3130:
---

+1 for the latest patch. Looks good to me.

> Move FSDataset implementation to a package
> -
>
> Key: HDFS-3130
> URL: https://issues.apache.org/jira/browse/HDFS-3130
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: data-node
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h3130_20120328_svn_mv.patch, h3130_20120329b.patch, 
> h3130_20120329b_svn_mv.patch, h3130_20120330.patch, 
> h3130_20120330_svn_mv.patch, svn_mv.sh, svn_mv.sh
>
>






[jira] [Commented] (HDFS-2656) Implement a pure c client based on webhdfs

2012-03-31 Thread Zhanwei.Wang (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243664#comment-13243664
 ] 

Zhanwei.Wang commented on HDFS-2656:



Hi donal, 
Good question. Performance is an important issue, and the lib needs to be 
designed and implemented carefully.

From the lib side, I use libcurl to deal with the HTTP protocol and a buffer 
in the lib to optimize performance. The same design was also used in another 
project of ours, and the performance of libcurl is OK.

For the transmission, HTTP uses a TCP connection. To read data from the 
server, only the raw data is transferred. To write to the server, I use 
"chunked" transfer encoding, and the overhead is just a small header per chunk.
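
For illustration, the same framing is visible from Java's HttpURLConnection 
(the lib itself uses libcurl; the URL below is a placeholder, not a real 
cluster):

{code}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ChunkedWrite {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://namenode:50070/webhdfs/v1/tmp/f?op=CREATE");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    // Frame the request body as chunks; each chunk costs only a hex
    // length line plus a trailing CRLF, so the per-chunk overhead is tiny.
    conn.setChunkedStreamingMode(64 * 1024);
    OutputStream out = conn.getOutputStream();
    out.write(new byte[64 * 1024]);  // raw file data
    out.close();
    System.out.println(conn.getResponseCode());
  }
}
{code}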

On the server side, performance depends on the Jetty server. In the previous 
prototype, the Jetty server or webhdfs had a performance problem when I used 
HTTP/1.1 to read data from the server, but the problem did not reproduce when 
I switched to HTTP/1.0.

I did a simple performance test on the previous prototype, and more 
performance testing is planned.

Currently, writing to HDFS may still fail under heavy workload. I am not sure 
whether it is a bug in my code or in HDFS; I am working on it (it seems not to 
be my bug -_-). The documentation is being written, and functional testing is 
finished. As soon as I get permission to open-source it and finish the doc, 
you can test it yourself. I think it will not take too long.


> Implement a pure c client based on webhdfs
> --
>
> Key: HDFS-2656
> URL: https://issues.apache.org/jira/browse/HDFS-2656
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Zhanwei.Wang
>
> Currently, the implementation of libhdfs is based on JNI. The overhead of 
> the JVM seems a little big, and libhdfs also cannot be used in an 
> environment without HDFS.
> It seems a good idea to implement a pure C client by wrapping webhdfs. It 
> could also be used to access different versions of HDFS.





[jira] [Commented] (HDFS-3144) Refactor DatanodeID#getName by use

2012-03-31 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243654#comment-13243654
 ] 

Hadoop QA commented on HDFS-3144:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520811/hdfs-3144.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 96 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

-1 javac.  The patch appears to cause tar ant target to fail.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to cause Findbugs (version 1.3.9) to fail.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed the unit tests build.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2146//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2146//console

This message is automatically generated.

> Refactor DatanodeID#getName by use
> --
>
> Key: HDFS-3144
> URL: https://issues.apache.org/jira/browse/HDFS-3144
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3144.txt, hdfs-3144.txt
>
>
> DatanodeID#getName, which returns a string containing the IP:xferPort of a 
> Datanode, is used in a variety of contexts:
> # Putting the ID in a log message
> # Connecting to the DN for data transfer
> # Getting a string to use as a key (eg for comparison)
> # Using as a hostname, eg for excludes/includes, topology files
> Same for DatanodeID#getHost, which returns just the IP part, and sometimes we 
> use it as a key, sometimes we tack on the IPC port, etc.
> Let's have a method for each use, eg toString can be used for #1, a new 
> method (eg getDataXferAddr) for #2, a new method (eg getKey) for #3, new 
> method (eg getHostID) for #4, etc. Aside from the code being more clear, we 
> can change the value for particular uses, eg we can change the format in a 
> log message without changing the address that clients use to connect to the DN, 
> or modify the address used for data transfer without changing the other uses.





[jira] [Updated] (HDFS-3144) Refactor DatanodeID#getName by use

2012-03-31 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3144:
--

Attachment: hdfs-3144.txt

The last patch had a variable rename that modified Common. This one is without 
that change.

> Refactor DatanodeID#getName by use
> --
>
> Key: HDFS-3144
> URL: https://issues.apache.org/jira/browse/HDFS-3144
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3144.txt, hdfs-3144.txt
>
>
> DatanodeID#getName, which returns a string containing the IP:xferPort of a 
> Datanode, is used in a variety of contexts:
> # Putting the ID in a log message
> # Connecting to the DN for data transfer
> # Getting a string to use as a key (eg for comparison)
> # Using as a hostname, eg for excludes/includes, topology files
> Same for DatanodeID#getHost, which returns just the IP part, and sometimes we 
> use it as a key, sometimes we tack on the IPC port, etc.
> Let's have a method for each use, eg toString can be used for #1, a new 
> method (eg getDataXferAddr) for #2, a new method (eg getKey) for #3, new 
> method (eg getHostID) for #4, etc. Aside from the code being more clear, we 
> can change the value for particular uses, eg we can change the format in a 
> log message without changing the address that clients use to connect to the DN, 
> or modify the address used for data transfer without changing the other uses.





[jira] [Commented] (HDFS-3171) The DatanodeID "name" field is overloaded

2012-03-31 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243651#comment-13243651
 ] 

Hudson commented on HDFS-3171:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1974 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1974/])
HDFS-3171. The DatanodeID "name" field is overloaded. Contributed by Eli 
Collins (Revision 1308014)

 Result = ABORTED
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1308014
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeID.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/UpgradeManagerDatanode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeRegistration.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestConnCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSAddressConfig.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeBlockScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestIsMethodSupported.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReport.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDeleteBlockPool.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java


> The DatanodeID "name" field is overloaded 
> --
>
> Key: HDFS-3171
> URL: https://issues.apache.org/jira/browse/HDFS-3171
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 2.0.0
>
> Attachments: hdfs-3171.txt
>
>
> The DatanodeID "name" field is currently overloaded, when the DN creates a 
> DatanodeID to register with the NN it sets "name" to be the datanode 
> hostname, which is the DN's "hostName" member. This isnot necesarily a FQDN, 
> it is either set explicitly or determined by the DNS class, which could 
> return the machine's hostname or the result of a DNS lookup, if configured to 
> do so. The NN then clobbers the "name" field of the DatanodeID with the IP 
> part of the new DatanodeID "name" field it creates (and sets the DatanodeID 
> "hostName" field to the reported "name"). The DN gets the DatanodeID back 
> from the NN and clobbers its "hostName" member with the "name" field of the 
> returned DatanodeID. This makes the code hard to reason about eg 
> DN#getMachine name sometimes returns a hostname and sometimes not, depending 
> on when it's called in sequence with the registration. Ditto for uses of the 
> "name" field. I think these contortions were originally perf

[jira] [Commented] (HDFS-3144) Refactor DatanodeID#getName by use

2012-03-31 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243650#comment-13243650
 ] 

Hadoop QA commented on HDFS-3144:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520810/hdfs-3144.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 99 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2145//console

This message is automatically generated.

> Refactor DatanodeID#getName by use
> --
>
> Key: HDFS-3144
> URL: https://issues.apache.org/jira/browse/HDFS-3144
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3144.txt
>
>
> DatanodeID#getName, which returns a string containing the IP:xferPort of a 
> Datanode, is used in a variety of contexts:
> # Putting the ID in a log message
> # Connecting to the DN for data transfer
> # Getting a string to use as a key (eg for comparison)
> # Using as a hostname, eg for excludes/includes, topology files
> Same for DatanodeID#getHost, which returns just the IP part, and sometimes we 
> use it as a key, sometimes we tack on the IPC port, etc.
> Let's have a method for each use, eg toString can be used for #1, a new 
> method (eg getDataXferAddr) for #2, a new method (eg getKey) for #3, new 
> method (eg getHostID) for #4, etc. Aside from the code being more clear, we 
> can change the value for particular uses, eg we can change the format in a 
> log message without changing the address that clients use to connect to the DN, 
> or modify the address used for data transfer without changing the other uses.





[jira] [Updated] (HDFS-3144) Refactor DatanodeID#getName by use

2012-03-31 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3144:
--

Attachment: hdfs-3144.txt

Patch attached.

DatanodeID#getName is no longer available. The following methods are 
introduced so that each context in which we use the "name" has its own method:
- getHostName - when the DN hostname is needed
- getIpAddr   - when the DN IP is needed
- getXferAddr - IP + xfer port (what getName returned)
- getIpcAddr  - IP + ipc port
- getInfoAddr - IP + info port
- toString    - for logging

DatanodeInfo#getName still implements Node#getName for topology, since 
DatanodeID doesn't need to implement this interface. It returns 
DatanodeID#getXferAddr, so the behavior is unchanged.
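
A rough sketch of the resulting shape (the field names here are assumptions 
for illustration, not necessarily the committed code):

{code}
public class DatanodeID {
  private String ipAddr;    // the old "name", renamed per this issue
  private String hostName;
  private int xferPort;
  private int ipcPort;
  private int infoPort;

  public String getHostName() { return hostName; }
  public String getIpAddr()   { return ipAddr; }
  public String getXferAddr() { return ipAddr + ":" + xferPort; }
  public String getIpcAddr()  { return ipAddr + ":" + ipcPort; }
  public String getInfoAddr() { return ipAddr + ":" + infoPort; }

  @Override
  public String toString()    { return getXferAddr(); }
}
{code}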

> Refactor DatanodeID#getName by use
> --
>
> Key: HDFS-3144
> URL: https://issues.apache.org/jira/browse/HDFS-3144
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3144.txt
>
>
> DatanodeID#getName, which returns a string containing the IP:xferPort of a 
> Datanode, is used in a variety of contexts:
> # Putting the ID in a log message
> # Connecting to the DN for data transfer
> # Getting a string to use as a key (eg for comparison)
> # Using as a hostname, eg for excludes/includes, topology files
> Same for DatanodeID#getHost, which returns just the IP part, and sometimes we 
> use it as a key, sometimes we tack on the IPC port, etc.
> Let's have a method for each use, eg toString can be used for #1, a new 
> method (eg getDataXferAddr) for #2, a new method (eg getKey) for #3, new 
> method (eg getHostID) for #4, etc. Aside from the code being more clear, we 
> can change the value for particular uses, eg we can change the format in a 
> log message without changing the address that clients use to connect to the DN, 
> or modify the address used for data transfer without changing the other uses.





[jira] [Updated] (HDFS-3144) Refactor DatanodeID#getName by use

2012-03-31 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3144:
--

Target Version/s: 2.0.0  (was: 0.23.3)
  Status: Patch Available  (was: Open)

> Refactor DatanodeID#getName by use
> --
>
> Key: HDFS-3144
> URL: https://issues.apache.org/jira/browse/HDFS-3144
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3144.txt
>
>
> DatanodeID#getName, which returns a string containing the IP:xferPort of a 
> Datanode, is used in a variety of contexts:
> # Putting the ID in a log message
> # Connecting to the DN for data transfer
> # Getting a string to use as a key (eg for comparison)
> # Using as a hostname, eg for excludes/includes, topology files
> Same for DatanodeID#getHost, which returns just the IP part, and sometimes we 
> use it as a key, sometimes we tack on the IPC port, etc.
> Let's have a method for each use, eg toString can be used for #1, a new 
> method (eg getDataXferAddr) for #2, a new method (eg getKey) for #3, new 
> method (eg getHostID) for #4, etc. Aside from the code being more clear, we 
> can change the value for particular uses, eg we can change the format in a 
> log message without changing the address that clients use to connect to the DN, 
> or modify the address used for data transfer without changing the other uses.





[jira] [Commented] (HDFS-3171) The DatanodeID "name" field is overloaded

2012-03-31 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243646#comment-13243646
 ] 

Hudson commented on HDFS-3171:
--

Integrated in Hadoop-Common-trunk-Commit #1961 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1961/])
HDFS-3171. The DatanodeID "name" field is overloaded. Contributed by Eli 
Collins (Revision 1308014)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1308014
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeID.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/UpgradeManagerDatanode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeRegistration.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestConnCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSAddressConfig.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeBlockScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestIsMethodSupported.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReport.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDeleteBlockPool.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java


> The DatanodeID "name" field is overloaded 
> --
>
> Key: HDFS-3171
> URL: https://issues.apache.org/jira/browse/HDFS-3171
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 2.0.0
>
> Attachments: hdfs-3171.txt
>
>
> The DatanodeID "name" field is currently overloaded, when the DN creates a 
> DatanodeID to register with the NN it sets "name" to be the datanode 
> hostname, which is the DN's "hostName" member. This isnot necesarily a FQDN, 
> it is either set explicitly or determined by the DNS class, which could 
> return the machine's hostname or the result of a DNS lookup, if configured to 
> do so. The NN then clobbers the "name" field of the DatanodeID with the IP 
> part of the new DatanodeID "name" field it creates (and sets the DatanodeID 
> "hostName" field to the reported "name"). The DN gets the DatanodeID back 
> from the NN and clobbers its "hostName" member with the "name" field of the 
> returned DatanodeID. This makes the code hard to reason about eg 
> DN#getMachine name sometimes returns a hostname and sometimes not, depending 
> on when it's called in sequence with the registration. Ditto for uses of the 
> "name" field. I think these contortions were originally performed 

[jira] [Commented] (HDFS-3171) The DatanodeID "name" field is overloaded

2012-03-31 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243645#comment-13243645
 ] 

Hudson commented on HDFS-3171:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2036 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2036/])
HDFS-3171. The DatanodeID "name" field is overloaded. Contributed by Eli 
Collins (Revision 1308014)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1308014
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeID.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/UpgradeManagerDatanode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeRegistration.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestConnCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSAddressConfig.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeBlockScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestIsMethodSupported.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReport.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDeleteBlockPool.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDiskError.java


> The DatanodeID "name" field is overloaded 
> --
>
> Key: HDFS-3171
> URL: https://issues.apache.org/jira/browse/HDFS-3171
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 2.0.0
>
> Attachments: hdfs-3171.txt
>
>
> The DatanodeID "name" field is currently overloaded, when the DN creates a 
> DatanodeID to register with the NN it sets "name" to be the datanode 
> hostname, which is the DN's "hostName" member. This isnot necesarily a FQDN, 
> it is either set explicitly or determined by the DNS class, which could 
> return the machine's hostname or the result of a DNS lookup, if configured to 
> do so. The NN then clobbers the "name" field of the DatanodeID with the IP 
> part of the new DatanodeID "name" field it creates (and sets the DatanodeID 
> "hostName" field to the reported "name"). The DN gets the DatanodeID back 
> from the NN and clobbers its "hostName" member with the "name" field of the 
> returned DatanodeID. This makes the code hard to reason about eg 
> DN#getMachine name sometimes returns a hostname and sometimes not, depending 
> on when it's called in sequence with the registration. Ditto for uses of the 
> "name" field. I think these contortions were originally performed beca

[jira] [Updated] (HDFS-3171) The DatanodeID "name" field is overloaded

2012-03-31 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3171:
--

  Resolution: Fixed
   Fix Version/s: 2.0.0
Target Version/s:   (was: 2.0.0)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've committed this and merged to branch-2.

> The DatanodeID "name" field is overloaded 
> --
>
> Key: HDFS-3171
> URL: https://issues.apache.org/jira/browse/HDFS-3171
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 2.0.0
>
> Attachments: hdfs-3171.txt
>
>
> The DatanodeID "name" field is currently overloaded, when the DN creates a 
> DatanodeID to register with the NN it sets "name" to be the datanode 
> hostname, which is the DN's "hostName" member. This isnot necesarily a FQDN, 
> it is either set explicitly or determined by the DNS class, which could 
> return the machine's hostname or the result of a DNS lookup, if configured to 
> do so. The NN then clobbers the "name" field of the DatanodeID with the IP 
> part of the new DatanodeID "name" field it creates (and sets the DatanodeID 
> "hostName" field to the reported "name"). The DN gets the DatanodeID back 
> from the NN and clobbers its "hostName" member with the "name" field of the 
> returned DatanodeID. This makes the code hard to reason about eg 
> DN#getMachine name sometimes returns a hostname and sometimes not, depending 
> on when it's called in sequence with the registration. Ditto for uses of the 
> "name" field. I think these contortions were originally performed because the 
> DatanodeID didn't have a hostName field (it was part of DatanodeInfo) and so 
> there was no way to communicate both at the same time. Now that the hostName 
> field is in DatanodeID (as of HDFS-3164) we can establish the invariant that 
> the "name" field always and only has an IP address and the "hostName" field 
> always and only has a hostname.
> In HDFS-3144 I'm going to rename the "name" field so its clear that it 
> contains an IP address. The above is enough scope for one change.





[jira] [Commented] (HDFS-3171) The DatanodeID "name" field is overloaded

2012-03-31 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243640#comment-13243640
 ] 

Eli Collins commented on HDFS-3171:
---

Thanks ATM!

#1 Updated the comment to just indicate it's clobbering the IP, rather than 
comment wrt what I think the original motivation was. This behavior will change 
in HDFS-3146 (multiple IPs reported, we won't squash them)
#2 Updated the comment to be more clear (java's InetAddress#getLocalHost != 
"locahost")
#3 I'm going to punt this one to HDFS-3144, as in this patch there is only one 
getXferAddress method

> The DatanodeID "name" field is overloaded 
> --
>
> Key: HDFS-3171
> URL: https://issues.apache.org/jira/browse/HDFS-3171
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3171.txt
>
>
> The DatanodeID "name" field is currently overloaded, when the DN creates a 
> DatanodeID to register with the NN it sets "name" to be the datanode 
> hostname, which is the DN's "hostName" member. This isnot necesarily a FQDN, 
> it is either set explicitly or determined by the DNS class, which could 
> return the machine's hostname or the result of a DNS lookup, if configured to 
> do so. The NN then clobbers the "name" field of the DatanodeID with the IP 
> part of the new DatanodeID "name" field it creates (and sets the DatanodeID 
> "hostName" field to the reported "name"). The DN gets the DatanodeID back 
> from the NN and clobbers its "hostName" member with the "name" field of the 
> returned DatanodeID. This makes the code hard to reason about eg 
> DN#getMachine name sometimes returns a hostname and sometimes not, depending 
> on when it's called in sequence with the registration. Ditto for uses of the 
> "name" field. I think these contortions were originally performed because the 
> DatanodeID didn't have a hostName field (it was part of DatanodeInfo) and so 
> there was no way to communicate both at the same time. Now that the hostName 
> field is in DatanodeID (as of HDFS-3164) we can establish the invariant that 
> the "name" field always and only has an IP address and the "hostName" field 
> always and only has a hostname.
> In HDFS-3144 I'm going to rename the "name" field so its clear that it 
> contains an IP address. The above is enough scope for one change.





[jira] [Commented] (HDFS-2656) Implement a pure c client based on webhdfs

2012-03-31 Thread donal zang (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243634#comment-13243634
 ] 

donal zang commented on HDFS-2656:
--

Hi Zhanwei,
I'm interested in the C client.
But one thing I'm worried about is the performance, since it uses the HTTP 
protocol.
Have you tested or thought about this?

> Implement a pure c client based on webhdfs
> --
>
> Key: HDFS-2656
> URL: https://issues.apache.org/jira/browse/HDFS-2656
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Zhanwei.Wang
>
> Currently, the implementation of libhdfs is based on JNI. The overhead of 
> the JVM seems a little big, and libhdfs also cannot be used in an 
> environment without HDFS.
> It seems a good idea to implement a pure C client by wrapping webhdfs. It 
> could also be used to access different versions of HDFS.





[jira] [Commented] (HDFS-3094) add -nonInteractive and -force option to namenode -format command

2012-03-31 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243630#comment-13243630
 ] 

Todd Lipcon commented on HDFS-3094:
---

Style nits:
{code}
+for(i=i+1 ; i < argsLen ; i++) {
{code}
space after "for", no space before ";"s, spaces in "i = i + 1"


{code}
+  // as no cluster id specified, return null
{code}
should read "*if* no cluster id specified"


{code}
+// make sure the user did not send -force or -noninteractive as an
+// option after -clusterid or send no id after -clusterid
{code}
replace "send" with "specify" (nothing's being sent here): "make sure the user 
did not specify -force or -nonInteractive as an option after -clusterId, or 
forget to specify an ID after -clusterId". Or maybe just "Make sure an id is 
specified, and not another flag."

This comment inside the block seems redundant with the one above:
{code}
+  // return null if the user sent something like
+  // clusterid -force or clusterid -nonInteractive
{code}
I think you can remove it.

However, it seems like you should log a warning here with their mistake, 
something like:
{code}
LOG.fatal("Must specify a valid cluster ID after the -clusterId flag")
{code}



{code}
+if (clusterId.isEmpty()
+|| clusterId.equalsIgnoreCase(StartupOption.FORCE.getName())
+|| clusterId.equalsIgnoreCase(StartupOption.NONINTERACTIVE
+.getName())) {
{code}
formatting: ||s should go on the line before. ie:
{code}
if (clusterId.isEmpty() ||
    clusterId.equalsIgnoreCase(StartupOption.FORCE.getName()) ||
    clusterId.equalsIgnoreCase(
        StartupOption.NONINTERACTIVE.getName())) {
{code}


{code}
-boolean aborted = format(conf, false);
+  boolean aborted = format(conf, startOpt.getForce(),
{code}
bad indentation on this line


{code}
+  protected static class ExitException extends SecurityException {
{code}
Why is this protected instead of private?

Please move the inner classes to the bottom of the file


In the tests, please change the '//' style comments before each test case to be 
javadoc style

- in all the test cases, I think you need to add the line:
{code}
fail("createNameNode() did not call System.exit()")
{code}
inside the {{try}} clause. Otherwise if the code just returned without exiting, 
we wouldn't catch the bug.


{code}
+final Configuration config = new Configuration();
+config.set(DFS_NAMENODE_NAME_DIR_KEY, hdfsDir.getPath());
{code}
This code shows up in almost all the cases. Can it be done in the setup method?

> add -nonInteractive and -force option to namenode -format command
> -
>
> Key: HDFS-3094
> URL: https://issues.apache.org/jira/browse/HDFS-3094
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.24.0, 1.0.2
>Reporter: Arpit Gupta
>Assignee: Arpit Gupta
> Attachments: HDFS-3094.branch-1.0.patch, HDFS-3094.branch-1.0.patch, 
> HDFS-3094.branch-1.0.patch, HDFS-3094.branch-1.0.patch, 
> HDFS-3094.branch-1.0.patch, HDFS-3094.branch-1.0.patch, HDFS-3094.patch, 
> HDFS-3094.patch, HDFS-3094.patch, HDFS-3094.patch
>
>
> Currently the bin/hadoop namenode -format prompts the user for a Y/N to set up 
> the directories in the local file system.
> -force : namenode formats the directories without prompting
> -nonInterActive : namenode format will return with an exit code of 1 if the 
> dir exists.





[jira] [Updated] (HDFS-3094) add -nonInteractive and -force option to namenode -format command

2012-03-31 Thread Todd Lipcon (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-3094:
--

Comment: was deleted

(was: Almost everything below is style/formatting nits. In general, try to 
match the style of the surrounding code. Otherwise looks good, thanks for 
adding this nice improvement!



{code}
+for(i=i+1 ; i < argsLen ; i++) {
{code}
space after "for", no space before ";"s, spaces in "i = i + 1"


{code}
+  // as no cluster id specified, return null
{code}
should read "*if* no cluster id specified"


{code}
+// make sure the user did not send -force or -noninteractive as an
+// option after -clusterid or send no id after -clusterid
{code}
replace "send" with "specify" (nothing's being sent here): "make sure the user 
did not specify -force or -nonInteractive as an option after -clusterId, or 
forget to specify an ID after -clusterId". Or maybe just "Make sure an id is 
specified, and not another flag."

This comment inside the block seems redundant with the one above:
{code}
+  // return null if the user sent something like
+  // clusterid -force or clusterid -nonInteractive
{code}
I think you can remove it.

However, it seems like you should log a warning here with their mistake, 
something like:
{code}
LOG.fatal("Must specify a valid cluster ID after the -clusterId flag")
{code}



{code}
+if (clusterId.isEmpty()
+|| clusterId.equalsIgnoreCase(StartupOption.FORCE.getName())
+|| clusterId.equalsIgnoreCase(StartupOption.NONINTERACTIVE
+.getName())) {
{code}
formatting: ||s should go on the line before. ie:
{code}
if (clusterId.isEmpty() ||
    clusterId.equalsIgnoreCase(StartupOption.FORCE.getName()) ||
    clusterId.equalsIgnoreCase(
        StartupOption.NONINTERACTIVE.getName())) {
{code}


{code}
-boolean aborted = format(conf, false);
+  boolean aborted = format(conf, startOpt.getForce(),
{code}
bad indentation on this line


{code}
+  protected static class ExitException extends SecurityException {
{code}
Why is this protected instead of private?

Please move the inner classes to the bottom of the file


In the tests, please change the '//' style comments before each test case to be 
javadoc style

- in all the test cases, I think you need to add the line:
{code}
fail("createNameNode() did not call System.exit()")
{code}
inside the {{try}} clause. Otherwise if the code just returned without exiting, 
we wouldn't catch the bug.


{code}
+final Configuration config = new Configuration();
+config.set(DFS_NAMENODE_NAME_DIR_KEY, hdfsDir.getPath());
{code}
This code shows up in almost all the cases. Can it be done in the setup method?)

> add -nonInteractive and -force option to namenode -format command
> -
>
> Key: HDFS-3094
> URL: https://issues.apache.org/jira/browse/HDFS-3094
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.24.0, 1.0.2
>Reporter: Arpit Gupta
>Assignee: Arpit Gupta
> Attachments: HDFS-3094.branch-1.0.patch, HDFS-3094.branch-1.0.patch, 
> HDFS-3094.branch-1.0.patch, HDFS-3094.branch-1.0.patch, 
> HDFS-3094.branch-1.0.patch, HDFS-3094.branch-1.0.patch, HDFS-3094.patch, 
> HDFS-3094.patch, HDFS-3094.patch, HDFS-3094.patch
>
>
> Currently the bin/hadoop namenode -format prompts the user for a Y/N to set up 
> the directories in the local file system.
> -force : namenode formats the directories without prompting
> -nonInterActive : namenode format will return with an exit code of 1 if the 
> dir exists.





[jira] [Commented] (HDFS-3094) add -nonInteractive and -force option to namenode -format command

2012-03-31 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243629#comment-13243629
 ] 

Todd Lipcon commented on HDFS-3094:
---

Almost everything below is style/formatting nits. In general, try to match the 
style of the surrounding code. Otherwise looks good, thanks for adding this 
nice improvement!



{code}
+for(i=i+1 ; i < argsLen ; i++) {
{code}
space after "for", no space before ";"s, spaces in "i = i + 1"


{code}
+  // as no cluster id specified, return null
{code}
should read "*if* no cluster id specified"


{code}
+// make sure the user did not send -force or -noninteractive as an
+// option after -clusterid or send no id after -clusterid
{code}
replace "send" with "specify" (nothing's being sent here): "make sure the user 
did not specify -force or -nonInteractive as an option after -clusterId, or 
forget to specify an ID after -clusterId". Or maybe just "Make sure an id is 
specified, and not another flag."

This comment inside the block seems redundant with the one above:
{code}
+  // return null if the user sent something like
+  // clusterid -force or clusterid -nonInteractive
{code}
I think you can remove it.

However, it seems like you should log a warning here with their mistake, 
something like:
{code}
LOG.fatal("Must specify a valid cluster ID after the -clusterId flag")
{code}



{code}
+if (clusterId.isEmpty()
+|| clusterId.equalsIgnoreCase(StartupOption.FORCE.getName())
+|| clusterId.equalsIgnoreCase(StartupOption.NONINTERACTIVE
+.getName())) {
{code}
formatting: ||s should go on the line before. ie:
{code}
if (clusterId.isEmpty() ||
    clusterId.equalsIgnoreCase(StartupOption.FORCE.getName()) ||
    clusterId.equalsIgnoreCase(
        StartupOption.NONINTERACTIVE.getName())) {
{code}


{code}
-boolean aborted = format(conf, false);
+  boolean aborted = format(conf, startOpt.getForce(),
{code}
bad indentation on this line


{code}
+  protected static class ExitException extends SecurityException {
{code}
Why is this protected instead of private?

Please move the inner classes to the bottom of the file


In the tests, please change the '//' style comments before each test case to be 
javadoc style

- in all the test cases, I think you need to add the line:
{code}
fail("createNameNode() did not call System.exit()")
{code}
inside the {{try}} clause. Otherwise if the code just returned without exiting, 
we wouldn't catch the bug.


{code}
+final Configuration config = new Configuration();
+config.set(DFS_NAMENODE_NAME_DIR_KEY, hdfsDir.getPath());
{code}
This code shows up in almost all the cases. Can it be done in the setup method?

> add -nonInteractive and -force option to namenode -format command
> -
>
> Key: HDFS-3094
> URL: https://issues.apache.org/jira/browse/HDFS-3094
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.24.0, 1.0.2
>Reporter: Arpit Gupta
>Assignee: Arpit Gupta
> Attachments: HDFS-3094.branch-1.0.patch, HDFS-3094.branch-1.0.patch, 
> HDFS-3094.branch-1.0.patch, HDFS-3094.branch-1.0.patch, 
> HDFS-3094.branch-1.0.patch, HDFS-3094.branch-1.0.patch, HDFS-3094.patch, 
> HDFS-3094.patch, HDFS-3094.patch, HDFS-3094.patch
>
>
> Currently the bin/hadoop namenode -format prompts the user for a Y/N to set up 
> the directories in the local file system.
> -force : namenode formats the directories without prompting
> -nonInterActive : namenode format will return with an exit code of 1 if the 
> dir exists.





[jira] [Commented] (HDFS-3171) The DatanodeID "name" field is overloaded

2012-03-31 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243627#comment-13243627
 ] 

Aaron T. Myers commented on HDFS-3171:
--

Patch looks pretty good to me. Just a few nits follow. +1 once these are 
addressed.

# "the reported IP may not service IPC requests." - ambiguous whether "may not" 
means "might not" or "cannot".
# "determined automatically by performing a DNS lookup on the localhost IP." - 
you don't literally mean "localhost IP" here, do you? i.e. 127.0.0.1? Perhaps 
"the host's IP" ?
# I find the lack of meaningful name distinction between "getXferAddress" and 
"getXferAddr" unfortunate. How about rename "getXferAddr" to 
"getXferAddressAsString", and implement it in terms of getXferAddress? Also, in 
some places you call "getXferAddress().toString()", which seems like it should 
just use getXferAddress, right? We could even just scrap the one that returns a 
string and always call .toString() on the InetSocketAddress.
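
In code form, that suggestion reads roughly like this (field names assumed for 
illustration):

{code}
public InetSocketAddress getXferAddress() {
  return new InetSocketAddress(ipAddr, xferPort);
}

public String getXferAddressAsString() {
  return getXferAddress().toString();
}
{code}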

> The DatanodeID "name" field is overloaded 
> --
>
> Key: HDFS-3171
> URL: https://issues.apache.org/jira/browse/HDFS-3171
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3171.txt
>
>
> The DatanodeID "name" field is currently overloaded, when the DN creates a 
> DatanodeID to register with the NN it sets "name" to be the datanode 
> hostname, which is the DN's "hostName" member. This isnot necesarily a FQDN, 
> it is either set explicitly or determined by the DNS class, which could 
> return the machine's hostname or the result of a DNS lookup, if configured to 
> do so. The NN then clobbers the "name" field of the DatanodeID with the IP 
> part of the new DatanodeID "name" field it creates (and sets the DatanodeID 
> "hostName" field to the reported "name"). The DN gets the DatanodeID back 
> from the NN and clobbers its "hostName" member with the "name" field of the 
> returned DatanodeID. This makes the code hard to reason about eg 
> DN#getMachine name sometimes returns a hostname and sometimes not, depending 
> on when it's called in sequence with the registration. Ditto for uses of the 
> "name" field. I think these contortions were originally performed because the 
> DatanodeID didn't have a hostName field (it was part of DatanodeInfo) and so 
> there was no way to communicate both at the same time. Now that the hostName 
> field is in DatanodeID (as of HDFS-3164) we can establish the invariant that 
> the "name" field always and only has an IP address and the "hostName" field 
> always and only has a hostname.
> In HDFS-3144 I'm going to rename the "name" field so its clear that it 
> contains an IP address. The above is enough scope for one change.





[jira] [Commented] (HDFS-1599) Umbrella Jira for Improving HBASE support in HDFS

2012-03-31 Thread Jonathan Hsieh (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243626#comment-13243626
 ] 

Jonathan Hsieh commented on HDFS-1599:
--

I looked into the history of #9 (HDFS-2412, HDFS-1620). It was suggested that 
the enums are essentially final classes, so we can't shim the SafeModeAction 
enum into FSConstants via subclassing.
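
For example, a Java enum type is implicitly final, so nothing can extend it 
(SafeModeAction values shown for illustration):

{code}
enum SafeModeAction { SAFEMODE_LEAVE, SAFEMODE_ENTER, SAFEMODE_GET }

// Does not compile: "cannot inherit from final SafeModeAction"
// class ExtendedSafeModeAction extends SafeModeAction { }
{code}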

> Umbrella Jira for Improving HBASE support in HDFS
> -
>
> Key: HDFS-1599
> URL: https://issues.apache.org/jira/browse/HDFS-1599
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sanjay Radia
>
> Umbrella Jira for improved HBase support in HDFS





[jira] [Commented] (HDFS-3171) The DatanodeID "name" field is overloaded

2012-03-31 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243624#comment-13243624
 ] 

Eli Collins commented on HDFS-3171:
---

I realized I can also simplify the includes/excludes checking in 
DatanodeManager#registerDatanode. It currently takes both a DatanodeID and an 
IP address because the ID didn't have both; now that it does, we can just use 
the ID. I'll do this cleanup in HDFS-3144 though, since it will be clearer 
there.

> The DatanodeID "name" field is overloaded 
> --
>
> Key: HDFS-3171
> URL: https://issues.apache.org/jira/browse/HDFS-3171
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3171.txt
>
>
> The DatanodeID "name" field is currently overloaded, when the DN creates a 
> DatanodeID to register with the NN it sets "name" to be the datanode 
> hostname, which is the DN's "hostName" member. This isnot necesarily a FQDN, 
> it is either set explicitly or determined by the DNS class, which could 
> return the machine's hostname or the result of a DNS lookup, if configured to 
> do so. The NN then clobbers the "name" field of the DatanodeID with the IP 
> part of the new DatanodeID "name" field it creates (and sets the DatanodeID 
> "hostName" field to the reported "name"). The DN gets the DatanodeID back 
> from the NN and clobbers its "hostName" member with the "name" field of the 
> returned DatanodeID. This makes the code hard to reason about eg 
> DN#getMachine name sometimes returns a hostname and sometimes not, depending 
> on when it's called in sequence with the registration. Ditto for uses of the 
> "name" field. I think these contortions were originally performed because the 
> DatanodeID didn't have a hostName field (it was part of DatanodeInfo) and so 
> there was no way to communicate both at the same time. Now that the hostName 
> field is in DatanodeID (as of HDFS-3164) we can establish the invariant that 
> the "name" field always and only has an IP address and the "hostName" field 
> always and only has a hostname.
> In HDFS-3144 I'm going to rename the "name" field so it's clear that it 
> contains an IP address. The above is enough scope for one change.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3004) Implement Recovery Mode

2012-03-31 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243623#comment-13243623
 ] 

Todd Lipcon commented on HDFS-3004:
---

Looking pretty good, just style nits:

{code}
+/* This is a trivial implementation which just assumes that any errors mean
+ * that there is nothing more of value in the log.  You can override this.
+ */
{code}
Style: use // comments for inline comments. I think it's clearer to say 
"subclasses will likely want to override this" instead of "You can override 
this". The other option, which I think is somewhat reasonable, is to just do: 
{{throw new UnsupportedOperationException(this.getClass() + " does not support 
resyncing to next edit");}} since it seems like the implementation that's there 
now is worse than just failing.


{code}
+   * After this function returns, the next call to reaOp will return either
{code}
typo: readOp



bq. Basically, we NEVER want to apply a transaction that has a lower or equal 
ID to the previous one. That's why the ''continue'' is there in the else 
clause. We will try to recover from an edit log with gaps in it, though. (That 
is sort of the point of recovery).

I can see cases where we might want to "apply the out-of-order edit anyway". 
But let's leave that for a follow-up JIRA.



{code}
+LOG.error("Encountered exception on operation " +
+  op.toString() + ": \n" + StringUtils.stringifyException(e));
{code}
Use the second argument of LOG.error to log the exception, rather than using 
stringifyException.
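
i.e. something like:
{code}
// commons-logging renders the stack trace itself when the Throwable is
// passed as the second argument:
LOG.error("Encountered exception on operation " + op, e);
{code}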


{code}
+}
+finally {
{code}
style: combine onto one line. There's another instance of this inside 
EltsTestGarbageInEditLog later in the patch.


{code}
+/* If we encountered an exception or an end-of-file condition,
+ * do not advance the input stream. */
{code}
// comments


{code}
+  if (!skipBrokenEdits)
+throw e;
+  if (in.skip(1) < 1)
+return null;
...
+  if (response.equalsIgnoreCase(firstChoice))
+return firstChoice;
...
+if (operation == StartupOption.RECOVER)
+  return;
{code}
need {} braces (also a few other places).

For a simple "return" or "break" you can put it on the same line as the if, 
without braces, if it fits; otherwise we always use {}s.
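
i.e.:
{code}
// Preferred:
if (!skipBrokenEdits) {
  throw e;
}
// Acceptable when it fits on one line:
if (operation == StartupOption.RECOVER) return;
{code}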


{code}
+} else {
+assertEquals(prevTxId, elts.getLastValidTxId());
+}
{code}
indentation


{code}
+  public static Set<Long> setFromArray(long[] arr) {
...
{code}

Instead of this, you can just use Sets.newHashSet(1,2,3...) from Guava.
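
e.g.:
{code}
// Guava's Sets.newHashSet varargs factory replaces the helper:
Set<Long> validTxIds = Sets.newHashSet(1L, 2L, 3L);
{code}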



{code}
+  Set <Long> validTxIds = elts.getValidTxIds();
{code}
no space between {{Set}} and {{<Long>}}


{code}
+  if ((elfos != null) && (elfos.isOpen()))
+elfos.close();
+  if (elfis != null)
+elfis.close();
{code}
use IOUtils.cleanup
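
i.e. (assuming both streams are Closeable):
{code}
// IOUtils.cleanup closes each stream, logging rather than throwing any
// IOException, so the original failure is not masked:
IOUtils.cleanup(LOG, elfos, elfis);
{code}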


> Implement Recovery Mode
> ---
>
> Key: HDFS-3004
> URL: https://issues.apache.org/jira/browse/HDFS-3004
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: tools
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-3004.010.patch, HDFS-3004.011.patch, 
> HDFS-3004.012.patch, HDFS-3004.013.patch, HDFS-3004.015.patch, 
> HDFS-3004.016.patch, HDFS-3004.017.patch, HDFS-3004.018.patch, 
> HDFS-3004.019.patch, HDFS-3004.020.patch, HDFS-3004.022.patch, 
> HDFS-3004.023.patch, HDFS-3004.024.patch, HDFS-3004.026.patch, 
> HDFS-3004.027.patch, HDFS-3004.029.patch, HDFS-3004.030.patch, 
> HDFS-3004.031.patch, HDFS-3004.032.patch, HDFS-3004.033.patch, 
> HDFS-3004.034.patch, HDFS-3004.035.patch, HDFS-3004.036.patch, 
> HDFS-3004__namenode_recovery_tool.txt
>
>
> When the NameNode metadata is corrupt for some reason, we want to be able to 
> fix it.  Obviously, we would prefer never to get in this case.  In a perfect 
> world, we never would.  However, bad data on disk can happen from time to 
> time, because of hardware errors or misconfigurations.  In the past we have 
> had to correct it manually, which is time-consuming and which can result in 
> downtime.
> Recovery mode is initialized by the system administrator.  When the NameNode 
> starts up in Recovery Mode, it will try to load the FSImage file, apply all 
> the edits from the edits log, and then write out a new image.  Then it will 
> shut down.
> Unlike in the normal startup process, the recovery mode startup process will 
> be interactive.  When the NameNode finds something that is inconsistent, it 
> will prompt the operator as to what it should do.   The operator can also 
> choose to take the first option for all prompts by starting up with the '-f' 
> flag, or typing 'a' at one of the prompts.
> I have reused as much code as possible from the NameNode in this tool.  
> Hopefully, the ef

[jira] [Commented] (HDFS-3150) Add option for clients to contact DNs via hostname in branch-1

2012-03-31 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243614#comment-13243614
 ] 

Todd Lipcon commented on HDFS-3150:
---

Mostly looks good, just some nits:

{code}
+LOG.info("Opened streaming server at " + tmpPort);
{code}
This isn't the terminology used elsewhere. "Data transfer server" or "data 
transceiver server" is better.


{code}
 // Connect to backup machine
+final String dnName = targets[0].getName(connectToDnViaHostname);
{code}
I think it's better to call this {{mirrorName}} or {{mirrorAddrString}}.


{code}
+  final String dnName = proxySource.getName(connectToDnViaHostname);
+  InetSocketAddress proxyAddr = NetUtils.createSocketAddr(dnName);
{code}
Similar here -- {{proxyDnName}} or {{proxyAddrString}}


> Add option for clients to contact DNs via hostname in branch-1
> --
>
> Key: HDFS-3150
> URL: https://issues.apache.org/jira/browse/HDFS-3150
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: data-node, hdfs client
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3150-b1.txt
>
>
> Per the document attached to HADOOP-8198, this is just for branch-1, and 
> unbreaks DN multihoming. The datanode can be configured to listen on a bond, 
> or all interfaces by specifying the wildcard in the dfs.datanode.*.address 
> configuration options; however, per HADOOP-6867 only the source address of the 
> registration is exposed to clients. HADOOP-985 made clients access datanodes 
> by IP primarily to avoid the latency of a DNS lookup; this had the side 
> effect of breaking DN multihoming. In order to fix it, let's add back the 
> option for Datanodes to be accessed by hostname. This can be done by:
> # Modifying the primary field of the Datanode descriptor to be the hostname, 
> or 
> # Modifying Client/Datanode <-> Datanode access use the hostname field 
> instead of the IP
> I'd like to go with approach #2 as it does not require making an incompatible 
> change to the client protocol, and is much less invasive. It minimizes the 
> scope of modification to just places where clients and Datanodes connect, vs 
> changing all uses of Datanode identifiers.
> New client and Datanode configuration options are introduced:
> - {{dfs.client.use.datanode.hostname}} indicates all client to datanode 
> connections should use the datanode hostname (as clients outside cluster may 
> not be able to route the IP)
> - {{dfs.datanode.use.datanode.hostname}} indicates whether Datanodes should 
> use hostnames when connecting to other Datanodes for data transfer
> If the configuration options are not used, there is no change in the current 
> behavior.
> I'm doing something similar to #1 btw in trunk in HDFS-3144 - refactoring the 
> use of DatanodeID to use the right field (IP, IP:xferPort, hostname, etc) 
> based on the context the ID is being used in, vs always using the IP:xferPort 
> as the Datanode's name, and using the name everywhere.
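
As an illustration, enabling both options would look roughly like this in 
hdfs-site.xml (a sketch; only the property names come from the description 
above):
{code}
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>
<property>
  <name>dfs.datanode.use.datanode.hostname</name>
  <value>true</value>
</property>
{code}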

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3171) The DatanodeID "name" field is overloaded

2012-03-31 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243610#comment-13243610
 ] 

Hadoop QA commented on HDFS-3171:
-

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520805/hdfs-3171.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 33 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2144//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2144//console

This message is automatically generated.

> The DatanodeID "name" field is overloaded 
> --
>
> Key: HDFS-3171
> URL: https://issues.apache.org/jira/browse/HDFS-3171
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3171.txt
>
>
> The DatanodeID "name" field is currently overloaded: when the DN creates a 
> DatanodeID to register with the NN it sets "name" to be the datanode 
> hostname, which is the DN's "hostName" member. This is not necessarily a FQDN; 
> it is either set explicitly or determined by the DNS class, which could 
> return the machine's hostname or the result of a DNS lookup, if configured to 
> do so. The NN then clobbers the "name" field of the DatanodeID with the IP 
> part of the new DatanodeID "name" field it creates (and sets the DatanodeID 
> "hostName" field to the reported "name"). The DN gets the DatanodeID back 
> from the NN and clobbers its "hostName" member with the "name" field of the 
> returned DatanodeID. This makes the code hard to reason about, e.g. 
> DN#getMachineName sometimes returns a hostname and sometimes not, depending 
> on when it's called in sequence with the registration. Ditto for uses of the 
> "name" field. I think these contortions were originally performed because the 
> DatanodeID didn't have a hostName field (it was part of DatanodeInfo) and so 
> there was no way to communicate both at the same time. Now that the hostName 
> field is in DatanodeID (as of HDFS-3164) we can establish the invariant that 
> the "name" field always and only has an IP address and the "hostName" field 
> always and only has a hostname.
> In HDFS-3144 I'm going to rename the "name" field so it's clear that it 
> contains an IP address. The above is enough scope for one change.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-798) Exclude second Ant JAR from classpath in hdfs builds

2012-03-31 Thread Konstantin Shvachko (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243599#comment-13243599
 ] 

Konstantin Shvachko commented on HDFS-798:
--

I agree with Steve; building in Eclipse behaves strangely for me. Compiling is 
fine, but if I try run-commit-test it never runs tests, while without the patch 
it does.

> Exclude second Ant JAR from classpath in hdfs builds
> 
>
> Key: HDFS-798
> URL: https://issues.apache.org/jira/browse/HDFS-798
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Steve Loughran
>Assignee: Konstantin Boudnik
>Priority: Minor
> Attachments: HDFS-798.patch
>
>
> I've no evidence that this is a problem, but I have known it to be in 
> different projects:
> {code}
> [junit] WARNING: multiple versions of ant detected in path for junit 
> [junit]  
> jar:file:/Users/slo/Java/Apache/ant/lib/ant.jar!/org/apache/tools/ant/Project.class
> [junit]  and 
> jar:file:/Users/slo/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
> {code}
> Somehow Ivy needs to be set up to skip pulling in an old version of Ant in 
> the build -both paranamer-ant and jsp-2.1 declare a dependency on it. If both 
> tools are only ever run under Ant, the ivy.xml file could exclude it, the 
> build file just has to make sure that Ant's own classpath gets passed down.
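
For illustration, a global exclude in ivy.xml would look roughly like this (a 
sketch; the org/module coordinates are taken from the warning above and would 
need checking):
{code}
<dependencies>
  <!-- existing dependency declarations ... -->
  <!-- drop the legacy Ant jar pulled in transitively by
       paranamer-ant and jsp-2.1 -->
  <exclude org="ant" module="ant"/>
</dependencies>
{code}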

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3148) The client should be able to use multiple local interfaces for data transfer

2012-03-31 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243592#comment-13243592
 ] 

Todd Lipcon commented on HDFS-3148:
---

- I think it makes more sense to make {{getLocalInterfaceAddrs}} static, and 
take {{localInterfaces}} as a parameter. 


{code}
+  public static final String  DFS_CLIENT_LOCAL_INTERFACES = 
"dfs.client.local.interfaces";
{code}
Move this higher in the file, near the other DFS_CLIENT configs


{code}
+final int idx = r.nextInt(localInterfaceAddrs.length);
+final SocketAddress addr = localInterfaceAddrs[idx];
+if (LOG.isDebugEnabled()) {
+  LOG.debug("Using local interface " + localInterfaces[idx] + " " + addr);
{code}
This doesn't seem right, since {{localInterfaces}} and {{localInterfaceAddrs}} 
may have different lengths -- a given configured local interface could have 
multiple addrs  in the {{localInterfaceAddrs}} list.

This brings up another question: if a NIC has multiple IPs, should it be 
weighted in the load balancing based on the number of IPs assigned? That 
doesn't seem right.

Maybe the right solution to both of these issues is to actually require that 
the list of addresses decided upon has at most one IP corresponding to each 
device?

Another possibility is that you could change the member variable to a 
MultiMap -- first randomly choose a key from the map, 
and then randomly choose among that key's values. My hunch is this would give 
the right behavior most of the time.
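
Something like the following, perhaps (a sketch only, assuming Guava's 
Multimap; the method and variable names are mine, not from the patch):
{code}
// Hypothetical sketch: one Multimap key per configured interface, so a
// NIC with several IPs is not over-weighted by the random selection.
// Assumes com.google.common.collect.Multimap, java.net.SocketAddress,
// java.util.{ArrayList,List,Random}.
private static SocketAddress chooseLocalAddr(
    Multimap<String, SocketAddress> addrsByIface, Random r) {
  List<String> ifaces = new ArrayList<String>(addrsByIface.keySet());
  String iface = ifaces.get(r.nextInt(ifaces.size()));
  List<SocketAddress> addrs =
      new ArrayList<SocketAddress>(addrsByIface.get(iface));
  return addrs.get(r.nextInt(addrs.size()));
}
{code}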


{code}
+  A comma separate list of network interface names to use
+for data transfer between the client and datanodes. When creating
{code}
typo: comma separate*d* list


> The client should be able to use multiple local interfaces for data transfer
> 
>
> Key: HDFS-3148
> URL: https://issues.apache.org/jira/browse/HDFS-3148
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs client
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3148-b1.txt, hdfs-3148.txt
>
>
> HDFS-3147 covers using multiple interfaces on the server (Datanode) side. 
> Clients should also be able to utilize multiple *local* interfaces for 
> outbound connections instead of always using the interface for the local 
> hostname. This can be accomplished with a new configuration parameter 
> ({{dfs.client.local.interfaces}}) that accepts a list of interfaces the 
> client should use. Acceptable configuration values are the same as the 
> {{dfs.datanode.available.interfaces}} parameter. The client binds its socket 
> to a specific interface, which enables outbound traffic to use that 
> interface. Binding the client socket to a specific address is not sufficient 
> to ensure egress traffic uses that interface. Eg if multiple interfaces are 
> on the same subnet the host requires IP rules that use the source address 
> (which bind sets) to select the destination interface. The SO_BINDTODEVICE 
> socket option could be used to select a specific interface for the connection 
> instead, however it requires JNI (is not in Java's SocketOptions) and root 
> access, which we don't want to require clients have.
> Like HDFS-3147, the client can use multiple local interfaces for data 
> transfer. Since clients already cache their connections to DNs, choosing a 
> local interface at random seems like a good policy. Users can also pin a 
> specific client to a specific interface by specifying just that interface in 
> dfs.client.local.interfaces.
> This change was discussed in HADOOP-6210 a while back, and is actually 
> useful/independent of the other HDFS-3140 changes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HDFS-2609) DataNode.getDNRegistrationByMachineName can probably be removed or simplified

2012-03-31 Thread Todd Lipcon (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-2609.
---

Resolution: Later

Re-resolving with "Later" instead of "Fixed" since it's not fixed yet.

> DataNode.getDNRegistrationByMachineName can probably be removed or simplified
> -
>
> Key: HDFS-2609
> URL: https://issues.apache.org/jira/browse/HDFS-2609
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>Assignee: Eli Collins
>
> I noticed this while working on HDFS-1971: The 
> {{getDNRegistrationByMachineName}} iterates over block pools to return a 
> given block pool's registration object based on its {{machineName}} field. 
> But, the machine name for every BPOfferService is identical - they're always 
> constructed by just calling {{DataNode.getName}}. All of the call sites for 
> this function are from tests, as well. So, maybe it's not necessary, or at 
> least it could be simplified or moved to a test method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Reopened] (HDFS-2609) DataNode.getDNRegistrationByMachineName can probably be removed or simplified

2012-03-31 Thread Todd Lipcon (Reopened) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon reopened HDFS-2609:
---


> DataNode.getDNRegistrationByMachineName can probably be removed or simplified
> -
>
> Key: HDFS-2609
> URL: https://issues.apache.org/jira/browse/HDFS-2609
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>Assignee: Eli Collins
>
> I noticed this while working on HDFS-1971: The 
> {{getDNRegistrationByMachineName}} iterates over block pools to return a 
> given block pool's registration object based on its {{machineName}} field. 
> But, the machine name for every BPOfferService is identical - they're always 
> constructed by just calling {{DataNode.getName}}. All of the call sites for 
> this function are from tests, as well. So, maybe it's not necessary, or at 
> least it could be simplified or moved to a test method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3171) The DatanodeID "name" field is overloaded

2012-03-31 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3171:
--

Attachment: hdfs-3171.txt

> The DatanodeID "name" field is overloaded 
> --
>
> Key: HDFS-3171
> URL: https://issues.apache.org/jira/browse/HDFS-3171
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3171.txt
>
>
> The DatanodeID "name" field is currently overloaded: when the DN creates a 
> DatanodeID to register with the NN it sets "name" to be the datanode 
> hostname, which is the DN's "hostName" member. This is not necessarily a FQDN; 
> it is either set explicitly or determined by the DNS class, which could 
> return the machine's hostname or the result of a DNS lookup, if configured to 
> do so. The NN then clobbers the "name" field of the DatanodeID with the IP 
> part of the new DatanodeID "name" field it creates (and sets the DatanodeID 
> "hostName" field to the reported "name"). The DN gets the DatanodeID back 
> from the NN and clobbers its "hostName" member with the "name" field of the 
> returned DatanodeID. This makes the code hard to reason about, e.g. 
> DN#getMachineName sometimes returns a hostname and sometimes not, depending 
> on when it's called in sequence with the registration. Ditto for uses of the 
> "name" field. I think these contortions were originally performed because the 
> DatanodeID didn't have a hostName field (it was part of DatanodeInfo) and so 
> there was no way to communicate both at the same time. Now that the hostName 
> field is in DatanodeID (as of HDFS-3164) we can establish the invariant that 
> the "name" field always and only has an IP address and the "hostName" field 
> always and only has a hostname.
> In HDFS-3144 I'm going to rename the "name" field so it's clear that it 
> contains an IP address. The above is enough scope for one change.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3110) libhdfs implementation of direct read API

2012-03-31 Thread Henry Robinson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243300#comment-13243300
 ] 

Henry Robinson commented on HDFS-3110:
--

FYI, I've added tests, but when I run with the libhdfs test script I get 
instances of ChecksumFileSystem back, which are no good for this case because 
they don't support the read(ByteBuffer) interface. So I've added a class to 
HDFS-3167 that allows us to spin up a simple cluster for correctness testing 
very easily, and if that goes in I'll be able to update the test script 
accordingly.

> libhdfs implementation of direct read API
> -
>
> Key: HDFS-3110
> URL: https://issues.apache.org/jira/browse/HDFS-3110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs
>Reporter: Henry Robinson
>Assignee: Henry Robinson
> Fix For: 0.24.0
>
> Attachments: HDFS-3110.0.patch
>
>
> Once HDFS-2834 gets committed, we can add support for the new API to libhdfs, 
> which leads to significant performance increases when reading local data from 
> C.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3171) The DatanodeID "name" field is overloaded

2012-03-31 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3171:
--

Attachment: (was: hdfs-3171.txt)

> The DatanodeID "name" field is overloaded 
> --
>
> Key: HDFS-3171
> URL: https://issues.apache.org/jira/browse/HDFS-3171
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
>
> The DatanodeID "name" field is currently overloaded: when the DN creates a 
> DatanodeID to register with the NN it sets "name" to be the datanode 
> hostname, which is the DN's "hostName" member. This is not necessarily a FQDN; 
> it is either set explicitly or determined by the DNS class, which could 
> return the machine's hostname or the result of a DNS lookup, if configured to 
> do so. The NN then clobbers the "name" field of the DatanodeID with the IP 
> part of the new DatanodeID "name" field it creates (and sets the DatanodeID 
> "hostName" field to the reported "name"). The DN gets the DatanodeID back 
> from the NN and clobbers its "hostName" member with the "name" field of the 
> returned DatanodeID. This makes the code hard to reason about, e.g. 
> DN#getMachineName sometimes returns a hostname and sometimes not, depending 
> on when it's called in sequence with the registration. Ditto for uses of the 
> "name" field. I think these contortions were originally performed because the 
> DatanodeID didn't have a hostName field (it was part of DatanodeInfo) and so 
> there was no way to communicate both at the same time. Now that the hostName 
> field is in DatanodeID (as of HDFS-3164) we can establish the invariant that 
> the "name" field always and only has an IP address and the "hostName" field 
> always and only has a hostname.
> In HDFS-3144 I'm going to rename the "name" field so it's clear that it 
> contains an IP address. The above is enough scope for one change.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3171) The DatanodeID "name" field is overloaded

2012-03-31 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3171:
--

Attachment: hdfs-3171.txt

Patch attached.

> The DatanodeID "name" field is overloaded 
> --
>
> Key: HDFS-3171
> URL: https://issues.apache.org/jira/browse/HDFS-3171
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3171.txt
>
>
> The DatanodeID "name" field is currently overloaded: when the DN creates a 
> DatanodeID to register with the NN it sets "name" to be the datanode 
> hostname, which is the DN's "hostName" member. This is not necessarily a FQDN; 
> it is either set explicitly or determined by the DNS class, which could 
> return the machine's hostname or the result of a DNS lookup, if configured to 
> do so. The NN then clobbers the "name" field of the DatanodeID with the IP 
> part of the new DatanodeID "name" field it creates (and sets the DatanodeID 
> "hostName" field to the reported "name"). The DN gets the DatanodeID back 
> from the NN and clobbers its "hostName" member with the "name" field of the 
> returned DatanodeID. This makes the code hard to reason about, e.g. 
> DN#getMachineName sometimes returns a hostname and sometimes not, depending 
> on when it's called in sequence with the registration. Ditto for uses of the 
> "name" field. I think these contortions were originally performed because the 
> DatanodeID didn't have a hostName field (it was part of DatanodeInfo) and so 
> there was no way to communicate both at the same time. Now that the hostName 
> field is in DatanodeID (as of HDFS-3164) we can establish the invariant that 
> the "name" field always and only has an IP address and the "hostName" field 
> always and only has a hostname.
> In HDFS-3144 I'm going to rename the "name" field so it's clear that it 
> contains an IP address. The above is enough scope for one change.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3171) The DatanodeID "name" field is overloaded

2012-03-31 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3171:
--

Status: Patch Available  (was: Open)

> The DatanodeID "name" field is overloaded 
> --
>
> Key: HDFS-3171
> URL: https://issues.apache.org/jira/browse/HDFS-3171
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3171.txt
>
>
> The DatanodeID "name" field is currently overloaded: when the DN creates a 
> DatanodeID to register with the NN it sets "name" to be the datanode 
> hostname, which is the DN's "hostName" member. This is not necessarily a FQDN; 
> it is either set explicitly or determined by the DNS class, which could 
> return the machine's hostname or the result of a DNS lookup, if configured to 
> do so. The NN then clobbers the "name" field of the DatanodeID with the IP 
> part of the new DatanodeID "name" field it creates (and sets the DatanodeID 
> "hostName" field to the reported "name"). The DN gets the DatanodeID back 
> from the NN and clobbers its "hostName" member with the "name" field of the 
> returned DatanodeID. This makes the code hard to reason about, e.g. 
> DN#getMachineName sometimes returns a hostname and sometimes not, depending 
> on when it's called in sequence with the registration. Ditto for uses of the 
> "name" field. I think these contortions were originally performed because the 
> DatanodeID didn't have a hostName field (it was part of DatanodeInfo) and so 
> there was no way to communicate both at the same time. Now that the hostName 
> field is in DatanodeID (as of HDFS-3164) we can establish the invariant that 
> the "name" field always and only has an IP address and the "hostName" field 
> always and only has a hostname.
> In HDFS-3144 I'm going to rename the "name" field so it's clear that it 
> contains an IP address. The above is enough scope for one change.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3172) dfs.upgrade.permission is dead code

2012-03-31 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243288#comment-13243288
 ] 

Hudson commented on HDFS-3172:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1972 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1972/])
HDFS-3172. dfs.upgrade.permission is dead code. Contributed by Eli Collins 
(Revision 1307888)

 Result = ABORTED
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307888
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/DeprecatedProperties.apt.vm
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> dfs.upgrade.permission is dead code
> ---
>
> Key: HDFS-3172
> URL: https://issues.apache.org/jira/browse/HDFS-3172
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: hdfs-3172.txt
>
>
> As of HDFS-3137 dfs.upgrade.permission is dead code (was only used for 
> upgrading from old, no longer supported releases).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3164) Move DatanodeInfo#hostName to DatanodeID

2012-03-31 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243287#comment-13243287
 ] 

Hudson commented on HDFS-3164:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1972 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1972/])
HDFS-3164. Move DatanodeInfo#hostName to DatanodeID. Contributed by Eli 
Collins (Revision 1307890)

 Result = ABORTED
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307890
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeID.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsProtoUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMultipleRegistrations.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestInterDatanodeProtocol.java


> Move DatanodeInfo#hostName to DatanodeID
> 
>
> Key: HDFS-3164
> URL: https://issues.apache.org/jira/browse/HDFS-3164
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 2.0.0
>
> Attachments: hdfs-3164.txt, hdfs-3164.txt
>
>
> Like HDFS-3138 (the ipcPort) the hostName field in DatanodeInfo is not 
> ephemeral and should be in DatanodeID. This also allows us to fix up the issue 
> where the DatanodeID#name field is overloaded (the DN sets it to a hostname, 
> then the NN clobbers it with an IP, and then the DN clobbers its hostname 
> field with this IP). If the DN can specify both a "name" and "hostname" in 
> the DatanodeID then this code becomes simpler. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3162) BlockMap's corruptNodes count and CorruptReplicas map count is not matching.

2012-03-31 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243286#comment-13243286
 ] 

Uma Maheswara Rao G commented on HDFS-3162:
---

I don't think this problem is because of append usage.

Looks like this is a race between markBlockAsCorrupt and 
processOverReplicatedBlocks.

1) The NN detects an over-replicated block and adds it to the invalidates 
list for DNn.
2) Before the invalidates list is processed, the BlockScanner finds the block 
corrupted on DNn and reports it to the NN.
3) Before the lock is acquired, the invalidates list gets processed and the 
block is removed from blocksMap for DNn.
4) Now markBlockAsCorrupt starts processing.

{code}
// Add this replica to corruptReplicas Map 
corruptReplicas.addToCorruptReplicasMap(storedBlockInfo, node);
if (countNodes(storedBlockInfo).liveReplicas()>inode.getReplication()) {
  // the block is over-replicated so invalidate the replicas immediately
  invalidateBlock(storedBlockInfo, node);
} else {
  // add the block to neededReplication 
  updateNeededReplications(storedBlockInfo, -1, 0);
}
{code}

Since it found enough replicas, it calls invalidateBlock, which will try to 
remove the storedBlock when live replicas are more than one. This call will 
just return, because the block was already removed from blocksMap.

But the block was already added to the corruptReplicas map (shown in the above 
piece of code).

So now the counts in the corruptReplicas map and blocksMap disagree about the 
corrupt replicas.

Most likely this issue exists only on branch-1.

I think this problem is already addressed in trunk.

Code from trunk:
{code}
// Add replica to the data-node if it is not already there
node.addBlock(storedBlock);

// Add this replica to corruptReplicas Map
corruptReplicas.addToCorruptReplicasMap(storedBlock, node, reason);
if (countNodes(storedBlock).liveReplicas() >= inode.getReplication()) {
  // the block is over-replicated so invalidate the replicas immediately
  invalidateBlock(storedBlock, node);
}
{code}

See the first line above: if the block is not already there, it is added. I 
think this should have solved the problem in trunk.


> BlockMap's corruptNodes count and CorruptReplicas map count is not matching.
> 
>
> Key: HDFS-3162
> URL: https://issues.apache.org/jira/browse/HDFS-3162
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.0.0
>Reporter: suja s
>Assignee: Uma Maheswara Rao G
>Priority: Minor
> Fix For: 1.0.3
>
>
> Even after invalidating the block, the below log keeps coming:
>  
> Inconsistent number of corrupt replicas for blk_1332906029734_1719blockMap 
> has 0 but corrupt replicas map has 1

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3162) BlockMap's corruptNodes count and CorruptReplicas map count is not matching.

2012-03-31 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243281#comment-13243281
 ] 

Uma Maheswara Rao G commented on HDFS-3162:
---

I don't think this problem is because of append usage.

Looks like this is a race between markBlockAsCorrupt and 
processOverReplicatedBlocks.

1) The NN detects an over-replicated block and adds it to the invalidates 
list for DNn.
2) Before the invalidates list is processed, the BlockScanner finds the block 
corrupted on DNn and reports it to the NN.
3) Before the lock is acquired, the invalidates list gets processed and the 
block is removed from blocksMap for DNn.
4) Now markBlockAsCorrupt starts processing.
   {code}
 // Add this replica to corruptReplicas Map 
  corruptReplicas.addToCorruptReplicasMap(storedBlockInfo, node);
  if (countNodes(storedBlockInfo).liveReplicas()>inode.getReplication()) {
// the block is over-replicated so invalidate the replicas immediately
invalidateBlock(storedBlockInfo, node);
  } else {
// add the block to neededReplication 
updateNeededReplications(storedBlockInfo, -1, 0);
  }
{code}
Since it found enough replicas, it calls invalidateBlock, which will try to 
remove the storedBlock when live replicas are more than one. This call will 
just return, because the block was already removed from blocksMap.

But the block was already added to the corruptReplicas map (shown in the above 
piece of code).

So now the counts in the corruptReplicas map and blocksMap disagree about the 
corrupt replicas.

Most likely this exists only on branch-1.

I think this problem is already addressed in trunk.

Code from trunk:
{code}
// Add replica to the data-node if it is not already there
node.addBlock(storedBlock);

// Add this replica to corruptReplicas Map
corruptReplicas.addToCorruptReplicasMap(storedBlock, node, reason);
if (countNodes(storedBlock).liveReplicas() >= inode.getReplication()) {
  // the block is over-replicated so invalidate the replicas immediately
  invalidateBlock(storedBlock, node);
} 
{code}

See the first line above: if the block is not already there, it is added. I 
think this should have solved the problem in trunk.

> BlockMap's corruptNodes count and CorruptReplicas map count is not matching.
> 
>
> Key: HDFS-3162
> URL: https://issues.apache.org/jira/browse/HDFS-3162
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.0.0
>Reporter: suja s
>Assignee: Uma Maheswara Rao G
>Priority: Minor
> Fix For: 1.0.3
>
>
> Even after invalidating the block, the below log keeps coming:
>  
> Inconsistent number of corrupt replicas for blk_1332906029734_1719blockMap 
> has 0 but corrupt replicas map has 1

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3000) Add a public API for setting quotas

2012-03-31 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243279#comment-13243279
 ] 

Eli Collins commented on HDFS-3000:
---

+1 looks good

> Add a public API for setting quotas
> ---
>
> Key: HDFS-3000
> URL: https://issues.apache.org/jira/browse/HDFS-3000
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 2.0.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-3000.patch, HDFS-3000.patch, HDFS-3000.patch, 
> HDFS-3000.patch
>
>
> Currently one can set the quota of a file or directory from the command line, 
> but if a user wants to set it programmatically, they need to use 
> DistributedFileSystem, which is annotated InterfaceAudience.Private.
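
For reference, the only programmatic route today goes through the private API, 
roughly as follows (the path and quota values are just examples):
{code}
// Requires casting to the InterfaceAudience.Private class:
DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
dfs.setQuota(new Path("/user/foo"), 100000L, 10L * 1024 * 1024 * 1024);
{code}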

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3164) Move DatanodeInfo#hostName to DatanodeID

2012-03-31 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243272#comment-13243272
 ] 

Hudson commented on HDFS-3164:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2034 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2034/])
HDFS-3164. Move DatanodeInfo#hostName to DatanodeID. Contributed by Eli 
Collins (Revision 1307890)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307890
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeID.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsProtoUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMultipleRegistrations.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestInterDatanodeProtocol.java


> Move DatanodeInfo#hostName to DatanodeID
> 
>
> Key: HDFS-3164
> URL: https://issues.apache.org/jira/browse/HDFS-3164
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 2.0.0
>
> Attachments: hdfs-3164.txt, hdfs-3164.txt
>
>
> Like HDFS-3138 (the ipcPort) the hostName field in DatanodeInfo is not 
> ephemeral and should be in DatanodeID. This also allows us to fix up the issue 
> where the DatanodeID#name field is overloaded (the DN sets it to a hostname, 
> then the NN clobbers it with an IP, and then the DN clobbers its hostname 
> field with this IP). If the DN can specify both a "name" and "hostname" in 
> the DatanodeID then this code becomes simpler. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3172) dfs.upgrade.permission is dead code

2012-03-31 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243273#comment-13243273
 ] 

Hudson commented on HDFS-3172:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2034 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2034/])
HDFS-3172. dfs.upgrade.permission is dead code. Contributed by Eli Collins 
(Revision 1307888)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307888
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/DeprecatedProperties.apt.vm
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> dfs.upgrade.permission is dead code
> ---
>
> Key: HDFS-3172
> URL: https://issues.apache.org/jira/browse/HDFS-3172
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: hdfs-3172.txt
>
>
> As of HDFS-3137 dfs.upgrade.permission is dead code (was only used for 
> upgrading from old, no longer supported releases).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3164) Move DatanodeInfo#hostName to DatanodeID

2012-03-31 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243274#comment-13243274
 ] 

Hudson commented on HDFS-3164:
--

Integrated in Hadoop-Common-trunk-Commit #1959 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1959/])
HDFS-3164. Move DatanodeInfo#hostName to DatanodeID. Contributed by Eli 
Collins (Revision 1307890)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307890
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeID.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsProtoUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMultipleRegistrations.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestInterDatanodeProtocol.java


> Move DatanodeInfo#hostName to DatanodeID
> 
>
> Key: HDFS-3164
> URL: https://issues.apache.org/jira/browse/HDFS-3164
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 2.0.0
>
> Attachments: hdfs-3164.txt, hdfs-3164.txt
>
>
> Like HDFS-3138 (the ipcPort) the hostName field in DatanodeInfo is not 
> ephemeral and should be in DatanodeID. This also allows us to fix up the issue 
> where the DatanodeID#name field is overloaded (the DN sets it to a hostname, 
> then the NN clobbers it with an IP, and then the DN clobbers its hostname 
> field with this IP). If the DN can specify both a "name" and "hostname" in 
> the DatanodeID then this code becomes simpler. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3000) Add a public API for setting quotas

2012-03-31 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243276#comment-13243276
 ] 

Hadoop QA commented on HDFS-3000:
-

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520799/HDFS-3000.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2142//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2142//console

This message is automatically generated.

> Add a public API for setting quotas
> ---
>
> Key: HDFS-3000
> URL: https://issues.apache.org/jira/browse/HDFS-3000
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 2.0.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-3000.patch, HDFS-3000.patch, HDFS-3000.patch, 
> HDFS-3000.patch
>
>
> Currently one can set the quota of a file or directory from the command line, 
> but if a user wants to set it programmatically, they need to use 
> DistributedFileSystem, which is annotated InterfaceAudience.Private.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3172) dfs.upgrade.permission is dead code

2012-03-31 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243275#comment-13243275
 ] 

Hudson commented on HDFS-3172:
--

Integrated in Hadoop-Common-trunk-Commit #1959 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1959/])
HDFS-3172. dfs.upgrade.permission is dead code. Contributed by Eli Collins 
(Revision 1307888)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307888
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/DeprecatedProperties.apt.vm
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HdfsConfiguration.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> dfs.upgrade.permission is dead code
> ---
>
> Key: HDFS-3172
> URL: https://issues.apache.org/jira/browse/HDFS-3172
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: hdfs-3172.txt
>
>
> As of HDFS-3137 dfs.upgrade.permission is dead code (was only used for 
> upgrading from old, no longer supported releases).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3164) Move DatanodeInfo#hostName to DatanodeID

2012-03-31 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3164:
--

  Resolution: Fixed
   Fix Version/s: 2.0.0
Target Version/s:   (was: 2.0.0)
  Status: Resolved  (was: Patch Available)

Thanks ATM, I've committed this.

> Move DatanodeInfo#hostName to DatanodeID
> 
>
> Key: HDFS-3164
> URL: https://issues.apache.org/jira/browse/HDFS-3164
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 2.0.0
>
> Attachments: hdfs-3164.txt, hdfs-3164.txt
>
>
> Like HDFS-3138 (the ipcPort) the hostName field in DatanodeInfo is not 
> ephemeral and should be in DatanodeID. This also allows us to fix up the issue 
> where the DatanodeID#name field is overloaded (the DN sets it to a hostname, 
> then the NN clobbers it with an IP, and then the DN clobbers its hostname 
> field with this IP). If the DN can specify both a "name" and "hostname" in 
> the DatanodeID then this code becomes simpler. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3070) HDFS balancer doesn't ensure that hdfs-site.xml is loaded

2012-03-31 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243271#comment-13243271
 ] 

Uma Maheswara Rao G commented on HDFS-3070:
---

Thanks Aaron,
Recalling the API name (getNameServiceUris), I had a doubt while I was away and 
couldn't get a chance to look into the code. Never mind this silly 
question. :-)

I have since seen that the getNameServiceUris implementation adds URIs for the 
straight conf keys as well; it loads all variants of the keys. No issues.

> HDFS balancer doesn't ensure that hdfs-site.xml is loaded
> -
>
> Key: HDFS-3070
> URL: https://issues.apache.org/jira/browse/HDFS-3070
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 2.0.0
>Reporter: Stephen Chu
>Assignee: Aaron T. Myers
> Fix For: 2.0.0
>
> Attachments: HDFS-3070.patch, unbalanced_nodes.png, 
> unbalanced_nodes_inservice.png
>
>
> I TeraGenerated data into DataNodes styx01 and styx02. Looking at the web UI, 
> both have over 3% disk usage.
> Attached is a screenshot of the Live Nodes web UI.
> On styx01, I run the _hdfs balancer_ command with threshold 1% and don't see 
> the blocks being balanced across all 4 datanodes (all blocks on styx01 and 
> styx02 stay put).
> HA is currently enabled.
> [schu@styx01 ~]$ hdfs haadmin -getServiceState nn1
> active
> [schu@styx01 ~]$ hdfs balancer -threshold 1
> 12/03/08 10:10:32 INFO balancer.Balancer: Using a threshold of 1.0
> 12/03/08 10:10:32 INFO balancer.Balancer: namenodes = []
> 12/03/08 10:10:32 INFO balancer.Balancer: p = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=1.0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> Balancing took 95.0 milliseconds
> [schu@styx01 ~]$ 
> I believe with a threshold of 1% the balancer should trigger blocks being 
> moved across DataNodes, right? I am curious about the "namenodes = []" from 
> the above output.
> [schu@styx01 ~]$ hadoop version
> Hadoop 0.24.0-SNAPSHOT
> Subversion 
> git://styx01.sf.cloudera.com/home/schu/hadoop-common/hadoop-common-project/hadoop-common
>  -r f6a577d697bbcd04ffbc568167c97b79479ff319
> Compiled by schu on Thu Mar  8 15:32:50 PST 2012
> From source with checksum ec971a6e7316f7fbf471b617905856b8
> From 
> http://hadoop.apache.org/hdfs/docs/r0.21.0/api/org/apache/hadoop/hdfs/server/balancer/Balancer.html:
> The threshold parameter is a fraction in the range of (0%, 100%) with a 
> default value of 10%. The threshold sets a target for whether the cluster is 
> balanced. A cluster is balanced if for each datanode, the utilization of the 
> node (ratio of used space at the node to total capacity of the node) differs 
> from the utilization of the cluster (ratio of used space in the cluster to total 
> capacity of the cluster) by no more than the threshold value. The smaller the 
> threshold, the more balanced a cluster will become. It takes more time to run 
> the balancer for small threshold values. Also for a very small threshold the 
> cluster may not be able to reach the balanced state when applications write 
> and delete files concurrently.
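
To make the quoted criterion concrete, here is a small illustrative sketch (plain Java, not actual Balancer code) of the per-node check it describes:

{code}
// Illustrative only: a cluster is balanced when every node's utilization is
// within `thresholdPct` percentage points of the cluster-wide utilization.
static boolean isBalanced(double[] used, double[] capacity, double thresholdPct) {
  double totalUsed = 0, totalCapacity = 0;
  for (int i = 0; i < used.length; i++) {
    totalUsed += used[i];
    totalCapacity += capacity[i];
  }
  double clusterUtil = 100.0 * totalUsed / totalCapacity;
  for (int i = 0; i < used.length; i++) {
    double nodeUtil = 100.0 * used[i] / capacity[i];
    // A node outside [clusterUtil - t, clusterUtil + t] means the
    // balancer still has blocks to move.
    if (Math.abs(nodeUtil - clusterUtil) > thresholdPct) {
      return false;
    }
  }
  return true;
}
{code}

For the reporter's setup, assuming roughly equal node capacities, two nodes near 3% and two near 0% put the cluster average around 1.5%, so a 1% threshold should indeed trigger block moves; the empty "namenodes = []" list suggests the balancer never got that far.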

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3172) dfs.upgrade.permission is dead code

2012-03-31 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3172:
--

  Resolution: Fixed
   Fix Version/s: 2.0.0
Target Version/s:   (was: 2.0.0)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

No test because we're removing the option.  Thanks ATM, I've committed this.

> dfs.upgrade.permission is dead code
> ---
>
> Key: HDFS-3172
> URL: https://issues.apache.org/jira/browse/HDFS-3172
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: hdfs-3172.txt
>
>
> As of HDFS-3137 dfs.upgrade.permission is dead code (was only used for 
> upgrading from old, no longer supported releases).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3164) Move DatanodeInfo#hostName to DatanodeID

2012-03-31 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3164:
--

Release Note: This change modifies DatanodeID, which is part of the client-to-server 
protocol; therefore clients must be upgraded along with servers.
Hadoop Flags: Incompatible change,Reviewed

> Move DatanodeInfo#hostName to DatanodeID
> 
>
> Key: HDFS-3164
> URL: https://issues.apache.org/jira/browse/HDFS-3164
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3164.txt, hdfs-3164.txt
>
>
> Like HDFS-3138 (the ipcPort) the hostName field in DatanodeInfo is not 
> ephemeral and should be in DatanodeID. This also allows us to fix up the issue 
> where the DatanodeID#name field is overloaded (the DN sets it to a hostname, 
> then the NN clobbers it with an IP, and then the DN clobbers its hostname 
> field with this IP). If the DN can specify both a "name" and "hostname" in 
> the DatanodeID then this code becomes simpler. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HDFS-2609) DataNode.getDNRegistrationByMachineName can probably be removed or simplified

2012-03-31 Thread Eli Collins (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins resolved HDFS-2609.
---

Resolution: Fixed
  Assignee: Eli Collins

I'm fixing this in HDFS-3171

> DataNode.getDNRegistrationByMachineName can probably be removed or simplified
> -
>
> Key: HDFS-2609
> URL: https://issues.apache.org/jira/browse/HDFS-2609
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>Assignee: Eli Collins
>
> I noticed this while working on HDFS-1971: The 
> {{getDNRegistrationByMachineName}} iterates over block pools to return a 
> given block pool's registration object based on its {{machineName}} field. 
> But, the machine name for every BPOfferService is identical - they're always 
> constructed by just calling {{DataNode.getName}}. All of the call sites for 
> this function are from tests, as well. So, maybe it's not necessary, or at 
> least it might be able to be simplified or moved to a test method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3172) dfs.upgrade.permission is dead code

2012-03-31 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243263#comment-13243263
 ] 

Hadoop QA commented on HDFS-3172:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520795/hdfs-3172.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2141//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2141//console

This message is automatically generated.

> dfs.upgrade.permission is dead code
> ---
>
> Key: HDFS-3172
> URL: https://issues.apache.org/jira/browse/HDFS-3172
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Trivial
> Attachments: hdfs-3172.txt
>
>
> As of HDFS-3137 dfs.upgrade.permission is dead code (was only used for 
> upgrading from old, no longer supported releases).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3000) Add a public API for setting quotas

2012-03-31 Thread Aaron T. Myers (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3000:
-

Attachment: HDFS-3000.patch

> Add a public API for setting quotas
> ---
>
> Key: HDFS-3000
> URL: https://issues.apache.org/jira/browse/HDFS-3000
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 2.0.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-3000.patch, HDFS-3000.patch, HDFS-3000.patch, 
> HDFS-3000.patch
>
>
> Currently one can set the quota of a file or directory from the command line, 
> but if a user wants to set it programmatically, they need to use 
> DistributedFileSystem, which is annotated InterfaceAudience.Private.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3000) Add a public API for setting quotas

2012-03-31 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243258#comment-13243258
 ] 

Aaron T. Myers commented on HDFS-3000:
--

bq. Mind chiming in on the rationale of the setQuota vs setSpaceQuota naming 
(ie the former for namespace)?

Happy to. I agree they're not the clearest names, but I did it this way to 
mirror the methods in ContentSummary, which are called getQuota and 
getSpaceQuota, respectively.

bq. Agree w Nicholas that we should move setQuota from DFS to this admin class, 
let's do that in a follow on change

Makes sense to me. Filed: HDFS-3173

bq. There's an extra "/**" in the setQuota javadoc. Otherwise, +1 looks good

Good catch. Fixed.
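
For completeness, a minimal sketch of how the proposed public API could be used once committed; the HdfsAdmin class name and method signatures here follow the discussion and patch summary, so treat them as assumptions rather than final API:

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsAdmin;

public class QuotaExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The admin API takes the filesystem URI directly, so callers no longer
    // need to cast FileSystem.get() to the private DistributedFileSystem.
    HdfsAdmin admin = new HdfsAdmin(new URI("hdfs://namenode:8020"), conf);
    Path dir = new Path("/user/example");
    admin.setQuota(dir, 1000);           // namespace quota: max names (files + dirs)
    admin.setSpaceQuota(dir, 10L << 30); // space quota: 10 GB of raw disk
  }
}
{code}

As discussed above, setQuota/setSpaceQuota mirror ContentSummary's getQuota/getSpaceQuota naming.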

> Add a public API for setting quotas
> ---
>
> Key: HDFS-3000
> URL: https://issues.apache.org/jira/browse/HDFS-3000
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 2.0.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-3000.patch, HDFS-3000.patch, HDFS-3000.patch
>
>
> Currently one can set the quota of a file or directory from the command line, 
> but if a user wants to set it programmatically, they need to use 
> DistributedFileSystem, which is annotated InterfaceAudience.Private.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3173) Remove setQuota from DistributedFileSystem

2012-03-31 Thread Aaron T. Myers (Created) (JIRA)
Remove setQuota from DistributedFileSystem
--

 Key: HDFS-3173
 URL: https://issues.apache.org/jira/browse/HDFS-3173
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 2.0.0
Reporter: Aaron T. Myers


Once HDFS-3000 is committed, we'll have a public programmatic API for setting 
quotas in HDFS. We should then remove setQuota from DistributedFileSystem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3172) dfs.upgrade.permission is dead code

2012-03-31 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243255#comment-13243255
 ] 

Aaron T. Myers commented on HDFS-3172:
--

Patch looks good to me. +1 pending Jenkins.

> dfs.upgrade.permission is dead code
> ---
>
> Key: HDFS-3172
> URL: https://issues.apache.org/jira/browse/HDFS-3172
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Trivial
> Attachments: hdfs-3172.txt
>
>
> As of HDFS-3137 dfs.upgrade.permission is dead code (was only used for 
> upgrading from old, no longer supported releases).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3070) HDFS balancer doesn't ensure that hdfs-site.xml is loaded

2012-03-31 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243254#comment-13243254
 ] 

Aaron T. Myers commented on HDFS-3070:
--

bq. Other question is, it looks like Balancer completely depending on 
dfs.federation.nameservices right?

Nope. Note that DFSUtil#getNameServiceUris also adds URIs for just the straight 
conf keys, even if they're not suffixed with a nameservice ID.
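
A simplified, assumption-laden sketch of the lookup behavior being described (the real DFSUtil#getNameServiceUris also considers the default filesystem URI and other details):

{code}
import java.net.URI;
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;

class NameServiceUriSketch {
  static Collection<URI> getNameServiceUris(Configuration conf, String... keys) {
    Set<URI> uris = new HashSet<URI>();
    // Per-nameservice (suffixed) keys, e.g. some.key.ns1
    for (String nsId :
        conf.getTrimmedStringCollection("dfs.federation.nameservices")) {
      for (String key : keys) {
        String addr = conf.get(key + "." + nsId);
        if (addr != null) {
          uris.add(URI.create("hdfs://" + addr));
        }
      }
    }
    // The straight, unsuffixed keys, so non-federated setups work too.
    for (String key : keys) {
      String addr = conf.get(key);
      if (addr != null) {
        uris.add(URI.create("hdfs://" + addr));
      }
    }
    return uris;
  }
}
{code}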

> HDFS balancer doesn't ensure that hdfs-site.xml is loaded
> -
>
> Key: HDFS-3070
> URL: https://issues.apache.org/jira/browse/HDFS-3070
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 2.0.0
>Reporter: Stephen Chu
>Assignee: Aaron T. Myers
> Fix For: 2.0.0
>
> Attachments: HDFS-3070.patch, unbalanced_nodes.png, 
> unbalanced_nodes_inservice.png
>
>
> I TeraGenerated data into DataNodes styx01 and styx02. Looking at the web UI, 
> both have over 3% disk usage.
> Attached is a screenshot of the Live Nodes web UI.
> On styx01, I run the _hdfs balancer_ command with threshold 1% and don't see 
> the blocks being balanced across all 4 datanodes (all blocks on styx01 and 
> styx02 stay put).
> HA is currently enabled.
> [schu@styx01 ~]$ hdfs haadmin -getServiceState nn1
> active
> [schu@styx01 ~]$ hdfs balancer -threshold 1
> 12/03/08 10:10:32 INFO balancer.Balancer: Using a threshold of 1.0
> 12/03/08 10:10:32 INFO balancer.Balancer: namenodes = []
> 12/03/08 10:10:32 INFO balancer.Balancer: p = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=1.0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> Balancing took 95.0 milliseconds
> [schu@styx01 ~]$ 
> I believe with a threshold of 1% the balancer should trigger blocks being 
> moved across DataNodes, right? I am curious about the "namenodes = []" from 
> the above output.
> [schu@styx01 ~]$ hadoop version
> Hadoop 0.24.0-SNAPSHOT
> Subversion 
> git://styx01.sf.cloudera.com/home/schu/hadoop-common/hadoop-common-project/hadoop-common
>  -r f6a577d697bbcd04ffbc568167c97b79479ff319
> Compiled by schu on Thu Mar  8 15:32:50 PST 2012
> From source with checksum ec971a6e7316f7fbf471b617905856b8
> From 
> http://hadoop.apache.org/hdfs/docs/r0.21.0/api/org/apache/hadoop/hdfs/server/balancer/Balancer.html:
> The threshold parameter is a fraction in the range of (0%, 100%) with a 
> default value of 10%. The threshold sets a target for whether the cluster is 
> balanced. A cluster is balanced if for each datanode, the utilization of the 
> node (ratio of used space at the node to total capacity of the node) differs 
> from the utilization of the cluster (ratio of used space in the cluster to total 
> capacity of the cluster) by no more than the threshold value. The smaller the 
> threshold, the more balanced a cluster will become. It takes more time to run 
> the balancer for small threshold values. Also for a very small threshold the 
> cluster may not be able to reach the balanced state when applications write 
> and delete files concurrently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3000) Add a public API for setting quotas

2012-03-31 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243251#comment-13243251
 ] 

Eli Collins commented on HDFS-3000:
---

- Mind chiming in on the rationale of the setQuota vs setSpaceQuota naming (ie 
the former for namespace)?
- Agree w Nicholas that we should move setQuota from DFS to this admin class, 
let's do that in a follow on change

There's an extra "/**" in the setQuota javadoc. Otherwise, +1 looks good


> Add a public API for setting quotas
> ---
>
> Key: HDFS-3000
> URL: https://issues.apache.org/jira/browse/HDFS-3000
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 2.0.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-3000.patch, HDFS-3000.patch, HDFS-3000.patch
>
>
> Currently one can set the quota of a file or directory from the command line, 
> but if a user wants to set it programmatically, they need to use 
> DistributedFileSystem, which is annotated InterfaceAudience.Private.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3070) HDFS balancer doesn't ensure that hdfs-site.xml is loaded

2012-03-31 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243249#comment-13243249
 ] 

Uma Maheswara Rao G commented on HDFS-3070:
---

{quote}
True, but if that test class starts a MiniDFSCluster to run the balancer 
against, then the test won't detect any problem with the balancer, since the 
MiniDFSCluster will cause HdfsConfiguration to be class-loaded.
{quote}
Yeah, true. I agree with you. It was my mistake; I totally forgot about the 
MiniDFSCluster conf loading.

Another question: it looks like the Balancer depends entirely on 
dfs.federation.nameservices, right?

> HDFS balancer doesn't ensure that hdfs-site.xml is loaded
> -
>
> Key: HDFS-3070
> URL: https://issues.apache.org/jira/browse/HDFS-3070
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 2.0.0
>Reporter: Stephen Chu
>Assignee: Aaron T. Myers
> Fix For: 2.0.0
>
> Attachments: HDFS-3070.patch, unbalanced_nodes.png, 
> unbalanced_nodes_inservice.png
>
>
> I TeraGenerated data into DataNodes styx01 and styx02. Looking at the web UI, 
> both have over 3% disk usage.
> Attached is a screenshot of the Live Nodes web UI.
> On styx01, I run the _hdfs balancer_ command with threshold 1% and don't see 
> the blocks being balanced across all 4 datanodes (all blocks on styx01 and 
> styx02 stay put).
> HA is currently enabled.
> [schu@styx01 ~]$ hdfs haadmin -getServiceState nn1
> active
> [schu@styx01 ~]$ hdfs balancer -threshold 1
> 12/03/08 10:10:32 INFO balancer.Balancer: Using a threshold of 1.0
> 12/03/08 10:10:32 INFO balancer.Balancer: namenodes = []
> 12/03/08 10:10:32 INFO balancer.Balancer: p = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=1.0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> Balancing took 95.0 milliseconds
> [schu@styx01 ~]$ 
> I believe with a threshold of 1% the balancer should trigger blocks being 
> moved across DataNodes, right? I am curious about the "namenodes = []" from 
> the above output.
> [schu@styx01 ~]$ hadoop version
> Hadoop 0.24.0-SNAPSHOT
> Subversion 
> git://styx01.sf.cloudera.com/home/schu/hadoop-common/hadoop-common-project/hadoop-common
>  -r f6a577d697bbcd04ffbc568167c97b79479ff319
> Compiled by schu on Thu Mar  8 15:32:50 PST 2012
> From source with checksum ec971a6e7316f7fbf471b617905856b8
> From 
> http://hadoop.apache.org/hdfs/docs/r0.21.0/api/org/apache/hadoop/hdfs/server/balancer/Balancer.html:
> The threshold parameter is a fraction in the range of (0%, 100%) with a 
> default value of 10%. The threshold sets a target for whether the cluster is 
> balanced. A cluster is balanced if for each datanode, the utilization of the 
> node (ratio of used space at the node to total capacity of the node) differs 
> from the utilization of the cluster (ratio of used space in the cluster to total 
> capacity of the cluster) by no more than the threshold value. The smaller the 
> threshold, the more balanced a cluster will become. It takes more time to run 
> the balancer for small threshold values. Also for a very small threshold the 
> cluster may not be able to reach the balanced state when applications write 
> and delete files concurrently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3164) Move DatanodeInfo#hostName to DatanodeID

2012-03-31 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243248#comment-13243248
 ] 

Aaron T. Myers commented on HDFS-3164:
--

+1, the update patch (sans DFS_NAMENODE_UPGRADE_PERMISSION_* key changes) looks 
good to me.

> Move DatanodeInfo#hostName to DatanodeID
> 
>
> Key: HDFS-3164
> URL: https://issues.apache.org/jira/browse/HDFS-3164
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3164.txt, hdfs-3164.txt
>
>
> Like HDFS-3138 (the ipcPort) the hostName field in DatanodeInfo is not 
> ephemeral and should be in DatanodeID. This also allows us to fix up the issue 
> where the DatanodeID#name field is overloaded (the DN sets it to a hostname, 
> then the NN clobbers it with an IP, and then the DN clobbers its hostname 
> field with this IP). If the DN can specify both a "name" and "hostname" in 
> the DatanodeID then this code becomes simpler. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3164) Move DatanodeInfo#hostName to DatanodeID

2012-03-31 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243247#comment-13243247
 ] 

Hadoop QA commented on HDFS-3164:
-

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520789/hdfs-3164.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 24 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2140//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2140//console

This message is automatically generated.

> Move DatanodeInfo#hostName to DatanodeID
> 
>
> Key: HDFS-3164
> URL: https://issues.apache.org/jira/browse/HDFS-3164
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3164.txt, hdfs-3164.txt
>
>
> Like HDFS-3138 (the ipcPort) the hostName field in DatanodeInfo is not 
> ephemeral and should be in DatanodeID. This also allows us to fix up the issue 
> where the DatanodeID#name field is overloaded (the DN sets it to a hostname, 
> then the NN clobbers it with an IP, and then the DN clobbers its hostname 
> field with this IP). If the DN can specify both a "name" and "hostname" in 
> the DatanodeID then this code becomes simpler. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3164) Move DatanodeInfo#hostName to DatanodeID

2012-03-31 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3164:
--

Attachment: hdfs-3164.txt

Good point. Given both options have been deprecated for a while (pre 18 
releases), we can remove both. Filed HDFS-3172 with a patch for that, and removed 
it from this diff so the change is better advertised. 

> Move DatanodeInfo#hostName to DatanodeID
> 
>
> Key: HDFS-3164
> URL: https://issues.apache.org/jira/browse/HDFS-3164
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3164.txt, hdfs-3164.txt
>
>
> Like HDFS-3138 (the ipcPort) the hostName field in DatanodeInfo is not 
> ephemeral and should be in DatanodeID. This also allows us to fix up the issue 
> where the DatanodeID#name field is overloaded (the DN sets it to a hostname, 
> then the NN clobbers it with an IP, and then the DN clobbers its hostname 
> field with this IP). If the DN can specify both a "name" and "hostname" in 
> the DatanodeID then this code becomes simpler. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3172) dfs.upgrade.permission is dead code

2012-03-31 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3172:
--

Status: Patch Available  (was: Open)

> dfs.upgrade.permission is dead code
> ---
>
> Key: HDFS-3172
> URL: https://issues.apache.org/jira/browse/HDFS-3172
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Trivial
> Attachments: hdfs-3172.txt
>
>
> As of HDFS-3137 dfs.upgrade.permission is dead code (was only used for 
> upgrading from old, no longer supported releases).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3172) dfs.upgrade.permission is dead code

2012-03-31 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3172:
--

Attachment: hdfs-3172.txt

Patch attached. I'll also remove the reference in DeprecatedProperties.apt.vm 
in common (not in the patch here so test-patch runs).

> dfs.upgrade.permission is dead code
> ---
>
> Key: HDFS-3172
> URL: https://issues.apache.org/jira/browse/HDFS-3172
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Trivial
> Attachments: hdfs-3172.txt
>
>
> As of HDFS-3137 dfs.upgrade.permission is dead code (was only used for 
> upgrading from old, no longer supported releases).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3172) dfs.upgrade.permission is dead code

2012-03-31 Thread Eli Collins (Created) (JIRA)
dfs.upgrade.permission is dead code
---

 Key: HDFS-3172
 URL: https://issues.apache.org/jira/browse/HDFS-3172
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Trivial


As of HDFS-3137 dfs.upgrade.permission is dead code (was only used for 
upgrading from old, no longer supported releases).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3164) Move DatanodeInfo#hostName to DatanodeID

2012-03-31 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243235#comment-13243235
 ] 

Aaron T. Myers commented on HDFS-3164:
--

bq. DFS_NAMENODE_UPGRADE_PERMISSION_KEY is still used by HdfsConfiguration to 
log the deprecation, so still needed, and is the only use.

But now it logs a deprecation of the key "dfs.upgrade.permission" in favor of 
"dfs.namenode.upgrade.permission", which is a configuration key that's not 
read anymore. Do we have any deprecation mechanism which would indicate that 
both of these keys are now completely unused?
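
For reference, the deprecation machinery in question is Configuration's key-deprecation table; a minimal sketch of the wiring (the deprecate() helper here is illustrative of what HdfsConfiguration does):

{code}
import org.apache.hadoop.conf.Configuration;

class DeprecationSketch {
  // Mapping an old key to its replacement makes Configuration warn
  // whenever the old key is used and transparently read the new one.
  private static void deprecate(String oldKey, String newKey) {
    Configuration.addDeprecation(oldKey, new String[] { newKey });
  }

  static {
    // Once neither key is read anywhere, the cleaner fix is to drop the
    // entry entirely rather than deprecate one dead key in favor of another.
    deprecate("dfs.upgrade.permission", "dfs.namenode.upgrade.permission");
  }
}
{code}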

> Move DatanodeInfo#hostName to DatanodeID
> 
>
> Key: HDFS-3164
> URL: https://issues.apache.org/jira/browse/HDFS-3164
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3164.txt
>
>
> Like HDFS-3138 (the ipcPort) the hostName field in DatanodeInfo is not 
> ephemeral and should be in DatanodeID. This also allows us to fix up the issue 
> where the DatanodeID#name field is overloaded (the DN sets it to a hostname, 
> then the NN clobbers it with an IP, and then the DN clobbers its hostname 
> field with this IP). If the DN can specify both a "name" and "hostname" in 
> the DatanodeID then this code becomes simpler. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3164) Move DatanodeInfo#hostName to DatanodeID

2012-03-31 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243231#comment-13243231
 ] 

Eli Collins commented on HDFS-3164:
---

DFS_NAMENODE_UPGRADE_PERMISSION_KEY is still used by HdfsConfiguration to log 
the deprecation, so it's still needed, and that is its only use. I noticed there's 
an unused import of it in FSN; I'll remove that as well. Thanks ATM, I'll wait to 
hear back from Jenkins before committing. I ran the tests locally and they passed, 
so hopefully the gods are in my favor.

> Move DatanodeInfo#hostName to DatanodeID
> 
>
> Key: HDFS-3164
> URL: https://issues.apache.org/jira/browse/HDFS-3164
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3164.txt
>
>
> Like HDFS-3138 (the ipcPort) the hostName field in DatanodeInfo is not 
> ephemeral and should be in DatanodeID. This also allows us to fix up the issue 
> where the DatanodeID#name field is overloaded (the DN sets it to a hostname, 
> then the NN clobbers it with an IP, and then the DN clobbers its hostname 
> field with this IP). If the DN can specify both a "name" and "hostname" in 
> the DatanodeID then this code becomes simpler. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3070) HDFS balancer doesn't ensure that hdfs-site.xml is loaded

2012-03-31 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243226#comment-13243226
 ] 

Hudson commented on HDFS-3070:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1971 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1971/])
HDFS-3070. HDFS balancer doesn't ensure that hdfs-site.xml is loaded. 
Contributed by Aaron T. Myers. (Revision 1307841)

 Result = ABORTED
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307841
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java


> HDFS balancer doesn't ensure that hdfs-site.xml is loaded
> -
>
> Key: HDFS-3070
> URL: https://issues.apache.org/jira/browse/HDFS-3070
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 2.0.0
>Reporter: Stephen Chu
>Assignee: Aaron T. Myers
> Fix For: 2.0.0
>
> Attachments: HDFS-3070.patch, unbalanced_nodes.png, 
> unbalanced_nodes_inservice.png
>
>
> I TeraGenerated data into DataNodes styx01 and styx02. Looking at the web UI, 
> both have over 3% disk usage.
> Attached is a screenshot of the Live Nodes web UI.
> On styx01, I run the _hdfs balancer_ command with threshold 1% and don't see 
> the blocks being balanced across all 4 datanodes (all blocks on styx01 and 
> styx02 stay put).
> HA is currently enabled.
> [schu@styx01 ~]$ hdfs haadmin -getServiceState nn1
> active
> [schu@styx01 ~]$ hdfs balancer -threshold 1
> 12/03/08 10:10:32 INFO balancer.Balancer: Using a threshold of 1.0
> 12/03/08 10:10:32 INFO balancer.Balancer: namenodes = []
> 12/03/08 10:10:32 INFO balancer.Balancer: p = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=1.0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> Balancing took 95.0 milliseconds
> [schu@styx01 ~]$ 
> I believe with a threshold of 1% the balancer should trigger blocks being 
> moved across DataNodes, right? I am curious about the "namenodes = []" from 
> the above output.
> [schu@styx01 ~]$ hadoop version
> Hadoop 0.24.0-SNAPSHOT
> Subversion 
> git://styx01.sf.cloudera.com/home/schu/hadoop-common/hadoop-common-project/hadoop-common
>  -r f6a577d697bbcd04ffbc568167c97b79479ff319
> Compiled by schu on Thu Mar  8 15:32:50 PST 2012
> From source with checksum ec971a6e7316f7fbf471b617905856b8
> From 
> http://hadoop.apache.org/hdfs/docs/r0.21.0/api/org/apache/hadoop/hdfs/server/balancer/Balancer.html:
> The threshold parameter is a fraction in the range of (0%, 100%) with a 
> default value of 10%. The threshold sets a target for whether the cluster is 
> balanced. A cluster is balanced if for each datanode, the utilization of the 
> node (ratio of used space at the node to total capacity of the node) differs 
> from the utilization of the cluster (ratio of used space in the cluster to total 
> capacity of the cluster) by no more than the threshold value. The smaller the 
> threshold, the more balanced a cluster will become. It takes more time to run 
> the balancer for small threshold values. Also for a very small threshold the 
> cluster may not be able to reach the balanced state when applications write 
> and delete files concurrently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3171) The DatanodeID "name" field is overloaded

2012-03-31 Thread Eli Collins (Created) (JIRA)
The DatanodeID "name" field is overloaded 
--

 Key: HDFS-3171
 URL: https://issues.apache.org/jira/browse/HDFS-3171
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Reporter: Eli Collins
Assignee: Eli Collins


The DatanodeID "name" field is currently overloaded: when the DN creates a 
DatanodeID to register with the NN, it sets "name" to the datanode hostname, 
which is the DN's "hostName" member. This is not necessarily an FQDN; it is either 
set explicitly or determined by the DNS class, which could return the machine's 
hostname or the result of a DNS lookup, if configured to do so. The NN then 
clobbers the "name" field of the DatanodeID with the IP part of the new 
DatanodeID "name" field it creates (and sets the DatanodeID "hostName" field to 
the reported "name"). The DN gets the DatanodeID back from the NN and clobbers 
its "hostName" member with the "name" field of the returned DatanodeID. This 
makes the code hard to reason about, e.g. DN#getMachineName sometimes returns a 
hostname and sometimes not, depending on when it's called in sequence with the 
registration. Ditto for uses of the "name" field. I think these contortions 
were originally performed because the DatanodeID didn't have a hostName field 
(it was part of DatanodeInfo) and so there was no way to communicate both at 
the same time. Now that the hostName field is in DatanodeID (as of HDFS-3164) 
we can establish the invariant that the "name" field always and only has an IP 
address and the "hostName" field always and only has a hostname.

In HDFS-3144 I'm going to rename the "name" field so it's clear that it contains 
an IP address. The above is enough scope for one change.
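
A minimal sketch of the proposed invariant (field names from the discussion; the accessors are illustrative):

{code}
// After HDFS-3164/HDFS-3171, the intended invariant is:
//   name     - always and only an IP(:port), assigned by the NN
//   hostName - always and only a hostname, reported by the DN
public class DatanodeID {
  protected String name;     // e.g. "10.0.0.1:50010"
  protected String hostName; // e.g. "dn1.example.com"

  public String getName()     { return name; }
  public String getHostName() { return hostName; }
}
{code}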

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3164) Move DatanodeInfo#hostName to DatanodeID

2012-03-31 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243221#comment-13243221
 ] 

Aaron T. Myers commented on HDFS-3164:
--

One question: given that you removed DFS_NAMENODE_UPGRADE_PERMISSION_DEFAULT, 
can we not also remove DFS_NAMENODE_UPGRADE_PERMISSION_KEY?

Other than that, the patch looks good to me. +1 pending Jenkins and an answer 
to the question above.

> Move DatanodeInfo#hostName to DatanodeID
> 
>
> Key: HDFS-3164
> URL: https://issues.apache.org/jira/browse/HDFS-3164
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3164.txt
>
>
> Like HDFS-3138 (the ipcPort) the hostName field in DatanodeInfo is not 
> ephemeral and should be in DatanodeID. This also allows us to fix up the issue 
> where the DatanodeID#name field is overloaded (the DN sets it to a hostname, 
> then the NN clobbers it with an IP, and then the DN clobbers its hostname 
> field with this IP). If the DN can specify both a "name" and "hostname" in 
> the DatanodeID then this code becomes simpler. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3070) HDFS balancer doesn't ensure that hdfs-site.xml is loaded

2012-03-31 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243215#comment-13243215
 ] 

Hudson commented on HDFS-3070:
--

Integrated in Hadoop-Common-trunk-Commit #1958 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1958/])
HDFS-3070. HDFS balancer doesn't ensure that hdfs-site.xml is loaded. 
Contributed by Aaron T. Myers. (Revision 1307841)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307841
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java


> HDFS balancer doesn't ensure that hdfs-site.xml is loaded
> -
>
> Key: HDFS-3070
> URL: https://issues.apache.org/jira/browse/HDFS-3070
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 2.0.0
>Reporter: Stephen Chu
>Assignee: Aaron T. Myers
> Fix For: 2.0.0
>
> Attachments: HDFS-3070.patch, unbalanced_nodes.png, 
> unbalanced_nodes_inservice.png
>
>
> I TeraGenerated data into DataNodes styx01 and styx02. Looking at the web UI, 
> both have over 3% disk usage.
> Attached is a screenshot of the Live Nodes web UI.
> On styx01, I run the _hdfs balancer_ command with threshold 1% and don't see 
> the blocks being balanced across all 4 datanodes (all blocks on styx01 and 
> styx02 stay put).
> HA is currently enabled.
> [schu@styx01 ~]$ hdfs haadmin -getServiceState nn1
> active
> [schu@styx01 ~]$ hdfs balancer -threshold 1
> 12/03/08 10:10:32 INFO balancer.Balancer: Using a threshold of 1.0
> 12/03/08 10:10:32 INFO balancer.Balancer: namenodes = []
> 12/03/08 10:10:32 INFO balancer.Balancer: p = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=1.0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> Balancing took 95.0 milliseconds
> [schu@styx01 ~]$ 
> I believe with a threshold of 1% the balancer should trigger blocks being 
> moved across DataNodes, right? I am curious about the "namenodes = []" from 
> the above output.
> [schu@styx01 ~]$ hadoop version
> Hadoop 0.24.0-SNAPSHOT
> Subversion 
> git://styx01.sf.cloudera.com/home/schu/hadoop-common/hadoop-common-project/hadoop-common
>  -r f6a577d697bbcd04ffbc568167c97b79479ff319
> Compiled by schu on Thu Mar  8 15:32:50 PST 2012
> From source with checksum ec971a6e7316f7fbf471b617905856b8
> From 
> http://hadoop.apache.org/hdfs/docs/r0.21.0/api/org/apache/hadoop/hdfs/server/balancer/Balancer.html:
> The threshold parameter is a fraction in the range of (0%, 100%) with a 
> default value of 10%. The threshold sets a target for whether the cluster is 
> balanced. A cluster is balanced if for each datanode, the utilization of the 
> node (ratio of used space at the node to total capacity of the node) differs 
> from the utilization of the cluster (ratio of used space in the cluster to total 
> capacity of the cluster) by no more than the threshold value. The smaller the 
> threshold, the more balanced a cluster will become. It takes more time to run 
> the balancer for small threshold values. Also for a very small threshold the 
> cluster may not be able to reach the balanced state when applications write 
> and delete files concurrently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3070) HDFS balancer doesn't ensure that hdfs-site.xml is loaded

2012-03-31 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243214#comment-13243214
 ] 

Hudson commented on HDFS-3070:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2033 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2033/])
HDFS-3070. HDFS balancer doesn't ensure that hdfs-site.xml is loaded. 
Contributed by Aaron T. Myers. (Revision 1307841)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307841
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java


> HDFS balancer doesn't ensure that hdfs-site.xml is loaded
> -
>
> Key: HDFS-3070
> URL: https://issues.apache.org/jira/browse/HDFS-3070
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 2.0.0
>Reporter: Stephen Chu
>Assignee: Aaron T. Myers
> Fix For: 2.0.0
>
> Attachments: HDFS-3070.patch, unbalanced_nodes.png, 
> unbalanced_nodes_inservice.png
>
>
> I TeraGenerated data into DataNodes styx01 and styx02. Looking at the web UI, 
> both have over 3% disk usage.
> Attached is a screenshot of the Live Nodes web UI.
> On styx01, I run the _hdfs balancer_ command with threshold 1% and don't see 
> the blocks being balanced across all 4 datanodes (all blocks on styx01 and 
> styx02 stay put).
> HA is currently enabled.
> [schu@styx01 ~]$ hdfs haadmin -getServiceState nn1
> active
> [schu@styx01 ~]$ hdfs balancer -threshold 1
> 12/03/08 10:10:32 INFO balancer.Balancer: Using a threshold of 1.0
> 12/03/08 10:10:32 INFO balancer.Balancer: namenodes = []
> 12/03/08 10:10:32 INFO balancer.Balancer: p = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=1.0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> Balancing took 95.0 milliseconds
> [schu@styx01 ~]$ 
> I believe with a threshold of 1% the balancer should trigger blocks being 
> moved across DataNodes, right? I am curious about the "namenodes = []" from 
> the above output.
> [schu@styx01 ~]$ hadoop version
> Hadoop 0.24.0-SNAPSHOT
> Subversion 
> git://styx01.sf.cloudera.com/home/schu/hadoop-common/hadoop-common-project/hadoop-common
>  -r f6a577d697bbcd04ffbc568167c97b79479ff319
> Compiled by schu on Thu Mar  8 15:32:50 PST 2012
> From source with checksum ec971a6e7316f7fbf471b617905856b8
> From 
> http://hadoop.apache.org/hdfs/docs/r0.21.0/api/org/apache/hadoop/hdfs/server/balancer/Balancer.html:
> The threshold parameter is a fraction in the range of (0%, 100%) with a 
> default value of 10%. The threshold sets a target for whether the cluster is 
> balanced. A cluster is balanced if for each datanode, the utilization of the 
> node (ratio of used space at the node to total capacity of the node) differs 
> from the utilization of the cluster (ratio of used space in the cluster to total 
> capacity of the cluster) by no more than the threshold value. The smaller the 
> threshold, the more balanced a cluster will become. It takes more time to run 
> the balancer for small threshold values. Also for a very small threshold the 
> cluster may not be able to reach the balanced state when applications write 
> and delete files concurrently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3070) HDFS balancer doesn't ensure that hdfs-site.xml is loaded

2012-03-31 Thread Aaron T. Myers (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3070:
-

   Resolution: Fixed
Fix Version/s: 2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've just committed this to branch-2 and trunk.

Thanks a lot for the reviews, Uma and Eli.

> HDFS balancer doesn't ensure that hdfs-site.xml is loaded
> -
>
> Key: HDFS-3070
> URL: https://issues.apache.org/jira/browse/HDFS-3070
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 2.0.0
>Reporter: Stephen Chu
>Assignee: Aaron T. Myers
> Fix For: 2.0.0
>
> Attachments: HDFS-3070.patch, unbalanced_nodes.png, 
> unbalanced_nodes_inservice.png
>
>
> I TeraGenerated data into DataNodes styx01 and styx02. Looking at the web UI, 
> both have over 3% disk usage.
> Attached is a screenshot of the Live Nodes web UI.
> On styx01, I run the _hdfs balancer_ command with threshold 1% and don't see 
> the blocks being balanced across all 4 datanodes (all blocks on styx01 and 
> styx02 stay put).
> HA is currently enabled.
> [schu@styx01 ~]$ hdfs haadmin -getServiceState nn1
> active
> [schu@styx01 ~]$ hdfs balancer -threshold 1
> 12/03/08 10:10:32 INFO balancer.Balancer: Using a threshold of 1.0
> 12/03/08 10:10:32 INFO balancer.Balancer: namenodes = []
> 12/03/08 10:10:32 INFO balancer.Balancer: p = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=1.0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> Balancing took 95.0 milliseconds
> [schu@styx01 ~]$ 
> I believe with a threshold of 1% the balancer should trigger blocks being 
> moved across DataNodes, right? I am curious about the "namenodes = []" from 
> the above output.
> [schu@styx01 ~]$ hadoop version
> Hadoop 0.24.0-SNAPSHOT
> Subversion 
> git://styx01.sf.cloudera.com/home/schu/hadoop-common/hadoop-common-project/hadoop-common
>  -r f6a577d697bbcd04ffbc568167c97b79479ff319
> Compiled by schu on Thu Mar  8 15:32:50 PST 2012
> From source with checksum ec971a6e7316f7fbf471b617905856b8
> From 
> http://hadoop.apache.org/hdfs/docs/r0.21.0/api/org/apache/hadoop/hdfs/server/balancer/Balancer.html:
> The threshold parameter is a fraction in the range of (0%, 100%) with a 
> default value of 10%. The threshold sets a target for whether the cluster is 
> balanced. A cluster is balanced if for each datanode, the utilization of the 
> node (ratio of used space at the node to total capacity of the node) differs 
> from the utilization of the (ratio of used space in the cluster to total 
> capacity of the cluster) by no more than the threshold value. The smaller the 
> threshold, the more balanced a cluster will become. It takes more time to run 
> the balancer for small threshold values. Also for a very small threshold the 
> cluster may not be able to reach the balanced state when applications write 
> and delete files concurrently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3070) HDFS balancer doesn't ensure that hdfs-site.xml is loaded

2012-03-31 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243208#comment-13243208
 ] 

Aaron T. Myers commented on HDFS-3070:
--

bq. As I remember, in our Jenkins a separate JVM is spawned for each test 
class, no?

True, but if that test class starts a MiniDFSCluster to run the balancer 
against, then the test won't detect any problem with the balancer, since the 
MiniDFSCluster will cause HdfsConfiguration to be class-loaded.
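
For reference, the class-loading effect in question comes from a static block 
along these lines (a simplified sketch, not the complete HdfsConfiguration 
source):

{code}
public class HdfsConfiguration extends Configuration {
  static {
    // Runs exactly once, when the class is first loaded by any code path,
    // e.g. when a test constructs a MiniDFSCluster.
    Configuration.addDefaultResource("hdfs-default.xml");
    Configuration.addDefaultResource("hdfs-site.xml");
  }
}
{code}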

Uma, if I'm misunderstanding what you're proposing, perhaps you could post some 
code to illustrate how this would work? If you do, I'll be sure to review it 
promptly.

In the meantime, I'm going to go ahead and commit this patch, since everyone 
seems to agree that it will fix the bug.

> HDFS balancer doesn't ensure that hdfs-site.xml is loaded
> -
>
> Key: HDFS-3070
> URL: https://issues.apache.org/jira/browse/HDFS-3070
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 2.0.0
>Reporter: Stephen Chu
>Assignee: Aaron T. Myers
> Attachments: HDFS-3070.patch, unbalanced_nodes.png, 
> unbalanced_nodes_inservice.png
>
>
> I TeraGenerated data into DataNodes styx01 and styx02. Looking at the web UI, 
> both have over 3% disk usage.
> Attached is a screenshot of the Live Nodes web UI.
> On styx01, I run the _hdfs balancer_ command with threshold 1% and don't see 
> the blocks being balanced across all 4 datanodes (all blocks on styx01 and 
> styx02 stay put).
> HA is currently enabled.
> [schu@styx01 ~]$ hdfs haadmin -getServiceState nn1
> active
> [schu@styx01 ~]$ hdfs balancer -threshold 1
> 12/03/08 10:10:32 INFO balancer.Balancer: Using a threshold of 1.0
> 12/03/08 10:10:32 INFO balancer.Balancer: namenodes = []
> 12/03/08 10:10:32 INFO balancer.Balancer: p = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=1.0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> Balancing took 95.0 milliseconds
> [schu@styx01 ~]$ 
> I believe with a threshold of 1% the balancer should trigger blocks being 
> moved across DataNodes, right? I am curious about the "namenodes = []" from 
> the above output.
> [schu@styx01 ~]$ hadoop version
> Hadoop 0.24.0-SNAPSHOT
> Subversion 
> git://styx01.sf.cloudera.com/home/schu/hadoop-common/hadoop-common-project/hadoop-common
>  -r f6a577d697bbcd04ffbc568167c97b79479ff319
> Compiled by schu on Thu Mar  8 15:32:50 PST 2012
> From source with checksum ec971a6e7316f7fbf471b617905856b8
> From 
> http://hadoop.apache.org/hdfs/docs/r0.21.0/api/org/apache/hadoop/hdfs/server/balancer/Balancer.html:
> The threshold parameter is a fraction in the range of (0%, 100%) with a 
> default value of 10%. The threshold sets a target for whether the cluster is 
> balanced. A cluster is balanced if for each datanode, the utilization of the 
> node (ratio of used space at the node to total capacity of the node) differs 
> from the utilization of the (ratio of used space in the cluster to total 
> capacity of the cluster) by no more than the threshold value. The smaller the 
> threshold, the more balanced a cluster will become. It takes more time to run 
> the balancer for small threshold values. Also for a very small threshold the 
> cluster may not be able to reach the balanced state when applications write 
> and delete files concurrently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3070) HDFS balancer doesn't ensure that hdfs-site.xml is loaded

2012-03-31 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243201#comment-13243201
 ] 

Eli Collins commented on HDFS-3070:
---

+1 to HDFS-3070.patch, looks good

> HDFS balancer doesn't ensure that hdfs-site.xml is loaded
> -
>
> Key: HDFS-3070
> URL: https://issues.apache.org/jira/browse/HDFS-3070
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 2.0.0
>Reporter: Stephen Chu
>Assignee: Aaron T. Myers
> Attachments: HDFS-3070.patch, unbalanced_nodes.png, 
> unbalanced_nodes_inservice.png
>
>
> I TeraGenerated data into DataNodes styx01 and styx02. Looking at the web UI, 
> both have over 3% disk usage.
> Attached is a screenshot of the Live Nodes web UI.
> On styx01, I run the _hdfs balancer_ command with threshold 1% and don't see 
> the blocks being balanced across all 4 datanodes (all blocks on styx01 and 
> styx02 stay put).
> HA is currently enabled.
> [schu@styx01 ~]$ hdfs haadmin -getServiceState nn1
> active
> [schu@styx01 ~]$ hdfs balancer -threshold 1
> 12/03/08 10:10:32 INFO balancer.Balancer: Using a threshold of 1.0
> 12/03/08 10:10:32 INFO balancer.Balancer: namenodes = []
> 12/03/08 10:10:32 INFO balancer.Balancer: p = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=1.0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> Balancing took 95.0 milliseconds
> [schu@styx01 ~]$ 
> I believe with a threshold of 1% the balancer should trigger blocks being 
> moved across DataNodes, right? I am curious about the "namenodes = []" from 
> the above output.
> [schu@styx01 ~]$ hadoop version
> Hadoop 0.24.0-SNAPSHOT
> Subversion 
> git://styx01.sf.cloudera.com/home/schu/hadoop-common/hadoop-common-project/hadoop-common
>  -r f6a577d697bbcd04ffbc568167c97b79479ff319
> Compiled by schu on Thu Mar  8 15:32:50 PST 2012
> From source with checksum ec971a6e7316f7fbf471b617905856b8
> From 
> http://hadoop.apache.org/hdfs/docs/r0.21.0/api/org/apache/hadoop/hdfs/server/balancer/Balancer.html:
> The threshold parameter is a fraction in the range of (0%, 100%) with a 
> default value of 10%. The threshold sets a target for whether the cluster is 
> balanced. A cluster is balanced if for each datanode, the utilization of the 
> node (ratio of used space at the node to total capacity of the node) differs 
> from the utilization of the (ratio of used space in the cluster to total 
> capacity of the cluster) by no more than the threshold value. The smaller the 
> threshold, the more balanced a cluster will become. It takes more time to run 
> the balancer for small threshold values. Also for a very small threshold the 
> cluster may not be able to reach the balanced state when applications write 
> and delete files concurrently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3164) Move DatanodeInfo#hostName to DatanodeID

2012-03-31 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3164:
--

Attachment: hdfs-3164.txt

Patch attached.

- Moves the hostName field and getter to DatanodeID. Note that getHostName 
still returns getHost if hostName is empty; that will be addressed in 
HDFS-3144, as this change is just code motion (see the sketch after this list).
- Restricts the visibility of DatanodeID fields and uses getters/setters
- Removes the FSN#defaultPermission dead code (dead as of HDFS-3137)
- Uses Text instead of DeprecatedUTF8 for DatanodeID string fields, since we're 
no longer serializing/deserializing them from images (as of HDFS-3137)
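
A hedged sketch of the shape of the first change (field names taken from the 
discussion above; this is not the attached patch):

{code}
public class DatanodeID {
  private String name = "";      // "host:port" as registered with the NN
  private String hostName = "";  // moved here from DatanodeInfo

  public String getHost() {
    int colon = name.indexOf(':');
    return colon < 0 ? name : name.substring(0, colon);
  }

  // Still falls back to getHost() when hostName is empty; HDFS-3144 will
  // address that fallback.
  public String getHostName() {
    return hostName.isEmpty() ? getHost() : hostName;
  }
}
{code}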

> Move DatanodeInfo#hostName to DatanodeID
> 
>
> Key: HDFS-3164
> URL: https://issues.apache.org/jira/browse/HDFS-3164
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3164.txt
>
>
> Like HDFS-3138 (the ipcPort) the hostName field in DatanodeInfo is not 
> ephemeral and should be in DatanodeID. This also allows us to fixup the issue 
> where the DatanodeID#name field is overloaded (the DN sets it to a hostname, 
> then the NN clobbers it with an IP, and then the DN clobbers it's hostname 
> field with this IP). If the DN can specify both a "name" and "hostname" in 
> the DatanodeID then this code becomes simpler. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3164) Move DatanodeInfo#hostName to DatanodeID

2012-03-31 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3164:
--

Status: Patch Available  (was: Open)

> Move DatanodeInfo#hostName to DatanodeID
> 
>
> Key: HDFS-3164
> URL: https://issues.apache.org/jira/browse/HDFS-3164
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: data-node
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-3164.txt
>
>
> Like HDFS-3138 (the ipcPort) the hostName field in DatanodeInfo is not 
> ephemeral and should be in DatanodeID. This also allows us to fixup the issue 
> where the DatanodeID#name field is overloaded (the DN sets it to a hostname, 
> then the NN clobbers it with an IP, and then the DN clobbers it's hostname 
> field with this IP). If the DN can specify both a "name" and "hostname" in 
> the DatanodeID then this code becomes simpler. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-1599) Umbrella Jira for Improving HBASE support in HDFS

2012-03-31 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243178#comment-13243178
 ] 

Uma Maheswara Rao G commented on HDFS-1599:
---

10) getFileLength from DFSInputStream

> Umbrella Jira for Improving HBASE support in HDFS
> -
>
> Key: HDFS-1599
> URL: https://issues.apache.org/jira/browse/HDFS-1599
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sanjay Radia
>
> Umbrella Jira for improved HBase support in HDFS

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3108) [UI] Few Namenode links are not working

2012-03-31 Thread amith (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243176#comment-13243176
 ] 

amith commented on HDFS-3108:
-

Hi Brahma,
Scenario 1 is similar to HDFS-2025.
Scenario 2 is occurring because the DataNode has an incorrect or missing host 
mapping for the NameNode.

Can you please verify and report whether my observation is correct? :)

> [UI] Few Namenode links are not working
> ---
>
> Key: HDFS-3108
> URL: https://issues.apache.org/jira/browse/HDFS-3108
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.23.0, 0.23.1
>Reporter: Brahma Reddy Battula
>Priority: Minor
> Fix For: 0.23.3
>
> Attachments: Scenario2_Trace.txt
>
>
> Scenario 1
> ==
> Once tail a file from UI and click on "Go Back to File View",I am getting 
> HTTP ERROR 404
> Scenario 2
> ===
> Frequently I am getting following execption If a click on (BrowseFileSystem 
> or anyfile)java.lang.IllegalArgumentException: java.net.UnknownHostException: 
> HOST-10-18-40-24

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3138) Move DatanodeInfo#ipcPort to DatanodeID

2012-03-31 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243157#comment-13243157
 ] 

Hudson commented on HDFS-3138:
--

Integrated in Hadoop-Mapreduce-trunk #1036 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1036/])
HDFS-3138. Move DatanodeInfo#ipcPort to DatanodeID. Contributed by Eli 
Collins (Revision 1307553)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307553
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeID.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java


> Move DatanodeInfo#ipcPort to DatanodeID
> ---
>
> Key: HDFS-3138
> URL: https://issues.apache.org/jira/browse/HDFS-3138
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 2.0.0
>
> Attachments: hdfs-3138.txt, hdfs-3138.txt
>
>
> We can fix the following TODO once HDFS-3137 is committed.
> {code}
> //TODO: move it to DatanodeID once DatanodeID is not stored in FSImage
> out.writeShort(ipcPort);
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3138) Move DatanodeInfo#ipcPort to DatanodeID

2012-03-31 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243133#comment-13243133
 ] 

Hudson commented on HDFS-3138:
--

Integrated in Hadoop-Hdfs-trunk #1001 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1001/])
HDFS-3138. Move DatanodeInfo#ipcPort to DatanodeID. Contributed by Eli 
Collins (Revision 1307553)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1307553
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeID.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java


> Move DatanodeInfo#ipcPort to DatanodeID
> ---
>
> Key: HDFS-3138
> URL: https://issues.apache.org/jira/browse/HDFS-3138
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 2.0.0
>
> Attachments: hdfs-3138.txt, hdfs-3138.txt
>
>
> We can fix the following TODO once HDFS-3137 is committed.
> {code}
> //TODO: move it to DatanodeID once DatanodeID is not stored in FSImage
> out.writeShort(ipcPort);
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-1599) Umbrella Jira for Improving HBASE support in HDFS

2012-03-31 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243105#comment-13243105
 ] 

Uma Maheswara Rao G commented on HDFS-1599:
---

@Nicholas,

Currently I can see the below points where HBase is invoking HDFS APIs.

1) Accessing the Cache class from FileSystem
{code}
Field cacheField = FileSystem.class.getDeclaredField("CACHE");
cacheField.setAccessible(true);
Object cacheInstance = cacheField.get(fs);
// 'field' is the Field for the cache's internal client-finalizer thread;
// its lookup is elided in this snippet
field.setAccessible(true);
hdfsClientFinalizer = (Thread) field.get(cacheInstance);
{code}

2) Invoking the getJar method from JarFinder (completed in the sketch after 
this list)
{code}
  Class jarFinder = Class.forName("org.apache.hadoop.util.JarFinder");
  // hadoop-0.23 has a JarFinder class that will create the jar
  // if it doesn't exist.  Note that this is needed to run the mapreduce
  // unit tests post-0.23, because mapreduce v2 requires the relevant jars
  // to be in the mr cluster to do output, split, etc.  At unit test time,
  // the hbase jars do not exist, so we need to create some.  Note that we
  // can safely fall back to findContainingJars for pre-0.23 mapreduce.
  Method m = jarFinder.getMethod("getJar", Class.class);
{code}

3) accessing the getNumCurrentReplicas method from DFSOutputStream
4) accessing the createWriter method from SequenceFile.Writer
5) accessing the syncFs method from the SequenceFile writer
6) the hflush APIs
7) accessing the 'out' variable from FSDataOutputStream
8) the recoverLease API from DistributedFileSystem
9) using org.apache.hadoop.hdfs.protocol.FSConstants.SafeModeAction.SAFEMODE_GET
   HBase is currently broken against the 0.23 version because of this constant 
usage.
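
For item 2, the reflective lookup would typically be completed along these 
lines (a hedged sketch; the argument class is a placeholder, not HBase's 
actual call site):

{code}
// Invoke the static JarFinder.getJar(Class) found above via reflection.
// 'm' is the Method from item 2's snippet; replace the placeholder class
// with whichever class's containing jar needs to be created.
String jarPath = (String) m.invoke(null, SomeHBaseClass.class);
{code}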



> Umbrella Jira for Improving HBASE support in HDFS
> -
>
> Key: HDFS-1599
> URL: https://issues.apache.org/jira/browse/HDFS-1599
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sanjay Radia
>
> Umbrella Jira for improved HBase support in HDFS

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3000) Add a public API for setting quotas

2012-03-31 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243074#comment-13243074
 ] 

Hadoop QA commented on HDFS-3000:
-

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520771/HDFS-3000.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2139//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2139//console

This message is automatically generated.

> Add a public API for setting quotas
> ---
>
> Key: HDFS-3000
> URL: https://issues.apache.org/jira/browse/HDFS-3000
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 2.0.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-3000.patch, HDFS-3000.patch, HDFS-3000.patch
>
>
> Currently one can set the quota of a file or directory from the command line, 
> but if a user wants to set it programmatically, they need to use 
> DistributedFileSystem, which is annotated InterfaceAudience.Private.
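
For context, the private route being described looks roughly like this (a 
hedged sketch; the path and quota values are illustrative):

{code}
// Requires casting to the @InterfaceAudience.Private DistributedFileSystem,
// which is exactly what this JIRA proposes a public alternative to.
DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
dfs.setQuota(new Path("/user/example"),
    1000L,                       // namespace quota (number of names)
    10L * 1024 * 1024 * 1024);   // diskspace quota (bytes)
{code}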

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3070) HDFS balancer doesn't ensure that hdfs-site.xml is loaded

2012-03-31 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243073#comment-13243073
 ] 

Uma Maheswara Rao G commented on HDFS-3070:
---

{quote}
So, the only way to write a test that would catch this would be to fork a new 
JVM from the tests to run the balancer and examine the effects.
{quote}
As I remember, in our Jenkins a separate JVM is spawned for each test class, 
no?

{quote}
Doing that doesn't seem worth it to me for something that's such a simple 
bug.{quote}
I agree that this is a very simple fix, but it has a functional effect.

If we had the test in the way suggested above, it would have caught this when 
the refactoring for Federation introduced the dependency on RPC addresses at 
startup. I am not insisting on the change; if you feel it is not required, you 
can leave it. I won't block the patch over a simple test change.

+1

> HDFS balancer doesn't ensure that hdfs-site.xml is loaded
> -
>
> Key: HDFS-3070
> URL: https://issues.apache.org/jira/browse/HDFS-3070
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 2.0.0
>Reporter: Stephen Chu
>Assignee: Aaron T. Myers
> Attachments: HDFS-3070.patch, unbalanced_nodes.png, 
> unbalanced_nodes_inservice.png
>
>
> I TeraGenerated data into DataNodes styx01 and styx02. Looking at the web UI, 
> both have over 3% disk usage.
> Attached is a screenshot of the Live Nodes web UI.
> On styx01, I run the _hdfs balancer_ command with threshold 1% and don't see 
> the blocks being balanced across all 4 datanodes (all blocks on styx01 and 
> styx02 stay put).
> HA is currently enabled.
> [schu@styx01 ~]$ hdfs haadmin -getServiceState nn1
> active
> [schu@styx01 ~]$ hdfs balancer -threshold 1
> 12/03/08 10:10:32 INFO balancer.Balancer: Using a threshold of 1.0
> 12/03/08 10:10:32 INFO balancer.Balancer: namenodes = []
> 12/03/08 10:10:32 INFO balancer.Balancer: p = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=1.0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> Balancing took 95.0 milliseconds
> [schu@styx01 ~]$ 
> I believe with a threshold of 1% the balancer should trigger blocks being 
> moved across DataNodes, right? I am curious about the "namenodes = []" from 
> the above output.
> [schu@styx01 ~]$ hadoop version
> Hadoop 0.24.0-SNAPSHOT
> Subversion 
> git://styx01.sf.cloudera.com/home/schu/hadoop-common/hadoop-common-project/hadoop-common
>  -r f6a577d697bbcd04ffbc568167c97b79479ff319
> Compiled by schu on Thu Mar  8 15:32:50 PST 2012
> From source with checksum ec971a6e7316f7fbf471b617905856b8
> From 
> http://hadoop.apache.org/hdfs/docs/r0.21.0/api/org/apache/hadoop/hdfs/server/balancer/Balancer.html:
> The threshold parameter is a fraction in the range of (0%, 100%) with a 
> default value of 10%. The threshold sets a target for whether the cluster is 
> balanced. A cluster is balanced if for each datanode, the utilization of the 
> node (ratio of used space at the node to total capacity of the node) differs 
> from the utilization of the (ratio of used space in the cluster to total 
> capacity of the cluster) by no more than the threshold value. The smaller the 
> threshold, the more balanced a cluster will become. It takes more time to run 
> the balancer for small threshold values. Also for a very small threshold the 
> cluster may not be able to reach the balanced state when applications write 
> and delete files concurrently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3167) CLI-based driver for MiniDFSCluster

2012-03-31 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243072#comment-13243072
 ] 

Hadoop QA commented on HDFS-3167:
-

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520769/HDFS-3167.1.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 5 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2138//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2138//console

This message is automatically generated.

> CLI-based driver for MiniDFSCluster
> ---
>
> Key: HDFS-3167
> URL: https://issues.apache.org/jira/browse/HDFS-3167
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 2.0.0
>Reporter: Henry Robinson
>Assignee: Henry Robinson
>Priority: Minor
> Attachments: HDFS-3167.1.patch, HDFS-3167.patch
>
>
> Picking up a thread again from MAPREDUCE-987, I've found it very useful to 
> have a CLI driver for running a single-process DFS cluster, particularly when 
> developing features in HDFS clients. For example, being able to spin up a 
> local cluster easily was tremendously useful for correctness testing of 
> HDFS-2834. 
> I'd like to contribute a class based on the patch for MAPREDUCE-987 we've 
> been using fairly extensively. Only for DFS, not MR since much has changed 
> MR-side since the original patch. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3070) HDFS balancer doesn't ensure that hdfs-site.xml is loaded

2012-03-31 Thread Aaron T. Myers (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3070:
-

Summary: HDFS balancer doesn't ensure that hdfs-site.xml is loaded  (was: 
hdfs balancer doesn't balance blocks between datanodes)

> HDFS balancer doesn't ensure that hdfs-site.xml is loaded
> -
>
> Key: HDFS-3070
> URL: https://issues.apache.org/jira/browse/HDFS-3070
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 2.0.0
>Reporter: Stephen Chu
>Assignee: Aaron T. Myers
> Attachments: HDFS-3070.patch, unbalanced_nodes.png, 
> unbalanced_nodes_inservice.png
>
>
> I TeraGenerated data into DataNodes styx01 and styx02. Looking at the web UI, 
> both have over 3% disk usage.
> Attached is a screenshot of the Live Nodes web UI.
> On styx01, I run the _hdfs balancer_ command with threshold 1% and don't see 
> the blocks being balanced across all 4 datanodes (all blocks on styx01 and 
> styx02 stay put).
> HA is currently enabled.
> [schu@styx01 ~]$ hdfs haadmin -getServiceState nn1
> active
> [schu@styx01 ~]$ hdfs balancer -threshold 1
> 12/03/08 10:10:32 INFO balancer.Balancer: Using a threshold of 1.0
> 12/03/08 10:10:32 INFO balancer.Balancer: namenodes = []
> 12/03/08 10:10:32 INFO balancer.Balancer: p = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=1.0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> Balancing took 95.0 milliseconds
> [schu@styx01 ~]$ 
> I believe with a threshold of 1% the balancer should trigger blocks being 
> moved across DataNodes, right? I am curious about the "namenodes = []" from 
> the above output.
> [schu@styx01 ~]$ hadoop version
> Hadoop 0.24.0-SNAPSHOT
> Subversion 
> git://styx01.sf.cloudera.com/home/schu/hadoop-common/hadoop-common-project/hadoop-common
>  -r f6a577d697bbcd04ffbc568167c97b79479ff319
> Compiled by schu on Thu Mar  8 15:32:50 PST 2012
> From source with checksum ec971a6e7316f7fbf471b617905856b8
> From 
> http://hadoop.apache.org/hdfs/docs/r0.21.0/api/org/apache/hadoop/hdfs/server/balancer/Balancer.html:
> The threshold parameter is a fraction in the range of (0%, 100%) with a 
> default value of 10%. The threshold sets a target for whether the cluster is 
> balanced. A cluster is balanced if for each datanode, the utilization of the 
> node (ratio of used space at the node to total capacity of the node) differs 
> from the utilization of the (ratio of used space in the cluster to total 
> capacity of the cluster) by no more than the threshold value. The smaller the 
> threshold, the more balanced a cluster will become. It takes more time to run 
> the balancer for small threshold values. Also for a very small threshold the 
> cluster may not be able to reach the balanced state when applications write 
> and delete files concurrently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3070) hdfs balancer doesn't balance blocks between datanodes

2012-03-31 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243067#comment-13243067
 ] 

Aaron T. Myers commented on HDFS-3070:
--

Hi Uma,

bq. To catch this bug in the tests themselves, I would suggest calling 
runBalancerCLI...

I don't think this will actually expose the bug. The trouble isn't that the 
object isn't an instance of HdfsConfiguration, but rather that 
HdfsConfiguration never gets class-loaded, and therefore the static initializer 
that adds hdfs-default.xml and hdfs-site.xml as resources never gets called. 
Another perfectly valid solution would have been to continue to pass "null" for 
the configuration object, but to call HdfsConfiguration#init() somewhere 
(anywhere) in the Balancer. So, the only way to write a test that would catch 
this would be to fork a new JVM from the tests to run the balancer and examine 
the effects. Doing that doesn't seem worth it to me for something that's such a 
simple bug.
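
Concretely, the two options might look like this (a hedged sketch using the 
package-scope Cli tool from the discussion; this is not the committed code):

{code}
// Fix taken here: pass a concrete HdfsConfiguration, which class-loads it
// and registers hdfs-default.xml/hdfs-site.xml before the balancer runs.
int rc = ToolRunner.run(new HdfsConfiguration(), new Cli(), args);

// Equally valid alternative: keep passing null, but force class loading
// explicitly somewhere in the Balancer first.
HdfsConfiguration.init();
int rc2 = ToolRunner.run(null, new Cli(), args);
{code}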


bq. BTW, could you please edit the issue title?

Good idea. Will do.

> hdfs balancer doesn't balance blocks between datanodes
> --
>
> Key: HDFS-3070
> URL: https://issues.apache.org/jira/browse/HDFS-3070
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 2.0.0
>Reporter: Stephen Chu
>Assignee: Aaron T. Myers
> Attachments: HDFS-3070.patch, unbalanced_nodes.png, 
> unbalanced_nodes_inservice.png
>
>
> I TeraGenerated data into DataNodes styx01 and styx02. Looking at the web UI, 
> both have over 3% disk usage.
> Attached is a screenshot of the Live Nodes web UI.
> On styx01, I run the _hdfs balancer_ command with threshold 1% and don't see 
> the blocks being balanced across all 4 datanodes (all blocks on styx01 and 
> styx02 stay put).
> HA is currently enabled.
> [schu@styx01 ~]$ hdfs haadmin -getServiceState nn1
> active
> [schu@styx01 ~]$ hdfs balancer -threshold 1
> 12/03/08 10:10:32 INFO balancer.Balancer: Using a threshold of 1.0
> 12/03/08 10:10:32 INFO balancer.Balancer: namenodes = []
> 12/03/08 10:10:32 INFO balancer.Balancer: p = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=1.0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> Balancing took 95.0 milliseconds
> [schu@styx01 ~]$ 
> I believe with a threshold of 1% the balancer should trigger blocks being 
> moved across DataNodes, right? I am curious about the "namenodes = []" from 
> the above output.
> [schu@styx01 ~]$ hadoop version
> Hadoop 0.24.0-SNAPSHOT
> Subversion 
> git://styx01.sf.cloudera.com/home/schu/hadoop-common/hadoop-common-project/hadoop-common
>  -r f6a577d697bbcd04ffbc568167c97b79479ff319
> Compiled by schu on Thu Mar  8 15:32:50 PST 2012
> From source with checksum ec971a6e7316f7fbf471b617905856b8
> From 
> http://hadoop.apache.org/hdfs/docs/r0.21.0/api/org/apache/hadoop/hdfs/server/balancer/Balancer.html:
> The threshold parameter is a fraction in the range of (0%, 100%) with a 
> default value of 10%. The threshold sets a target for whether the cluster is 
> balanced. A cluster is balanced if for each datanode, the utilization of the 
> node (ratio of used space at the node to total capacity of the node) differs 
> from the utilization of the (ratio of used space in the cluster to total 
> capacity of the cluster) by no more than the threshold value. The smaller the 
> threshold, the more balanced a cluster will become. It takes more time to run 
> the balancer for small threshold values. Also for a very small threshold the 
> cluster may not be able to reach the balanced state when applications write 
> and delete files concurrently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3070) hdfs balancer doesn't balance blocks between datanodes

2012-03-31 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243063#comment-13243063
 ] 

Uma Maheswara Rao G commented on HDFS-3070:
---

BTW, could you please edit the issue title?

> hdfs balancer doesn't balance blocks between datanodes
> --
>
> Key: HDFS-3070
> URL: https://issues.apache.org/jira/browse/HDFS-3070
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 2.0.0
>Reporter: Stephen Chu
>Assignee: Aaron T. Myers
> Attachments: HDFS-3070.patch, unbalanced_nodes.png, 
> unbalanced_nodes_inservice.png
>
>
> I TeraGenerated data into DataNodes styx01 and styx02. Looking at the web UI, 
> both have over 3% disk usage.
> Attached is a screenshot of the Live Nodes web UI.
> On styx01, I run the _hdfs balancer_ command with threshold 1% and don't see 
> the blocks being balanced across all 4 datanodes (all blocks on styx01 and 
> styx02 stay put).
> HA is currently enabled.
> [schu@styx01 ~]$ hdfs haadmin -getServiceState nn1
> active
> [schu@styx01 ~]$ hdfs balancer -threshold 1
> 12/03/08 10:10:32 INFO balancer.Balancer: Using a threshold of 1.0
> 12/03/08 10:10:32 INFO balancer.Balancer: namenodes = []
> 12/03/08 10:10:32 INFO balancer.Balancer: p = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=1.0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> Balancing took 95.0 milliseconds
> [schu@styx01 ~]$ 
> I believe with a threshold of 1% the balancer should trigger blocks being 
> moved across DataNodes, right? I am curious about the "namenodes = []" from 
> the above output.
> [schu@styx01 ~]$ hadoop version
> Hadoop 0.24.0-SNAPSHOT
> Subversion 
> git://styx01.sf.cloudera.com/home/schu/hadoop-common/hadoop-common-project/hadoop-common
>  -r f6a577d697bbcd04ffbc568167c97b79479ff319
> Compiled by schu on Thu Mar  8 15:32:50 PST 2012
> From source with checksum ec971a6e7316f7fbf471b617905856b8
> From 
> http://hadoop.apache.org/hdfs/docs/r0.21.0/api/org/apache/hadoop/hdfs/server/balancer/Balancer.html:
> The threshold parameter is a fraction in the range of (0%, 100%) with a 
> default value of 10%. The threshold sets a target for whether the cluster is 
> balanced. A cluster is balanced if for each datanode, the utilization of the 
> node (ratio of used space at the node to total capacity of the node) differs 
> from the utilization of the (ratio of used space in the cluster to total 
> capacity of the cluster) by no more than the threshold value. The smaller the 
> threshold, the more balanced a cluster will become. It takes more time to run 
> the balancer for small threshold values. Also for a very small threshold the 
> cluster may not be able to reach the balanced state when applications write 
> and delete files concurrently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3070) hdfs balancer doesn't balance blocks between datanodes

2012-03-31 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243059#comment-13243059
 ] 

Uma Maheswara Rao G commented on HDFS-3070:
---

Aaron, you are right. We saw this yesterday and realized it. :-)
Before Federation, the balancer might not have needed to load properties from 
hdfs-site.xml; it could have proceeded with the default values set in the code, 
because Configuration loads the core-site.xml files on its own.

I agree with the fix of creating the HdfsConfiguration object and passing it 
in.

To catch this bug in the tests themselves, I would suggest calling 
runBalancerCLI (expose a new API from Balancer with package scope) and making 
the run method private.
{code}
static int runBalancerCLI(String[] args) throws Exception {
  return ToolRunner.run(null, new Cli(), args); // Here you have to fix
}
{code}

Let the main method and all tests call this function; a usage sketch follows.
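
A test could then drive the real CLI path directly, along these lines (a 
hypothetical test body; MiniDFSCluster setup and balance assertions elided):

{code}
// From a test in the same package as Balancer (hypothetical):
int exitCode = Balancer.runBalancerCLI(new String[] { "-threshold", "1" });
assertEquals(0, exitCode);
{code}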


Output from the tests:

{quote}
2012-03-31 12:19:47,340 INFO  balancer.Balancer (Balancer.java:parse(1508)) - 
Using a threshold of 10.0
2012-03-31 12:19:47,340 INFO  balancer.Balancer (Balancer.java:run(1387)) - 
namenodes = []
2012-03-31 12:19:47,340 INFO  balancer.Balancer (Balancer.java:run(1388)) - p   
  = Balancer.Parameters[BalancingPolicy.Node, threshold=10.0]
Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
Bytes Being Moved
Balancing took 1.0 milliseconds
2012-03-31 12:19:47,341 INFO  balancer.Balancer 
(TestBalancerWithMultipleNameNodes.java:runBalancer(164)) - BALANCER 2
2012-03-31 12:19:47,341 INFO  balancer.Balancer 
(TestBalancerWithMultipleNameNodes.java:wait(132)) - WAIT 
expectedUsedSpace=350, expectedTotalSpace=1000
2012-03-31 12:19:47,341 INFO  balancer.Balancer 
(TestBalancerWithMultipleNameNodes.java:runBalancer(166)) - BALANCER 3
2012-03-31 12:19:47,342 WARN  balancer.Balancer 
(TestBalancerWithMultipleNameNodes.java:runBalancer(183)) - datanodes[0]: 
getDfsUsed()=60, getCapacity()=500
2012-03-31 12:19:47,343 WARN  balancer.Balancer 
(TestBalancerWithMultipleNameNodes.java:runBalancer(183)) - datanodes[1]: 
getDfsUsed()=290, getCapacity()=500
2012-03-31 12:19:47,344 WARN  balancer.Balancer 
(TestBalancerWithMultipleNameNodes.java:runBalancer(200)) - datanodes 1 is not 
yet balanced: used=290, cap=500, avg=35.0
{quote}

Also, remove the HdfsConfiguration object creation from the balancer tests.

> hdfs balancer doesn't balance blocks between datanodes
> --
>
> Key: HDFS-3070
> URL: https://issues.apache.org/jira/browse/HDFS-3070
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 2.0.0
>Reporter: Stephen Chu
>Assignee: Aaron T. Myers
> Attachments: HDFS-3070.patch, unbalanced_nodes.png, 
> unbalanced_nodes_inservice.png
>
>
> I TeraGenerated data into DataNodes styx01 and styx02. Looking at the web UI, 
> both have over 3% disk usage.
> Attached is a screenshot of the Live Nodes web UI.
> On styx01, I run the _hdfs balancer_ command with threshold 1% and don't see 
> the blocks being balanced across all 4 datanodes (all blocks on styx01 and 
> styx02 stay put).
> HA is currently enabled.
> [schu@styx01 ~]$ hdfs haadmin -getServiceState nn1
> active
> [schu@styx01 ~]$ hdfs balancer -threshold 1
> 12/03/08 10:10:32 INFO balancer.Balancer: Using a threshold of 1.0
> 12/03/08 10:10:32 INFO balancer.Balancer: namenodes = []
> 12/03/08 10:10:32 INFO balancer.Balancer: p = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=1.0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> Balancing took 95.0 milliseconds
> [schu@styx01 ~]$ 
> I believe with a threshold of 1% the balancer should trigger blocks being 
> moved across DataNodes, right? I am curious about the "namenodes = []" from 
> the above output.
> [schu@styx01 ~]$ hadoop version
> Hadoop 0.24.0-SNAPSHOT
> Subversion 
> git://styx01.sf.cloudera.com/home/schu/hadoop-common/hadoop-common-project/hadoop-common
>  -r f6a577d697bbcd04ffbc568167c97b79479ff319
> Compiled by schu on Thu Mar  8 15:32:50 PST 2012
> From source with checksum ec971a6e7316f7fbf471b617905856b8
> From 
> http://hadoop.apache.org/hdfs/docs/r0.21.0/api/org/apache/hadoop/hdfs/server/balancer/Balancer.html:
> The threshold parameter is a fraction in the range of (0%, 100%) with a 
> default value of 10%. The threshold sets a target for whether the cluster is 
> balanced. A cluster is balanced if for each datanode, the utilization of the 
> node (ratio of used space at the node to total capacity of the node) differs 
> from the utilization of the (ratio of used space in the cluster to total 
> capacity of the cluster) by no more than the threshold value. The smaller the 
> threshold, the more balanced a cluster will