[jira] [Commented] (HDFS-2564) Cleanup unnecessary exceptions thrown and unnecessary casts

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153472#comment-13153472
 ] 

Hudson commented on HDFS-2564:
--

Integrated in Hadoop-Hdfs-trunk #868 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/868/])
HDFS-2564. Cleanup unnecessary exceptions thrown and unnecessary casts. 
Contributed by Hari Mankude

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1203950
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java


 Cleanup unnecessary exceptions thrown and unnecessary casts
 ---

 Key: HDFS-2564
 URL: https://issues.apache.org/jira/browse/HDFS-2564
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node, hdfs client, name-node
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Fix For: 0.24.0

 Attachments: HDFS-2564-1.txt, hadoop-2564.trunk.patch, 
 hadoop-2564.trunk.patch, hadoop-2564.trunk.patch


 Cleaning up some of the Java files that contain unnecessary exceptions and casts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2563) Some cleanup in BPOfferService

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153471#comment-13153471
 ] 

Hudson commented on HDFS-2563:
--

Integrated in Hadoop-Hdfs-trunk #868 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/868/])
HDFS-2563. Some cleanup in BPOfferService. Contributed by Todd Lipcon.

todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1203943
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeRegister.java


 Some cleanup in BPOfferService
 --

 Key: HDFS-2563
 URL: https://issues.apache.org/jira/browse/HDFS-2563
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.24.0, 0.23.1
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 0.24.0, 0.23.1

 Attachments: hdfs-2563.txt, hdfs-2563.txt


 BPOfferService is currently rather difficult to follow and not really 
 commented. This JIRA is to clean up the code a bit, add javadocs/comments 
 where necessary, and improve the formatting of the log messages.





[jira] [Commented] (HDFS-2563) Some cleanup in BPOfferService

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153474#comment-13153474
 ] 

Hudson commented on HDFS-2563:
--

Integrated in Hadoop-Hdfs-0.23-Build #81 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/81/])
HDFS-2563. Some cleanup in BPOfferService. Contributed by Todd Lipcon.

todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1203942
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeRegister.java







[jira] [Commented] (HDFS-2563) Some cleanup in BPOfferService

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153483#comment-13153483
 ] 

Hudson commented on HDFS-2563:
--

Integrated in Hadoop-Mapreduce-0.23-Build #98 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/98/])
HDFS-2563. Some cleanup in BPOfferService. Contributed by Todd Lipcon.

todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1203942
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeRegister.java







[jira] [Commented] (HDFS-2563) Some cleanup in BPOfferService

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153491#comment-13153491
 ] 

Hudson commented on HDFS-2563:
--

Integrated in Hadoop-Mapreduce-trunk #902 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/902/])
HDFS-2563. Some cleanup in BPOfferService. Contributed by Todd Lipcon.

todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1203943
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeRegister.java







[jira] [Commented] (HDFS-2564) Cleanup unnecessary exceptions thrown and unnecessary casts

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153492#comment-13153492
 ] 

Hudson commented on HDFS-2564:
--

Integrated in Hadoop-Mapreduce-trunk #902 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/902/])
HDFS-2564. Cleanup unnecessary exceptions thrown and unnecessary casts. 
Contributed by Hari Mankude

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1203950
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java







[jira] [Commented] (HDFS-1574) HDFS cannot be browsed from web UI while in safe mode

2011-11-19 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153499#comment-13153499
 ] 

Harsh J commented on HDFS-1574:
---

As reported in HDFS-2567, the file browser is also inaccessible when no 
DataNodes are available.

 HDFS cannot be browsed from web UI while in safe mode
 -

 Key: HDFS-1574
 URL: https://issues.apache.org/jira/browse/HDFS-1574
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Priority: Blocker
  Labels: newbie

 As of HDFS-984, the NN does not issue delegation tokens while in safe mode 
 (since doing so would require writing to the edit log). But the browsedfscontent 
 servlet relies on obtaining a delegation token before redirecting to a random 
 DN to browse the FS. Thus, the "Browse the filesystem" link does not work 
 while the NN is in safe mode.
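The failure mode described above can be sketched in isolation: the browse path must first mint a delegation token, and token creation is refused while the NameNode is in safe mode because it would write to the edit log. The class and method names below are illustrative stand-ins, not the actual NameNode code (the real logic lives in FSNamesystem and the browsedfscontent servlet):

```java
import java.io.IOException;

public class SafeModeBrowseSketch {
    // Hypothetical stand-in for the NameNode's safe-mode flag.
    static boolean inSafeMode = true;

    // Hypothetical stand-in for delegation-token issuance: refused in safe
    // mode because minting a token would require an edit-log write.
    static String getDelegationToken() throws IOException {
        if (inSafeMode) {
            throw new IOException("Cannot issue delegation token: NameNode is in safe mode");
        }
        return "dt-token";
    }

    public static void main(String[] args) {
        try {
            String token = getDelegationToken();
            System.out.println("redirecting to a random DN with " + token);
        } catch (IOException e) {
            // The servlet has no token to attach, so the browse link fails.
            System.out.println("browse fails: " + e.getMessage());
        }
    }
}
```

Once the NN leaves safe mode (flip `inSafeMode` to false), the same path succeeds, which matches the observed behavior of the link.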





[jira] [Created] (HDFS-2567) Can't browse HDFS on a fresh NN instance

2011-11-19 Thread Harsh J (Created) (JIRA)
Can't browse HDFS on a fresh NN instance


 Key: HDFS-2567
 URL: https://issues.apache.org/jira/browse/HDFS-2567
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0
Reporter: Harsh J


Trace:

{code}
HTTP ERROR 500

Problem accessing /nn_browsedfscontent.jsp. Reason:

n must be positive
Caused by:

java.lang.IllegalArgumentException: n must be positive
at java.util.Random.nextInt(Random.java:250)
at 
org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:556)
at 
org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:524)
at 
org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.getRandomDatanode(NamenodeJspHelper.java:372)
at 
org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.redirectToRandomDataNode(NamenodeJspHelper.java:383)
at 
org.apache.hadoop.hdfs.server.namenode.nn_005fbrowsedfscontent_jsp._jspService(nn_005fbrowsedfscontent_jsp.java:70)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at 
org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:940)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at 
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at 
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at 
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at 
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at 
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at 
org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
{code}

Steps to reproduce:

1. Start a new NN, freshly formatted.
2. No DNs yet.
3. Visit the DFS browser link {{http://localhost:50070/nn_browsedfscontent.jsp}}
4. The above error appears.
5. {{hdfs dfs -touchz afile}}
6. Re-visit; the same error still appears.

Perhaps it's because no DataNode has been added so far.
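The trace points at the root cause: with zero live DataNodes, choosing a random DN ends up calling `Random.nextInt(0)`, which always throws. A minimal repro of just that mechanism, with a hypothetical helper name standing in for the `getRandomDatanode` path:

```java
import java.util.Random;

public class EmptyPoolRepro {
    // Hypothetical stand-in for picking a random index into the list of
    // live DataNodes (cf. NamenodeJspHelper.getRandomDatanode).
    static int pickRandomIndex(int liveNodeCount) {
        // java.util.Random.nextInt(bound) requires bound > 0; with an empty
        // cluster this is exactly the "n must be positive" failure above.
        return new Random().nextInt(liveNodeCount);
    }

    public static void main(String[] args) {
        try {
            pickRandomIndex(0); // fresh NN, no DataNodes registered yet
            System.out.println("picked a node");
        } catch (IllegalArgumentException e) {
            System.out.println("caught IllegalArgumentException");
        }
    }
}
```

This also explains step 6: creating a file does not help, because the failure depends only on the number of live DataNodes, not on the namespace contents.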





[jira] [Commented] (HDFS-2567) Can't browse HDFS on a fresh NN instance

2011-11-19 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153500#comment-13153500
 ] 

Harsh J commented on HDFS-2567:
---

Yep, adding a DN resolves this, because the NN then immediately redirects. If 
this behavior is intended, we should show a proper error instead.
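The proposed guard can be sketched as follows. This is an illustrative sketch, not the actual patch: the class and method names are hypothetical, though the error message mirrors the one the eventual HDFS-2567 patch reports:

```java
import java.io.IOException;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class RedirectGuardSketch {
    // Hypothetical guard: fail with a readable IOException instead of
    // letting Random.nextInt(0) blow up inside the JSP.
    static String chooseRedirectTarget(List<String> liveDataNodes) throws IOException {
        if (liveDataNodes.isEmpty()) {
            throw new IOException("Can't browse the DFS since there are no live nodes "
                + "available to redirect to.");
        }
        // Normal path: pick a random live DataNode to serve the browse UI.
        return liveDataNodes.get(new Random().nextInt(liveDataNodes.size()));
    }

    public static void main(String[] args) {
        try {
            chooseRedirectTarget(Collections.emptyList()); // no live DNs
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The point is simply to check the live-node count before the random selection, so the user sees a descriptive message rather than "n must be positive".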






[jira] [Commented] (HDFS-2567) Can't browse HDFS on a fresh NN instance

2011-11-19 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153502#comment-13153502
 ] 

Harsh J commented on HDFS-2567:
---

Not a regression, but we can still do with a better error message.






[jira] [Updated] (HDFS-2567) When 0 DNs are available, show a proper error when trying to browse DFS via web UI

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2567:
--

Summary: When 0 DNs are available, show a proper error when trying to 
browse DFS via web UI  (was: Can't browse HDFS on a fresh NN instance)

 When 0 DNs are available, show a proper error when trying to browse DFS via 
 web UI
 --

 Key: HDFS-2567
 URL: https://issues.apache.org/jira/browse/HDFS-2567
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0
Reporter: Harsh J






[jira] [Updated] (HDFS-2567) When 0 DNs are available, show a proper error when trying to browse DFS via web UI

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2567:
--

Fix Version/s: 0.24.0
   Status: Patch Available  (was: Open)

 When 0 DNs are available, show a proper error when trying to browse DFS via 
 web UI
 --

 Key: HDFS-2567
 URL: https://issues.apache.org/jira/browse/HDFS-2567
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0
Reporter: Harsh J
 Fix For: 0.24.0

 Attachments: HDFS-2567.patch



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2567) When 0 DNs are available, show a proper error when trying to browse DFS via web UI

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2567:
--

Attachment: HDFS-2567.patch

New message:

{code}
HTTP ERROR 500

Problem accessing /nn_browsedfscontent.jsp. Reason:

Can't browse the DFS since there are no live nodes available to redirect to.
Caused by:

java.io.IOException: Can't browse the DFS since there are no live nodes 
available to redirect to.
at 
org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.redirectToRandomDataNode(NamenodeJspHelper.java:388)
at 
org.apache.hadoop.hdfs.server.namenode.nn_005fbrowsedfscontent_jsp._jspService(nn_005fbrowsedfscontent_jsp.java:70)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at 
org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:988)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at 
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at 
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at 
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at 
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at 
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at 
org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
{code}

Manually tested on a 0.24-snapshot instance.
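The patch's behavior can be sketched as a guard before picking a random datanode, so the user sees a clear IOException instead of Random.nextInt(0) throwing "n must be positive". Class and method names below are illustrative, not the actual NamenodeJspHelper API:

```java
import java.io.IOException;
import java.util.List;
import java.util.Random;

// Sketch, with hypothetical names: fail early with a readable message when
// there are no live datanodes, instead of asking Random for an index into an
// empty list.
class DatanodeChooser {
    private final Random rand = new Random();

    String chooseRandomLiveNode(List<String> liveNodes) throws IOException {
        if (liveNodes.isEmpty()) {
            throw new IOException(
                "Can't browse the DFS since there are no live nodes available to redirect to.");
        }
        return liveNodes.get(rand.nextInt(liveNodes.size()));
    }
}
```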

 When 0 DNs are available, show a proper error when trying to browse DFS via 
 web UI
 --

 Key: HDFS-2567
 URL: https://issues.apache.org/jira/browse/HDFS-2567
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0
Reporter: Harsh J
 Fix For: 0.24.0

 Attachments: HDFS-2567.patch


 Trace:
 {code}
 HTTP ERROR 500
 Problem accessing /nn_browsedfscontent.jsp. Reason:
 n must be positive
 Caused by:
 java.lang.IllegalArgumentException: n must be positive
   at java.util.Random.nextInt(Random.java:250)
   at 
 org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:556)
   at 
 org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:524)
   at 
 org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.getRandomDatanode(NamenodeJspHelper.java:372)
   at 
 org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.redirectToRandomDataNode(NamenodeJspHelper.java:383)
   at 
 org.apache.hadoop.hdfs.server.namenode.nn_005fbrowsedfscontent_jsp._jspService(nn_005fbrowsedfscontent_jsp.java:70)
   at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
   at 
 org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
   at 
 org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
   at 
 org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:940)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
   at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
   at 
 

[jira] [Commented] (HDFS-2536) FSImageTransactionalStorageInspector has a bunch of unused imports

2011-11-19 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13153545#comment-13153545
 ] 

Harsh J commented on HDFS-2536:
---

Eclipse lets you remove all unused imports in one shot. Good enough to be 
accepted as a big patch?

 FSImageTransactionalStorageInspector has a bunch of unused imports
 --

 Key: HDFS-2536
 URL: https://issues.apache.org/jira/browse/HDFS-2536
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.24.0
Reporter: Aaron T. Myers
Priority: Trivial
  Labels: newbie

 Looks like it has 11 unused imports by my count.

--




[jira] [Commented] (HDFS-2567) When 0 DNs are available, show a proper error when trying to browse DFS via web UI

2011-11-19 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13153544#comment-13153544
 ] 

Hadoop QA commented on HDFS-2567:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12504373/HDFS-2567.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.hdfs.TestDfsOverAvroRpc

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1577//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1577//console

This message is automatically generated.

 When 0 DNs are available, show a proper error when trying to browse DFS via 
 web UI
 --

 Key: HDFS-2567
 URL: https://issues.apache.org/jira/browse/HDFS-2567
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0
Reporter: Harsh J
 Fix For: 0.24.0

 Attachments: HDFS-2567.patch


 Trace:
 {code}
 HTTP ERROR 500
 Problem accessing /nn_browsedfscontent.jsp. Reason:
 n must be positive
 Caused by:
 java.lang.IllegalArgumentException: n must be positive
   at java.util.Random.nextInt(Random.java:250)
   at 
 org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:556)
   at 
 org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:524)
   at 
 org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.getRandomDatanode(NamenodeJspHelper.java:372)
   at 
 org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.redirectToRandomDataNode(NamenodeJspHelper.java:383)
   at 
 org.apache.hadoop.hdfs.server.namenode.nn_005fbrowsedfscontent_jsp._jspService(nn_005fbrowsedfscontent_jsp.java:70)
   at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
   at 
 org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
   at 
 org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
   at 
 org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:940)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
   at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
   at 
 org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
   at 
 org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
   at 
 org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
   at org.mortbay.jetty.Server.handle(Server.java:326)
   at 
 org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
   at 
 org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
   at 
 org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
   at 
 org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
 {code}
 Steps I did to run into this:
 1. Start a new NN, freshly formatted.
 2. No DNs yet.
 3. Visit the DFS browser link 
 {{http://localhost:50070/nn_browsedfscontent.jsp}}
 4. Above error shows itself
 5. {{hdfs 

[jira] [Updated] (HDFS-2536) FSImageTransactionalStorageInspector has a bunch of unused imports

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2536:
--

Attachment: HDFS-2536.patch

Patch that cleans up the 'hadoop-hdfs' project, removing all unused imports.

Note: I've not reorganized imports, just cleaned up unused ones and blank lines 
in between.

 FSImageTransactionalStorageInspector has a bunch of unused imports
 --

 Key: HDFS-2536
 URL: https://issues.apache.org/jira/browse/HDFS-2536
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.24.0
Reporter: Aaron T. Myers
Priority: Trivial
  Labels: newbie
 Attachments: HDFS-2536.patch


 Looks like it has 11 unused imports by my count.

--




[jira] [Commented] (HDFS-2536) FSImageTransactionalStorageInspector has a bunch of unused imports

2011-11-19 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13153550#comment-13153550
 ] 

Harsh J commented on HDFS-2536:
---

Running {{mvn clean package -DskipTests}} passes compilation at least. (Had to 
clean, because otherwise there were issues.)

 FSImageTransactionalStorageInspector has a bunch of unused imports
 --

 Key: HDFS-2536
 URL: https://issues.apache.org/jira/browse/HDFS-2536
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.24.0
Reporter: Aaron T. Myers
Priority: Trivial
  Labels: newbie
 Attachments: HDFS-2536.patch


 Looks like it has 11 unused imports by my count.

--




[jira] [Updated] (HDFS-2536) FSImageTransactionalStorageInspector has a bunch of unused imports

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2536:
--

Fix Version/s: 0.24.0
 Assignee: Harsh J
   Status: Patch Available  (was: Open)

 FSImageTransactionalStorageInspector has a bunch of unused imports
 --

 Key: HDFS-2536
 URL: https://issues.apache.org/jira/browse/HDFS-2536
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.24.0
Reporter: Aaron T. Myers
Assignee: Harsh J
Priority: Trivial
  Labels: newbie
 Fix For: 0.24.0

 Attachments: HDFS-2536.patch


 Looks like it has 11 unused imports by my count.

--




[jira] [Resolved] (HDFS-533) Fix Eclipse template

2011-11-19 Thread Harsh J (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HDFS-533.
--

Resolution: Not A Problem

Not a problem after mavenization, in 0.23+.

Not a problem in 0.22 either, as mentioned by Nicholas.

 Fix Eclipse template
 

 Key: HDFS-533
 URL: https://issues.apache.org/jira/browse/HDFS-533
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 0.21.0
Reporter: Carlos Valiente
Priority: Trivial
 Attachments: HDFS-533.patch


 The entry for the AspectJ runtime does not match the version downloaded by Ivy

--




[jira] [Updated] (HDFS-2536) FSImageTransactionalStorageInspector has a bunch of unused imports

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2536:
--

Attachment: HDFS-2536.FSImageTransactionalStorageInspector.patch

Or, an alternative patch just for the mentioned file.

 FSImageTransactionalStorageInspector has a bunch of unused imports
 --

 Key: HDFS-2536
 URL: https://issues.apache.org/jira/browse/HDFS-2536
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.24.0
Reporter: Aaron T. Myers
Assignee: Harsh J
Priority: Trivial
  Labels: newbie
 Fix For: 0.24.0

 Attachments: HDFS-2536.FSImageTransactionalStorageInspector.patch, 
 HDFS-2536.patch


 Looks like it has 11 unused imports by my count.

--




[jira] [Created] (HDFS-2568) Use a set to manage child sockets in XceiverServer

2011-11-19 Thread Harsh J (Created) (JIRA)
Use a set to manage child sockets in XceiverServer
--

 Key: HDFS-2568
 URL: https://issues.apache.org/jira/browse/HDFS-2568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.24.0
Reporter: Harsh J
Priority: Trivial
 Fix For: 0.24.0


Found while reading up for HDFS-2454: currently we maintain childSockets in a 
DataXceiverServer as a Map<Socket, Socket>. This can very well be a Set<Socket> 
data structure -- since the goal is easy removals.
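The proposed change can be sketched as a small tracker that keeps child sockets in a synchronized Set rather than a Map, since the only operations needed are add, remove, and iterate. The class and method names are illustrative, not the actual DataXceiverServer fields:

```java
import java.net.Socket;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Sketch, with hypothetical names: a Set<Socket> replaces the Map<Socket,
// Socket> whose keys and values were always the same object. The set is
// wrapped for thread safety since xceiver threads register concurrently.
class ChildSocketTracker {
    private final Set<Socket> childSockets =
        Collections.synchronizedSet(new HashSet<Socket>());

    void register(Socket s)   { childSockets.add(s); }
    void unregister(Socket s) { childSockets.remove(s); }

    int openCount() { return childSockets.size(); }
}
```

Note that iteration over a synchronized set (e.g. to close all children on shutdown) still needs an explicit synchronized block around the loop.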

--




[jira] [Updated] (HDFS-2568) Use a set to manage child sockets in XceiverServer

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2568:
--

Attachment: HDFS-2568.patch

 Use a set to manage child sockets in XceiverServer
 --

 Key: HDFS-2568
 URL: https://issues.apache.org/jira/browse/HDFS-2568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.24.0
Reporter: Harsh J
Priority: Trivial
 Fix For: 0.24.0

 Attachments: HDFS-2568.patch


 Found while reading up for HDFS-2454: currently we maintain childSockets in a 
 DataXceiverServer as a Map<Socket, Socket>. This can very well be a 
 Set<Socket> data structure -- since the goal is easy removals.

--




[jira] [Updated] (HDFS-2568) Use a set to manage child sockets in XceiverServer

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2568:
--

Assignee: Harsh J
  Status: Patch Available  (was: Open)

 Use a set to manage child sockets in XceiverServer
 --

 Key: HDFS-2568
 URL: https://issues.apache.org/jira/browse/HDFS-2568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.24.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 0.24.0

 Attachments: HDFS-2568.patch


 Found while reading up for HDFS-2454: currently we maintain childSockets in a 
 DataXceiverServer as a Map<Socket, Socket>. This can very well be a 
 Set<Socket> data structure -- since the goal is easy removals.

--




[jira] [Updated] (HDFS-2568) Use a set to manage child sockets in XceiverServer

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2568:
--

Status: Open  (was: Patch Available)

Some unnecessary changes got through.

 Use a set to manage child sockets in XceiverServer
 --

 Key: HDFS-2568
 URL: https://issues.apache.org/jira/browse/HDFS-2568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.24.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 0.24.0

 Attachments: HDFS-2568.patch


 Found while reading up for HDFS-2454: currently we maintain childSockets in a 
 DataXceiverServer as a Map<Socket, Socket>. This can very well be a 
 Set<Socket> data structure -- since the goal is easy removals.

--




[jira] [Updated] (HDFS-2568) Use a set to manage child sockets in XceiverServer

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2568:
--

Status: Patch Available  (was: Open)

 Use a set to manage child sockets in XceiverServer
 --

 Key: HDFS-2568
 URL: https://issues.apache.org/jira/browse/HDFS-2568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.24.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 0.24.0

 Attachments: HDFS-2568.patch, HDFS-2568.patch


 Found while reading up for HDFS-2454: currently we maintain childSockets in a 
 DataXceiverServer as a Map<Socket, Socket>. This can very well be a 
 Set<Socket> data structure -- since the goal is easy removals.

--




[jira] [Updated] (HDFS-2568) Use a set to manage child sockets in XceiverServer

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2568:
--

Attachment: HDFS-2568.patch

Proper patch. Existing tests should cover the change. We still use a 
synchronized set.

 Use a set to manage child sockets in XceiverServer
 --

 Key: HDFS-2568
 URL: https://issues.apache.org/jira/browse/HDFS-2568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.24.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 0.24.0

 Attachments: HDFS-2568.patch, HDFS-2568.patch


 Found while reading up for HDFS-2454: currently we maintain childSockets in a 
 DataXceiverServer as a Map<Socket, Socket>. This can very well be a 
 Set<Socket> data structure -- since the goal is easy removals.

--




[jira] [Updated] (HDFS-2454) Move maxXceiverCount check to before starting the thread in dataXceiver

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2454:
--

Attachment: HDFS-2454.patch

 Move maxXceiverCount check to before starting the thread in dataXceiver
 ---

 Key: HDFS-2454
 URL: https://issues.apache.org/jira/browse/HDFS-2454
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
Priority: Minor
 Attachments: HDFS-2454.patch


// Make sure the xceiver count is not exceeded
 int curXceiverCount = datanode.getXceiverCount();
 if (curXceiverCount > dataXceiverServer.maxXceiverCount) {
   throw new IOException("xceiverCount " + curXceiverCount
       + " exceeds the limit of concurrent xcievers "
       + dataXceiverServer.maxXceiverCount);
 }
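The issue title asks for this check to run before the xceiver thread starts. A minimal sketch of that idea, with hypothetical names standing in for the DataNode internals, is a limiter consulted by the accepting server before it spawns a thread:

```java
import java.io.IOException;

// Sketch, with hypothetical names: the accepting server calls
// checkAndIncrement() before creating a DataXceiver thread, so an
// over-the-limit connection is rejected up front rather than from inside the
// already-started thread. Not the actual DataNode API.
class XceiverLimiter {
    private final int maxXceiverCount;
    private int curXceiverCount;

    XceiverLimiter(int maxXceiverCount) {
        this.maxXceiverCount = maxXceiverCount;
    }

    synchronized void checkAndIncrement() throws IOException {
        if (curXceiverCount >= maxXceiverCount) {
            throw new IOException("xceiverCount " + curXceiverCount
                + " exceeds the limit of concurrent xcievers " + maxXceiverCount);
        }
        curXceiverCount++;
    }

    // Called when an xceiver finishes, freeing a slot.
    synchronized void decrement() { curXceiverCount--; }
}
```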

--




[jira] [Updated] (HDFS-2454) Move maxXceiverCount check to before starting the thread in dataXceiver

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2454:
--

Assignee: Harsh J  (was: Uma Maheswara Rao G)
  Status: Patch Available  (was: Open)

 Move maxXceiverCount check to before starting the thread in dataXceiver
 ---

 Key: HDFS-2454
 URL: https://issues.apache.org/jira/browse/HDFS-2454
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Reporter: Uma Maheswara Rao G
Assignee: Harsh J
Priority: Minor
 Attachments: HDFS-2454.patch


// Make sure the xceiver count is not exceeded
 int curXceiverCount = datanode.getXceiverCount();
 if (curXceiverCount > dataXceiverServer.maxXceiverCount) {
   throw new IOException("xceiverCount " + curXceiverCount
       + " exceeds the limit of concurrent xcievers "
       + dataXceiverServer.maxXceiverCount);
 }

--




[jira] [Commented] (HDFS-1803) Display progress as FSimage is being loaded

2011-11-19 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13153562#comment-13153562
 ] 

Harsh J commented on HDFS-1803:
---

The FSImage loader is so chunky that it's difficult to place this in a common 
area. Perhaps we could display what stage we are in instead (inodes, DNs, 
under-construction files, etc.)? Or the same as percentages.
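The "print every additional percent" idea from the issue description can be sketched as a tiny reporter that only emits a line when the loaded percentage advances. This is purely illustrative and does not reflect the FSImage loader's actual structure:

```java
// Sketch, with hypothetical names: report "Loaded x% of the image" exactly
// once per whole percent, so a large image load prints at most 100 lines.
class ProgressReporter {
    private final long total;
    private int lastPercent = -1;

    ProgressReporter(long total) { this.total = total; }

    // Returns a message when a new whole percent is reached, else null.
    String update(long loaded) {
        int percent = (int) (loaded * 100 / total);
        if (percent > lastPercent) {
            lastPercent = percent;
            return "Loaded " + percent + "% of the image";
        }
        return null;
    }
}
```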

 Display progress as FSimage is being loaded
 ---

 Key: HDFS-1803
 URL: https://issues.apache.org/jira/browse/HDFS-1803
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Dmytro Molkov
Assignee: Dmytro Molkov
Priority: Trivial

 In a large cluster the image takes quite a while to load. Right now there is 
 no indication of what the progress is.
 I propose a small patch that would simply print a message every time one more 
 percent of the image is loaded saying Loaded x% of the image

--




[jira] [Commented] (HDFS-2536) FSImageTransactionalStorageInspector has a bunch of unused imports

2011-11-19 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13153569#comment-13153569
 ] 

Hadoop QA commented on HDFS-2536:
-

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12504385/HDFS-2536.FSImageTransactionalStorageInspector.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.hdfs.TestDfsOverAvroRpc

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1578//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1578//console

This message is automatically generated.

 FSImageTransactionalStorageInspector has a bunch of unused imports
 --

 Key: HDFS-2536
 URL: https://issues.apache.org/jira/browse/HDFS-2536
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.24.0
Reporter: Aaron T. Myers
Assignee: Harsh J
Priority: Trivial
  Labels: newbie
 Fix For: 0.24.0

 Attachments: HDFS-2536.FSImageTransactionalStorageInspector.patch, 
 HDFS-2536.patch


 Looks like it has 11 unused imports by my count.

--




[jira] [Commented] (HDFS-79) Uncaught Exception in DataTransfer.run

2011-11-19 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-79?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13153570#comment-13153570
 ] 

Harsh J commented on HDFS-79:
-

This would be a RuntimeException (an IllegalStateException derivative). The docs 
say that this exception is unchecked.

I wonder if we see it anymore though?

We could intercept it, but not sure if the behavior is alright this way already.
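Intercepting it could be sketched as wrapping the transfer body so an unchecked exception like ClosedSelectorException is logged instead of killing the thread through the default handler (which is what lands it in the datanode's .out file). Names below are illustrative, not the actual DataNode$DataTransfer code:

```java
// Sketch, with hypothetical names: catch RuntimeException at the top of
// run() so the failure is logged and the thread exits cleanly, rather than
// surfacing as "Exception in thread ..." via the default uncaught-exception
// handler.
class DataTransferTask implements Runnable {
    private final Runnable transfer;

    DataTransferTask(Runnable transfer) { this.transfer = transfer; }

    @Override
    public void run() {
        try {
            transfer.run();
        } catch (RuntimeException e) {
            // Log and swallow instead of letting the thread die uncaught.
            System.err.println("DataTransfer failed: " + e);
        }
    }
}
```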

 Uncaught Exception in DataTransfer.run
 --

 Key: HDFS-79
 URL: https://issues.apache.org/jira/browse/HDFS-79
 Project: Hadoop HDFS
  Issue Type: Bug
 Environment: 17.0 + H1979-H2159-H3442
Reporter: Koji Noguchi
Priority: Minor

 Minor, but it would be nice if this exception is caught and logged.
 I see in .out file of datanode, 
 {noformat}
 Exception in thread org.apache.hadoop.dfs.DataNode$DataTransfer@9d2805 
 java.nio.channels.ClosedSelectorException
 at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:66)
 at sun.nio.ch.SelectorImpl.selectNow(SelectorImpl.java:88)
 at sun.nio.ch.Util.releaseTemporarySelector(Util.java:135)
 at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:118)
 at org.apache.hadoop.dfs.DataNode$DataTransfer.run(DataNode.java:2604)
 at java.lang.Thread.run(Thread.java:619)
 {noformat}

--




[jira] [Resolved] (HDFS-300) NameNode to blat total number of files and blocks

2011-11-19 Thread Harsh J (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HDFS-300.
--

Resolution: Not A Problem

Not a problem anymore (perhaps since the branch-0.20-security pickup).

From a sample /jmx metrics output:

{code}
{
  "name" : "hadoop:service=NameNode,name=FSNamesystemState",
  "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
  "CapacityTotal" : 999527776256,
  "CapacityUsed" : 413696,
  "CapacityRemaining" : 732080799744,
  "TotalLoad" : 1,
  "BlocksTotal" : 29,
  "FilesTotal" : 70,
  "PendingReplicationBlocks" : 0,
  "UnderReplicatedBlocks" : 0,
  "ScheduledReplicationBlocks" : 0,
  "FSState" : "safeMode"
}
{code}

*BlocksTotal : 29*
*FilesTotal : 70*

 NameNode to blat total number of files and blocks
 -

 Key: HDFS-300
 URL: https://issues.apache.org/jira/browse/HDFS-300
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Marco Nicosia
Priority: Minor

 Right now, the namenode reports lots of rates (block read per sec, removed 
 per sec, etc etc) but it doesn't actually report how many files and blocks 
 total exist in the system. It'd be great if we could have this, so that our 
 reporting systems can show the growth trends over time.





[jira] [Commented] (HDFS-336) dfsadmin -report should report number of blocks from datanode

2011-11-19 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153573#comment-13153573
 ] 

Harsh J commented on HDFS-336:
--

For anyone who'd like to attempt this:

This can be done by promoting the numBlocks method and field from the 
DatanodeDescriptor class to its superclass DatanodeInfo (which is what is 
exposed out, for the report and such).

From then on, it's as easy as printing it as part of the report, from within 
DatanodeInfo.
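As a hedged sketch of that promotion (the classes below are simplified stand-ins for the real DatanodeInfo/DatanodeDescriptor, and getDatanodeReport is a made-up placeholder for the actual report formatting, not HDFS code):

```java
// Stand-in superclass: once numBlocks lives here, report code that only
// sees DatanodeInfo can print it.
class DatanodeInfo {
    protected int numBlocks;          // promoted from the subclass

    public int getNumBlocks() { return numBlocks; }

    public String getDatanodeReport() {
        return "Blocks: " + numBlocks; // now printable from the report
    }
}

// Stand-in subclass: block bookkeeping stays here.
class DatanodeDescriptor extends DatanodeInfo {
    void addBlock() { numBlocks++; }
}

public class ReportSketch {
    public static void main(String[] args) {
        DatanodeDescriptor dn = new DatanodeDescriptor();
        dn.addBlock();
        dn.addBlock();
        DatanodeInfo info = dn;       // report code only sees the superclass
        System.out.println(info.getDatanodeReport()); // → Blocks: 2
    }
}
```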

 dfsadmin -report should report number of blocks from datanode
 -

 Key: HDFS-336
 URL: https://issues.apache.org/jira/browse/HDFS-336
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Lohit Vijayarenu
Priority: Minor
  Labels: newbie

 _hadoop dfsadmin -report_ seems to miss number of blocks from a datanode. 
 Number of blocks hosted by a datanode is a good info which should be included 
 in the report. 





[jira] [Commented] (HDFS-336) dfsadmin -report should report number of blocks from datanode

2011-11-19 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153574#comment-13153574
 ] 

Harsh J commented on HDFS-336:
--

Btw, we expose the number of blocks via metrics today. This is mostly just good 
for admins wanting to see CLI outputs; otherwise one can safely rely on the 
exposed metrics.

Unsure if it's worth the addition, but it's surely a good thing to have.

 dfsadmin -report should report number of blocks from datanode
 -

 Key: HDFS-336
 URL: https://issues.apache.org/jira/browse/HDFS-336
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Lohit Vijayarenu
Priority: Minor
  Labels: newbie

 _hadoop dfsadmin -report_ seems to miss number of blocks from a datanode. 
 Number of blocks hosted by a datanode is a good info which should be included 
 in the report. 





[jira] [Resolved] (HDFS-263) fsck -files -blocks -locations is a little slow

2011-11-19 Thread Harsh J (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HDFS-263.
--

Resolution: Cannot Reproduce

Doesn't look like there's a problem today:

1339947 files and directories, 1345767 blocks:

{code}
[harsh@01 ~]$ time sudo -u hdfs hadoop fsck / > /dev/null

real    0m25.126s
user    0m1.972s
sys     0m0.268s
[harsh@01 ~]$ time sudo -u hdfs hadoop fsck / -files > /dev/null

real    0m26.307s
user    0m4.376s
sys     0m0.602s
[harsh@01 ~]$ time sudo -u hdfs hadoop fsck / -files -blocks -locations > /dev/null

real    0m27.712s
user    0m6.900s
sys     0m1.126s
{code}

 fsck -files -blocks -locations is a little slow
 ---

 Key: HDFS-263
 URL: https://issues.apache.org/jira/browse/HDFS-263
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Koji Noguchi
Assignee: Lohit Vijayarenu
Priority: Minor

 fsck on one subdirectory. 
 about 50,000 files (50,000 blocks) 
 fsck /user/aaa: 3 seconds 
 fsck /user/aaa -files: 30 seconds 
 fsck /user/aaa -files -blocks -locations: 90 seconds. 
 It depends on the network, but could it be a little faster?





[jira] [Assigned] (HDFS-442) dfsthroughput in test.jar throws NPE

2011-11-19 Thread Harsh J (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J reassigned HDFS-442:


Assignee: Harsh J

 dfsthroughput in test.jar throws NPE
 

 Key: HDFS-442
 URL: https://issues.apache.org/jira/browse/HDFS-442
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.20.1
Reporter: Ramya Sunil
Assignee: Harsh J
Priority: Minor

 On running "hadoop jar hadoop-test.jar dfsthroughput" OR "hadoop 
 org.apache.hadoop.hdfs.BenchmarkThroughput", we get a NullPointerException. 
 Below is the stacktrace:
 {noformat}
 Exception in thread "main" java.lang.NullPointerException
 at java.util.Hashtable.put(Hashtable.java:394)
 at java.util.Properties.setProperty(Properties.java:143)
 at java.lang.System.setProperty(System.java:731)
 at 
 org.apache.hadoop.hdfs.BenchmarkThroughput.run(BenchmarkThroughput.java:198)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
 at 
 org.apache.hadoop.hdfs.BenchmarkThroughput.main(BenchmarkThroughput.java:229)
 {noformat}





[jira] [Updated] (HDFS-442) dfsthroughput in test.jar throws NPE

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-442:
-

Fix Version/s: 0.24.0
   Status: Patch Available  (was: Open)

 dfsthroughput in test.jar throws NPE
 

 Key: HDFS-442
 URL: https://issues.apache.org/jira/browse/HDFS-442
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.20.1
Reporter: Ramya Sunil
Assignee: Harsh J
Priority: Minor
 Fix For: 0.24.0

 Attachments: HDFS-442.patch


 On running "hadoop jar hadoop-test.jar dfsthroughput" OR "hadoop 
 org.apache.hadoop.hdfs.BenchmarkThroughput", we get a NullPointerException. 
 Below is the stacktrace:
 {noformat}
 Exception in thread "main" java.lang.NullPointerException
 at java.util.Hashtable.put(Hashtable.java:394)
 at java.util.Properties.setProperty(Properties.java:143)
 at java.lang.System.setProperty(System.java:731)
 at 
 org.apache.hadoop.hdfs.BenchmarkThroughput.run(BenchmarkThroughput.java:198)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
 at 
 org.apache.hadoop.hdfs.BenchmarkThroughput.main(BenchmarkThroughput.java:229)
 {noformat}





[jira] [Assigned] (HDFS-69) Improve dfsadmin command line help

2011-11-19 Thread Harsh J (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-69?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J reassigned HDFS-69:
---

Assignee: Harsh J

 Improve dfsadmin command line help 
 ---

 Key: HDFS-69
 URL: https://issues.apache.org/jira/browse/HDFS-69
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ravi Phulari
Assignee: Harsh J
Priority: Minor
 Attachments: HDFS-69.patch


 Enhance the dfsadmin command line help, informing that "A quota of one forces 
 a directory to remain empty". 





[jira] [Updated] (HDFS-69) Improve dfsadmin command line help

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-69?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-69:


Attachment: HDFS-69.patch

Ravi and Jakob's comments are both addressed in this trunk docfix patch.

No tests should be required as these are merely print output / documentation 
changes. Nothing incompatible either.

 Improve dfsadmin command line help 
 ---

 Key: HDFS-69
 URL: https://issues.apache.org/jira/browse/HDFS-69
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ravi Phulari
Priority: Minor
 Attachments: HDFS-69.patch


 Enhance the dfsadmin command line help, informing that "A quota of one forces 
 a directory to remain empty". 





[jira] [Resolved] (HDFS-6) in FSNamesystem.registerDatanode, dnAddress should be resolved (rarely occured)

2011-11-19 Thread Harsh J (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HDFS-6.


Resolution: Won't Fix

This is mostly caused by improper host resolution environments (I see this 
cause "disallowed" exceptions a lot of the time on Cloudera's scm-users list, 
for example, where the users have badly formed /etc/hosts files [FQDN second 
instead of first, etc.]).

The easiest fix is to repair your environment to have saner resolution that 
does not lead to this, rather than having the code resolve it.

I do not see this problem with consistent host resolution setups.
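For illustration, the /etc/hosts shape being described (addresses and names here are placeholders, not taken from this issue):

```
# Badly formed: short name first, so reverse lookups may yield the short name
192.0.2.10   node1  node1.example.com

# Saner: fully-qualified name first
192.0.2.10   node1.example.com  node1
```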

Resolving as Won't Fix, but do reopen if there's a strong argument for why the 
workaround _has_ to be done in code. Patches would be welcome in that case 
too :)

 in FSNamesystem.registerDatanode, dnAddress should be resolved (rarely 
 occured)
 ---

 Key: HDFS-6
 URL: https://issues.apache.org/jira/browse/HDFS-6
 Project: Hadoop HDFS
  Issue Type: Bug
 Environment: CentOS 5.2, JDK 1.6
Reporter: Wang Xu
Priority: Minor
   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 In FSNamesystem.java registerDatanode(), if the datanode address cannot be
 obtained from the RPC server, it will use the one from the datanode report:
 String dnAddress = Server.getRemoteAddress();
 if (dnAddress == null) {
   // Mostly called inside an RPC.
   // But if not, use address passed by the data-node.
   dnAddress = nodeReg.getHost();
 }  
 The getHost() may return the hostname or address, while 
 Server.getRemoteAddress() 
 will return the IP address, which is what dnAddress should be. Thus I think 
 it should be
 if (dnAddress == null) {
   // Mostly called inside an RPC.
   // But if not, use address passed by the data-node.
   dnAddress = InetAddress.getByName(nodeReg.getHost()).getHostAddress();
 }  
 I know it should not be called in most situations, but I indeed use that, and 
 I suppose 
 dnAddress should be an IP address.
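The proposed resolution call can be exercised in isolation; a minimal sketch, using "localhost" purely as a stand-in host (not from this issue):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveSketch {
    public static void main(String[] args) throws UnknownHostException {
        // Resolve a hostname to its numeric address, as the suggested fix
        // does for nodeReg.getHost(). "localhost" is a placeholder here.
        String dnAddress =
            InetAddress.getByName("localhost").getHostAddress();
        System.out.println(dnAddress);
    }
}
```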





[jira] [Updated] (HDFS-557) 0.20 HDFS documentation for dfsadmin is using bin/hadoop instead of bin/hdfs

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-557:
-

Attachment: HDFS-557.patch

Here are s/hadoop /hdfs /g updates for the HDFS docs, covering all references 
I could manage to read and find.

 0.20 HDFS documentation for dfsadmin is using bin/hadoop instead of bin/hdfs
 

 Key: HDFS-557
 URL: https://issues.apache.org/jira/browse/HDFS-557
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Boris Shkolnik
Priority: Minor
 Attachments: HDFS-557.patch, file_system_shell.pdf, 
 file_system_shell_2.pdf, hdfs_user_guide.pdf


 forest documentation is using bin/hadoop for dfsadmin command help instead of 
 bin/hdfs





[jira] [Assigned] (HDFS-557) 0.20 HDFS documentation for dfsadmin is using bin/hadoop instead of bin/hdfs

2011-11-19 Thread Harsh J (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J reassigned HDFS-557:


Assignee: Harsh J

 0.20 HDFS documentation for dfsadmin is using bin/hadoop instead of bin/hdfs
 

 Key: HDFS-557
 URL: https://issues.apache.org/jira/browse/HDFS-557
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Boris Shkolnik
Assignee: Harsh J
Priority: Minor
 Attachments: HDFS-557.patch, file_system_shell.pdf, 
 file_system_shell_2.pdf, hdfs_user_guide.pdf


 forest documentation is using bin/hadoop for dfsadmin command help instead of 
 bin/hdfs





[jira] [Commented] (HDFS-2454) Move maxXceiverCount check to before starting the thread in dataXceiver

2011-11-19 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153593#comment-13153593
 ] 

Hadoop QA commented on HDFS-2454:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12504390/HDFS-2454.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.hdfs.TestDfsOverAvroRpc

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1581//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1581//console

This message is automatically generated.

 Move maxXceiverCount check to before starting the thread in dataXceiver
 ---

 Key: HDFS-2454
 URL: https://issues.apache.org/jira/browse/HDFS-2454
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Reporter: Uma Maheswara Rao G
Assignee: Harsh J
Priority: Minor
 Attachments: HDFS-2454.patch


// Make sure the xceiver count is not exceeded
 int curXceiverCount = datanode.getXceiverCount();
 if (curXceiverCount > dataXceiverServer.maxXceiverCount) {
   throw new IOException("xceiverCount " + curXceiverCount
       + " exceeds the limit of concurrent xcievers "
       + dataXceiverServer.maxXceiverCount);
 }





[jira] [Updated] (HDFS-2502) hdfs-default.xml should include dfs.name.dir.restore

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2502:
--

Attachment: HDFS-2502.patch

 hdfs-default.xml should include dfs.name.dir.restore
 

 Key: HDFS-2502
 URL: https://issues.apache.org/jira/browse/HDFS-2502
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.23.0
Reporter: Eli Collins
Priority: Minor
  Labels: noob
 Fix For: 0.24.0

 Attachments: HDFS-2502.patch








[jira] [Updated] (HDFS-2502) hdfs-default.xml should include dfs.name.dir.restore

2011-11-19 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2502:
--

Fix Version/s: 0.24.0
 Assignee: Harsh J
   Status: Patch Available  (was: Open)

 hdfs-default.xml should include dfs.name.dir.restore
 

 Key: HDFS-2502
 URL: https://issues.apache.org/jira/browse/HDFS-2502
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.23.0
Reporter: Eli Collins
Assignee: Harsh J
Priority: Minor
  Labels: noob
 Fix For: 0.24.0

 Attachments: HDFS-2502.patch








[jira] [Commented] (HDFS-2568) Use a set to manage child sockets in XceiverServer

2011-11-19 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153594#comment-13153594
 ] 

Hadoop QA commented on HDFS-2568:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12504388/HDFS-2568.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.hdfs.TestDfsOverAvroRpc

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1580//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1580//console

This message is automatically generated.

 Use a set to manage child sockets in XceiverServer
 --

 Key: HDFS-2568
 URL: https://issues.apache.org/jira/browse/HDFS-2568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.24.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 0.24.0

 Attachments: HDFS-2568.patch, HDFS-2568.patch


 Found while reading up for HDFS-2454: currently we maintain childSockets in a 
 DataXceiverServer as a Map<Socket,Socket>. This can very well be a 
 Set<Socket> data structure -- since the goal is easy removals.
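A hedged illustration of the proposed data-structure change (Strings stand in for Sockets; this is not the actual DataXceiverServer code):

```java
import java.util.HashSet;
import java.util.Set;

public class ChildSocketSketch {
    public static void main(String[] args) {
        // A Set gives the same O(1) add/remove that the Map<Socket,Socket>
        // provided, without storing each socket twice as both key and value.
        Set<String> childSockets = new HashSet<>();
        childSockets.add("socket-1");
        childSockets.add("socket-2");
        childSockets.remove("socket-1"); // easy removal, the stated goal
        System.out.println(childSockets.size()); // → 1
    }
}
```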





[jira] [Commented] (HDFS-2502) hdfs-default.xml should include dfs.name.dir.restore

2011-11-19 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153608#comment-13153608
 ] 

Hadoop QA commented on HDFS-2502:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12504396/HDFS-2502.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+0 tests included.  The patch appears to be a documentation patch that 
doesn't require tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.hdfs.TestDfsOverAvroRpc

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1583//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1583//console

This message is automatically generated.

 hdfs-default.xml should include dfs.name.dir.restore
 

 Key: HDFS-2502
 URL: https://issues.apache.org/jira/browse/HDFS-2502
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.23.0
Reporter: Eli Collins
Assignee: Harsh J
Priority: Minor
  Labels: noob
 Fix For: 0.24.0

 Attachments: HDFS-2502.patch








[jira] [Commented] (HDFS-442) dfsthroughput in test.jar throws NPE

2011-11-19 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153610#comment-13153610
 ] 

Hadoop QA commented on HDFS-442:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12504392/HDFS-442.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.hdfs.TestDfsOverAvroRpc

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1582//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1582//console

This message is automatically generated.

 dfsthroughput in test.jar throws NPE
 

 Key: HDFS-442
 URL: https://issues.apache.org/jira/browse/HDFS-442
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.20.1
Reporter: Ramya Sunil
Assignee: Harsh J
Priority: Minor
 Fix For: 0.24.0

 Attachments: HDFS-442.patch


 On running "hadoop jar hadoop-test.jar dfsthroughput" OR "hadoop 
 org.apache.hadoop.hdfs.BenchmarkThroughput", we get a NullPointerException. 
 Below is the stacktrace:
 {noformat}
 Exception in thread "main" java.lang.NullPointerException
 at java.util.Hashtable.put(Hashtable.java:394)
 at java.util.Properties.setProperty(Properties.java:143)
 at java.lang.System.setProperty(System.java:731)
 at 
 org.apache.hadoop.hdfs.BenchmarkThroughput.run(BenchmarkThroughput.java:198)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
 at 
 org.apache.hadoop.hdfs.BenchmarkThroughput.main(BenchmarkThroughput.java:229)
 {noformat}





[jira] [Updated] (HDFS-2566) Move BPOfferService to be a non-inner class

2011-11-19 Thread Todd Lipcon (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-2566:
--

Attachment: hdfs-2566.txt

On the above run, TestDistributedUpgrade timed out, but it seems to pass 
reliably here. Reuploading the patch with a 2-minute timeout on this test so 
we'll get logs if it reproduces on Hudson.

 Move BPOfferService to be a non-inner class
 ---

 Key: HDFS-2566
 URL: https://issues.apache.org/jira/browse/HDFS-2566
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.23.1
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hdfs-2566.txt, hdfs-2566.txt


 Rounding out the cleanup of BPOfferService, it would be good to move it to 
 its own file, so it's no longer an inner class. DataNode.java is really large 
 and hard to navigate. BPOfferService itself is ~700 lines, so seems like a 
 large enough unit to merit its own file.





[jira] [Commented] (HDFS-2246) Shortcut a local client reads to a Datanodes files directly

2011-11-19 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153623#comment-13153623
 ] 

Todd Lipcon commented on HDFS-2246:
---

I agree with Eli. What's the point in defining a policy around maintenance 
releases if we break it so quickly? I'm -1 on a patch going into 0.20 until 
there's one in trunk, and I think the initial code reviews should be happening 
on trunk, followed by a backport.

 Shortcut a local client reads to a Datanodes files directly
 ---

 Key: HDFS-2246
 URL: https://issues.apache.org/jira/browse/HDFS-2246
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Sanjay Radia
 Attachments: 0001-HDFS-347.-Local-reads.patch, 
 HDFS-2246-branch-0.20-security-205.1.patch, 
 HDFS-2246-branch-0.20-security-205.2.patch, 
 HDFS-2246-branch-0.20-security-205.patch, 
 HDFS-2246-branch-0.20-security-205.patch, 
 HDFS-2246-branch-0.20-security-205.patch, 
 HDFS-2246-branch-0.20-security.3.patch, 
 HDFS-2246-branch-0.20-security.no-softref.patch, 
 HDFS-2246-branch-0.20-security.patch, HDFS-2246-branch-0.20-security.patch, 
 HDFS-2246-branch-0.20-security.patch, HDFS-2246-trunk.patch, 
 HDFS-2246-trunk.patch, HDFS-2246.20s.1.patch, HDFS-2246.20s.2.txt, 
 HDFS-2246.20s.3.txt, HDFS-2246.20s.4.txt, HDFS-2246.20s.patch, 
 TestShortCircuitLocalRead.java, localReadShortcut20-security.2patch








[jira] [Commented] (HDFS-2566) Move BPOfferService to be a non-inner class

2011-11-19 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153625#comment-13153625
 ] 

Hadoop QA commented on HDFS-2566:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12504404/hdfs-2566.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 15 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1584//console

This message is automatically generated.

 Move BPOfferService to be a non-inner class
 ---

 Key: HDFS-2566
 URL: https://issues.apache.org/jira/browse/HDFS-2566
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.23.1
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hdfs-2566.txt, hdfs-2566.txt


 Rounding out the cleanup of BPOfferService, it would be good to move it to 
 its own file, so it's no longer an inner class. DataNode.java is really large 
 and hard to navigate. BPOfferService itself is ~700 lines, so seems like a 
 large enough unit to merit its own file.





[jira] [Updated] (HDFS-2566) Move BPOfferService to be a non-inner class

2011-11-19 Thread Todd Lipcon (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-2566:
--

Attachment: hdfs-2566.txt

New rev: this conflicted with HDFS-2564 in one place (trivial whitespace 
conflict).

 Move BPOfferService to be a non-inner class
 ---

 Key: HDFS-2566
 URL: https://issues.apache.org/jira/browse/HDFS-2566
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.23.1
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hdfs-2566.txt, hdfs-2566.txt, hdfs-2566.txt


 Rounding out the cleanup of BPOfferService, it would be good to move it to 
 its own file, so it's no longer an inner class. DataNode.java is really large 
 and hard to navigate. BPOfferService itself is ~700 lines, so seems like a 
 large enough unit to merit its own file.





[jira] [Commented] (HDFS-2566) Move BPOfferService to be a non-inner class

2011-11-19 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153648#comment-13153648
 ] 

Hadoop QA commented on HDFS-2566:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12504407/hdfs-2566.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 15 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.hdfs.TestDfsOverAvroRpc
  
org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1585//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1585//console

This message is automatically generated.

 Move BPOfferService to be a non-inner class
 ---

 Key: HDFS-2566
 URL: https://issues.apache.org/jira/browse/HDFS-2566
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.23.1
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hdfs-2566.txt, hdfs-2566.txt, hdfs-2566.txt


 Rounding out the cleanup of BPOfferService, it would be good to move it to 
 its own file, so it's no longer an inner class. DataNode.java is really large 
 and hard to navigate. BPOfferService itself is ~700 lines, so seems like a 
 large enough unit to merit its own file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2541) For a sufficiently large value of blocks, the DN Scanner may request a random number with a negative seed value.

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153649#comment-13153649
 ] 

Hudson commented on HDFS-2541:
--

Integrated in Hadoop-Hdfs-trunk-Commit #1361 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1361/])
HDFS-2541. For a sufficiently large value of blocks, the DN Scanner may 
request a random number with a negative seed value. Contributed by Harsh J

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204114
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java


 For a sufficiently large value of blocks, the DN Scanner may request a random 
 number with a negative seed value.
 

 Key: HDFS-2541
 URL: https://issues.apache.org/jira/browse/HDFS-2541
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.20.1
Reporter: Harsh J
Assignee: Harsh J
 Fix For: 0.20.206.0, 0.23.1

 Attachments: BSBugTest.java, HDFS-2541.patch


 Running off 0.20-security, I noticed that one could get the following 
 exception when scanners are used:
 {code}
 DataXceiver 
 java.lang.IllegalArgumentException: n must be positive 
 at java.util.Random.nextInt(Random.java:250) 
 at 
 org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNewBlockScanTime(DataBlockScanner.java:251)
  
 at 
 org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.addBlock(DataBlockScanner.java:268)
  
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:432)
  
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:122)
 {code}
 This is because the period, determined in the DataBlockScanner (0.20+) or 
 BlockPoolSliceScanner (0.23+), is cast to an integer before it is passed to a 
 Random.nextInt(...) call. For sufficiently large values of the long 'period', 
 the cast integer may be negative, which is not accounted for. I'll attach a 
 sample test that demonstrates this possibility with concrete numbers.
 We should apply Math.abs(...) before passing the value to the 
 Random.nextInt(...) call to avoid this.
 With this bug, the maximum # of blocks a scanner may hold in its blocksMap 
 without risking this exception (intermittent, as blocks continue to grow) is 
 3582718.
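
 The failure mode described above can be sketched in a few lines of plain Java 
 (an illustrative example, not the actual scanner code; the names here are 
 hypothetical):
 {code}
 import java.util.Random;

 public class NegativePeriodDemo {
     public static void main(String[] args) {
         // A period larger than Integer.MAX_VALUE...
         long period = (1L << 31) + 5;
         // ...wraps to a negative value when narrowed to int.
         int cast = (int) period;              // -2147483643
         // new Random().nextInt(cast) would now throw
         // IllegalArgumentException: n must be positive.

         // The proposed fix: take the absolute value first.
         // (Caveat: Math.abs(Integer.MIN_VALUE) is itself still negative,
         // so a fully robust fix would clamp the long before the cast.)
         int bound = Math.abs(cast);
         int sample = new Random().nextInt(bound);
         System.out.println(sample >= 0 && sample < bound);  // true
     }
 }
 {code}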

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2541) For a sufficiently large value of blocks, the DN Scanner may request a random number with a negative seed value.

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153652#comment-13153652
 ] 

Hudson commented on HDFS-2541:
--

Integrated in Hadoop-Common-trunk-Commit #1287 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1287/])
HDFS-2541. For a sufficiently large value of blocks, the DN Scanner may 
request a random number with a negative seed value. Contributed by Harsh J

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204114
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java


 For a sufficiently large value of blocks, the DN Scanner may request a random 
 number with a negative seed value.
 

 Key: HDFS-2541
 URL: https://issues.apache.org/jira/browse/HDFS-2541
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.20.1
Reporter: Harsh J
Assignee: Harsh J
 Fix For: 0.20.206.0, 0.23.1

 Attachments: BSBugTest.java, HDFS-2541.patch


 Running off 0.20-security, I noticed that one could get the following 
 exception when scanners are used:
 {code}
 DataXceiver 
 java.lang.IllegalArgumentException: n must be positive 
 at java.util.Random.nextInt(Random.java:250) 
 at 
 org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNewBlockScanTime(DataBlockScanner.java:251)
  
 at 
 org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.addBlock(DataBlockScanner.java:268)
  
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:432)
  
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:122)
 {code}
 This is because the period, determined in the DataBlockScanner (0.20+) or 
 BlockPoolSliceScanner (0.23+), is cast to an integer before it is passed to a 
 Random.nextInt(...) call. For sufficiently large values of the long 'period', 
 the cast integer may be negative, which is not accounted for. I'll attach a 
 sample test that demonstrates this possibility with concrete numbers.
 We should apply Math.abs(...) before passing the value to the 
 Random.nextInt(...) call to avoid this.
 With this bug, the maximum # of blocks a scanner may hold in its blocksMap 
 without risking this exception (intermittent, as blocks continue to grow) is 
 3582718.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2541) For a sufficiently large value of blocks, the DN Scanner may request a random number with a negative seed value.

2011-11-19 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-2541:
--

   Resolution: Fixed
Fix Version/s: (was: 0.24.0)
   0.23.1
   0.20.206.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Test failures are unrelated. I committed this and merged to 23 and 206 since 
this is low risk and we've seen this bug in the wild. Thanks Harsh!

 For a sufficiently large value of blocks, the DN Scanner may request a random 
 number with a negative seed value.
 

 Key: HDFS-2541
 URL: https://issues.apache.org/jira/browse/HDFS-2541
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.20.1
Reporter: Harsh J
Assignee: Harsh J
 Fix For: 0.20.206.0, 0.23.1

 Attachments: BSBugTest.java, HDFS-2541.patch


 Running off 0.20-security, I noticed that one could get the following 
 exception when scanners are used:
 {code}
 DataXceiver 
 java.lang.IllegalArgumentException: n must be positive 
 at java.util.Random.nextInt(Random.java:250) 
 at 
 org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNewBlockScanTime(DataBlockScanner.java:251)
  
 at 
 org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.addBlock(DataBlockScanner.java:268)
  
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:432)
  
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:122)
 {code}
 This is because the period, determined in the DataBlockScanner (0.20+) or 
 BlockPoolSliceScanner (0.23+), is cast to an integer before it is passed to a 
 Random.nextInt(...) call. For sufficiently large values of the long 'period', 
 the cast integer may be negative, which is not accounted for. I'll attach a 
 sample test that demonstrates this possibility with concrete numbers.
 We should apply Math.abs(...) before passing the value to the 
 Random.nextInt(...) call to avoid this.
 With this bug, the maximum # of blocks a scanner may hold in its blocksMap 
 without risking this exception (intermittent, as blocks continue to grow) is 
 3582718.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-2502) hdfs-default.xml should include dfs.name.dir.restore

2011-11-19 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-2502:
--

   Resolution: Fixed
Fix Version/s: (was: 0.24.0)
   0.23.1
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this and merged to 23. Thanks Harsh!

 hdfs-default.xml should include dfs.name.dir.restore
 

 Key: HDFS-2502
 URL: https://issues.apache.org/jira/browse/HDFS-2502
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.23.0
Reporter: Eli Collins
Assignee: Harsh J
Priority: Minor
  Labels: noob
 Fix For: 0.23.1

 Attachments: HDFS-2502.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2502) hdfs-default.xml should include dfs.name.dir.restore

2011-11-19 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153657#comment-13153657
 ] 

Eli Collins commented on HDFS-2502:
---

Btw I removed the last bit of the comment about requiring a SNN as the NN will 
attempt storage restore even when run w/o one.

 hdfs-default.xml should include dfs.name.dir.restore
 

 Key: HDFS-2502
 URL: https://issues.apache.org/jira/browse/HDFS-2502
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.23.0
Reporter: Eli Collins
Assignee: Harsh J
Priority: Minor
  Labels: noob
 Fix For: 0.23.1

 Attachments: HDFS-2502.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2502) hdfs-default.xml should include dfs.name.dir.restore

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153659#comment-13153659
 ] 

Hudson commented on HDFS-2502:
--

Integrated in Hadoop-Hdfs-0.23-Commit #184 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/184/])
HDFS-2502. svn merge -c 1204117 from trunk

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204118
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/block_forensics
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build-contrib.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/data_join
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/eclipse-plugin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/index
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/streaming
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/vaidya
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/examples
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/FileBench.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/TestSequenceFileMergeProgress.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/security/authorize/TestServiceLevelAuthorization.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/test/MapredTestDriver.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/webapps/job


 hdfs-default.xml should include dfs.name.dir.restore
 

 Key: HDFS-2502
 URL: https://issues.apache.org/jira/browse/HDFS-2502
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.23.0
Reporter: Eli Collins
Assignee: Harsh J
Priority: Minor
  Labels: noob
 Fix For: 0.23.1

 Attachments: HDFS-2502.patch




--
This message is automatically generated by JIRA.
If you think it was 

[jira] [Commented] (HDFS-2502) hdfs-default.xml should include dfs.name.dir.restore

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153660#comment-13153660
 ] 

Hudson commented on HDFS-2502:
--

Integrated in Hadoop-Hdfs-trunk-Commit #1362 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1362/])
HDFS-2502. hdfs-default.xml should include dfs.name.dir.restore. 
Contributed by Harsh J

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204117
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 hdfs-default.xml should include dfs.name.dir.restore
 

 Key: HDFS-2502
 URL: https://issues.apache.org/jira/browse/HDFS-2502
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.23.0
Reporter: Eli Collins
Assignee: Harsh J
Priority: Minor
  Labels: noob
 Fix For: 0.23.1

 Attachments: HDFS-2502.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2502) hdfs-default.xml should include dfs.name.dir.restore

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153661#comment-13153661
 ] 

Hudson commented on HDFS-2502:
--

Integrated in Hadoop-Common-0.23-Commit #185 (See 
[https://builds.apache.org/job/Hadoop-Common-0.23-Commit/185/])
HDFS-2502. svn merge -c 1204117 from trunk

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204118
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/block_forensics
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build-contrib.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/data_join
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/eclipse-plugin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/index
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/streaming
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/vaidya
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/examples
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/FileBench.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/TestSequenceFileMergeProgress.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/security/authorize/TestServiceLevelAuthorization.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/test/MapredTestDriver.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/webapps/job


 hdfs-default.xml should include dfs.name.dir.restore
 

 Key: HDFS-2502
 URL: https://issues.apache.org/jira/browse/HDFS-2502
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.23.0
Reporter: Eli Collins
Assignee: Harsh J
Priority: Minor
  Labels: noob
 Fix For: 0.23.1

 Attachments: HDFS-2502.patch




--
This message is automatically generated by JIRA.
If you think it 

[jira] [Commented] (HDFS-2541) For a sufficiently large value of blocks, the DN Scanner may request a random number with a negative seed value.

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153662#comment-13153662
 ] 

Hudson commented on HDFS-2541:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1313 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1313/])
HDFS-2541. For a sufficiently large value of blocks, the DN Scanner may 
request a random number with a negative seed value. Contributed by Harsh J

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204114
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java


 For a sufficiently large value of blocks, the DN Scanner may request a random 
 number with a negative seed value.
 

 Key: HDFS-2541
 URL: https://issues.apache.org/jira/browse/HDFS-2541
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.20.1
Reporter: Harsh J
Assignee: Harsh J
 Fix For: 0.20.206.0, 0.23.1

 Attachments: BSBugTest.java, HDFS-2541.patch


 Running off 0.20-security, I noticed that one could get the following 
 exception when scanners are used:
 {code}
 DataXceiver 
 java.lang.IllegalArgumentException: n must be positive 
 at java.util.Random.nextInt(Random.java:250) 
 at 
 org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNewBlockScanTime(DataBlockScanner.java:251)
  
 at 
 org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.addBlock(DataBlockScanner.java:268)
  
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:432)
  
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:122)
 {code}
 This is because the period, determined in the DataBlockScanner (0.20+) or 
 BlockPoolSliceScanner (0.23+), is cast to an integer before it is passed to a 
 Random.nextInt(...) call. For sufficiently large values of the long 'period', 
 the cast integer may be negative, which is not accounted for. I'll attach a 
 sample test that demonstrates this possibility with concrete numbers.
 We should apply Math.abs(...) before passing the value to the 
 Random.nextInt(...) call to avoid this.
 With this bug, the maximum # of blocks a scanner may hold in its blocksMap 
 without risking this exception (intermittent, as blocks continue to grow) is 
 3582718.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2502) hdfs-default.xml should include dfs.name.dir.restore

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153664#comment-13153664
 ] 

Hudson commented on HDFS-2502:
--

Integrated in Hadoop-Common-trunk-Commit #1288 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1288/])
HDFS-2502. hdfs-default.xml should include dfs.name.dir.restore. 
Contributed by Harsh J

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204117
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 hdfs-default.xml should include dfs.name.dir.restore
 

 Key: HDFS-2502
 URL: https://issues.apache.org/jira/browse/HDFS-2502
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.23.0
Reporter: Eli Collins
Assignee: Harsh J
Priority: Minor
  Labels: noob
 Fix For: 0.23.1

 Attachments: HDFS-2502.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-208) name node should warn if only one dir is listed in dfs.name.dir

2011-11-19 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-208:
-

Fix Version/s: (was: 0.24.0)
   0.23.1
   Issue Type: Improvement  (was: New Feature)

+1  looks good

 name node should warn if only one dir is listed in dfs.name.dir
 ---

 Key: HDFS-208
 URL: https://issues.apache.org/jira/browse/HDFS-208
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Allen Wittenauer
Assignee: Uma Maheswara Rao G
Priority: Minor
  Labels: newbie
 Fix For: 0.23.1

 Attachments: HDFS-208.patch


 The name node should warn that corruption may occur if only one directory is 
 listed in the dfs.name.dir setting.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2502) hdfs-default.xml should include dfs.name.dir.restore

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153665#comment-13153665
 ] 

Hudson commented on HDFS-2502:
--

Integrated in Hadoop-Mapreduce-0.23-Commit #197 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/197/])
HDFS-2502. svn merge -c 1204117 from trunk

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204118
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/block_forensics
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build-contrib.xml
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build.xml
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/data_join
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/eclipse-plugin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/index
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/streaming
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/vaidya
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/examples
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/FileBench.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/TestSequenceFileMergeProgress.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/security/authorize/TestServiceLevelAuthorization.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/test/MapredTestDriver.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/webapps/job


 hdfs-default.xml should include dfs.name.dir.restore
 

 Key: HDFS-2502
 URL: https://issues.apache.org/jira/browse/HDFS-2502
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.23.0
Reporter: Eli Collins
Assignee: Harsh J
Priority: Minor
  Labels: noob
 Fix For: 0.23.1

 Attachments: HDFS-2502.patch




--
This message is automatically generated by JIRA.
If you think 

[jira] [Updated] (HDFS-208) name node should warn if only one dir is listed in dfs.name.dir

2011-11-19 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-208:
-

Attachment: hdfs-208.patch

Minor update that's a little more explicit (warns of data loss vs nn 
corruption).

 name node should warn if only one dir is listed in dfs.name.dir
 ---

 Key: HDFS-208
 URL: https://issues.apache.org/jira/browse/HDFS-208
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Allen Wittenauer
Assignee: Uma Maheswara Rao G
Priority: Minor
  Labels: newbie
 Fix For: 0.23.1

 Attachments: HDFS-208.patch, hdfs-208.patch


 The name node should warn that corruption may occur if only one directory is 
 listed in the dfs.name.dir setting.
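
The check being requested can be sketched standalone; the method and message below are illustrative only (the committed patch lives in FSNamesystem and its exact wording may differ). dfs.name.dir is a comma-separated list of image storage directories:

```java
import java.util.Arrays;
import java.util.List;

public class NameDirWarning {
    // Illustrative audit of the dfs.name.dir value: warn when only a single
    // storage directory is configured, since losing it loses the namespace.
    static String auditNameDirs(String dfsNameDir) {
        List<String> dirs = Arrays.asList(dfsNameDir.split(","));
        if (dirs.size() == 1) {
            return "WARN: only one image storage directory (dfs.name.dir) configured. "
                 + "Beware of data loss due to lack of redundant storage directories!";
        }
        return "OK: " + dirs.size() + " storage directories configured";
    }

    public static void main(String[] args) {
        System.out.println(auditNameDirs("/data/1/nn"));
        System.out.println(auditNameDirs("/data/1/nn,/data/2/nn"));
    }
}
```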

--




[jira] [Commented] (HDFS-208) name node should warn if only one dir is listed in dfs.name.dir

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153667#comment-13153667
 ] 

Hudson commented on HDFS-208:
-

Integrated in Hadoop-Hdfs-trunk-Commit #1363 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1363/])
HDFS-208. name node should warn if only one dir is listed in dfs.name.dir. 
Contributed by Uma Maheswara Rao G

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204119
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


 name node should warn if only one dir is listed in dfs.name.dir
 ---

 Key: HDFS-208
 URL: https://issues.apache.org/jira/browse/HDFS-208
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Allen Wittenauer
Assignee: Uma Maheswara Rao G
Priority: Minor
  Labels: newbie
 Fix For: 0.23.1

 Attachments: HDFS-208.patch, hdfs-208.patch


 The name node should warn that corruption may occur if only one directory is 
 listed in the dfs.name.dir setting.

--




[jira] [Commented] (HDFS-208) name node should warn if only one dir is listed in dfs.name.dir

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153668#comment-13153668
 ] 

Hudson commented on HDFS-208:
-

Integrated in Hadoop-Common-trunk-Commit #1289 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1289/])
HDFS-208. name node should warn if only one dir is listed in dfs.name.dir. 
Contributed by Uma Maheswara Rao G

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204119
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


 name node should warn if only one dir is listed in dfs.name.dir
 ---

 Key: HDFS-208
 URL: https://issues.apache.org/jira/browse/HDFS-208
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Allen Wittenauer
Assignee: Uma Maheswara Rao G
Priority: Minor
  Labels: newbie
 Fix For: 0.23.1

 Attachments: HDFS-208.patch, hdfs-208.patch


 The name node should warn that corruption may occur if only one directory is 
 listed in the dfs.name.dir setting.

--




[jira] [Commented] (HDFS-2541) For a sufficiently large value of blocks, the DN Scanner may request a random number with a negative seed value.

2011-11-19 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153669#comment-13153669
 ] 

Harsh J commented on HDFS-2541:
---

Thanks Eli!

Would it also make sense to 'warn' when the block count surpasses a 
particularly large value? I could open a new ticket for this if you think it 
makes sense.

 For a sufficiently large value of blocks, the DN Scanner may request a random 
 number with a negative seed value.
 

 Key: HDFS-2541
 URL: https://issues.apache.org/jira/browse/HDFS-2541
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.20.1
Reporter: Harsh J
Assignee: Harsh J
 Fix For: 0.20.206.0, 0.23.1

 Attachments: BSBugTest.java, HDFS-2541.patch


 Running off 0.20-security, I noticed that one could get the following 
 exception when scanners are used:
 {code}
 DataXceiver 
 java.lang.IllegalArgumentException: n must be positive 
 at java.util.Random.nextInt(Random.java:250) 
 at 
 org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNewBlockScanTime(DataBlockScanner.java:251)
  
 at 
 org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.addBlock(DataBlockScanner.java:268)
  
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:432)
  
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:122)
 {code}
 This is because the period, determined in the DataBlockScanner (0.20+) or 
 BlockPoolSliceScanner (0.23+), is cast to an integer before it's sent to a 
 Random.nextInt(...) call. For sufficiently large values of the long 'period', 
 the cast integer may be negative. This is not accounted for. I'll attach a 
 sample test that demonstrates this possibility with concrete numbers.
 We should apply Math.abs(...) before passing the value to the 
 Random.nextInt(...) call to avoid this.
 With this bug, the maximum # of blocks a scanner may hold in its blocksMap 
 without risking this exception (intermittent, as the block count continues 
 to grow) would be 3582718.
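
The overflow described above can be reproduced standalone. The 600,000 ms multiplier below is an assumed per-block scan-period factor, chosen only to push the long product past Integer.MAX_VALUE; the real period computation in the scanner differs:

```java
import java.util.Random;

public class NegativeSeedRepro {
    public static void main(String[] args) {
        // Hypothetical numbers: one block past the reported 3582718 threshold,
        // multiplied by an assumed period factor, overflows a 32-bit int.
        long period = 3582719L * 600_000L;
        int cast = (int) period;       // negative after the narrowing cast
        System.out.println(cast < 0);  // prints true

        // The guard suggested in the report: take the absolute value before
        // handing the bound to Random.nextInt, which rejects non-positive n.
        int bounded = Math.abs(cast);
        System.out.println(new Random().nextInt(bounded) >= 0); // prints true
    }
}
```

(Note that Math.abs alone is still unsafe for the single value Integer.MIN_VALUE, which it returns unchanged; a robust fix also clamps that case.)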

--




[jira] [Commented] (HDFS-208) name node should warn if only one dir is listed in dfs.name.dir

2011-11-19 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153670#comment-13153670
 ] 

Hadoop QA commented on HDFS-208:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12504409/hdfs-208.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1586//console

This message is automatically generated.

 name node should warn if only one dir is listed in dfs.name.dir
 ---

 Key: HDFS-208
 URL: https://issues.apache.org/jira/browse/HDFS-208
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Allen Wittenauer
Assignee: Uma Maheswara Rao G
Priority: Minor
  Labels: newbie
 Fix For: 0.23.1

 Attachments: HDFS-208.patch, hdfs-208.patch


 The name node should warn that corruption may occur if only one directory is 
 listed in the dfs.name.dir setting.

--




[jira] [Updated] (HDFS-208) name node should warn if only one dir is listed in dfs.name.dir

2011-11-19 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-208:
-

   Resolution: Fixed
Fix Version/s: (was: 0.23.1)
   0.24.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed this. Thanks Uma!

 name node should warn if only one dir is listed in dfs.name.dir
 ---

 Key: HDFS-208
 URL: https://issues.apache.org/jira/browse/HDFS-208
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Allen Wittenauer
Assignee: Uma Maheswara Rao G
Priority: Minor
  Labels: newbie
 Fix For: 0.24.0

 Attachments: HDFS-208.patch, hdfs-208.patch


 The name node should warn that corruption may occur if only one directory is 
 listed in the dfs.name.dir setting.

--




[jira] [Commented] (HDFS-2430) The number of failed or low-resource volumes the NN can tolerate should be configurable

2011-11-19 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153672#comment-13153672
 ] 

Eli Collins commented on HDFS-2430:
---

Per HDFS-208, it might be worth considering other warnings to log, e.g. if 
there's only a single redundant resource left. We could also standardize the 
location/format of the log check and output to make life easier for monitoring 
tools. Ditto for metrics (feel free to address in another jira).

 The number of failed or low-resource volumes the NN can tolerate should be 
 configurable
 ---

 Key: HDFS-2430
 URL: https://issues.apache.org/jira/browse/HDFS-2430
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: name-node
Affects Versions: 0.24.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-2430.patch, HDFS-2430.patch, HDFS-2430.patch


 Currently the number of failed or low-resource volumes the NN can tolerate is 
 effectively hard-coded at 1. It would be nice if this were configurable.

--




[jira] [Commented] (HDFS-2567) When 0 DNs are available, show a proper error when trying to browse DFS via web UI

2011-11-19 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153673#comment-13153673
 ] 

Eli Collins commented on HDFS-2567:
---

Why not explicitly check whether any DNs are available rather than catch the 
IllegalArgumentException?
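
The explicit-check alternative can be sketched against a plain list; NetworkTopology's real API differs, so the method and names here are illustrative only:

```java
import java.util.List;
import java.util.Random;

public class ChooseRandomGuard {
    // Returns null when no datanodes are registered instead of letting
    // Random.nextInt(0) throw "n must be positive"; the JSP can then render
    // a friendly "no datanodes available" page.
    static String chooseRandomDatanode(List<String> liveNodes, Random rng) {
        if (liveNodes.isEmpty()) {
            return null;
        }
        return liveNodes.get(rng.nextInt(liveNodes.size()));
    }

    public static void main(String[] args) {
        Random rng = new Random();
        System.out.println(chooseRandomDatanode(List.of(), rng));      // prints null
        System.out.println(chooseRandomDatanode(List.of("dn1"), rng)); // prints dn1
    }
}
```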

 When 0 DNs are available, show a proper error when trying to browse DFS via 
 web UI
 --

 Key: HDFS-2567
 URL: https://issues.apache.org/jira/browse/HDFS-2567
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0
Reporter: Harsh J
 Fix For: 0.24.0

 Attachments: HDFS-2567.patch


 Trace:
 {code}
 HTTP ERROR 500
 Problem accessing /nn_browsedfscontent.jsp. Reason:
 n must be positive
 Caused by:
 java.lang.IllegalArgumentException: n must be positive
   at java.util.Random.nextInt(Random.java:250)
   at 
 org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:556)
   at 
 org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:524)
   at 
 org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.getRandomDatanode(NamenodeJspHelper.java:372)
   at 
 org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.redirectToRandomDataNode(NamenodeJspHelper.java:383)
   at 
 org.apache.hadoop.hdfs.server.namenode.nn_005fbrowsedfscontent_jsp._jspService(nn_005fbrowsedfscontent_jsp.java:70)
   at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
   at 
 org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
   at 
 org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
   at 
 org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:940)
   at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
   at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
   at 
 org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
   at 
 org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
   at 
 org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
   at org.mortbay.jetty.Server.handle(Server.java:326)
   at 
 org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
   at 
 org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
   at 
 org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
   at 
 org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
 {code}
 Steps I did to run into this:
 1. Start a new NN, freshly formatted.
 2. No DNs yet.
 3. Visit the DFS browser link 
 {{http://localhost:50070/nn_browsedfscontent.jsp}}
 4. Above error shows itself
 5. {{hdfs dfs -touchz afile}}
 6. Re-visit, still shows the same issue.
 Perhaps it's because no DN has been added so far.

--




[jira] [Updated] (HDFS-2536) FSImageTransactionalStorageInspector has a bunch of unused imports

2011-11-19 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-2536:
--

Target Version/s: 0.23.1
   Fix Version/s: (was: 0.24.0)

+1. Mind generating a patch for 23 as well? That way merges won't be painful.

 FSImageTransactionalStorageInspector has a bunch of unused imports
 --

 Key: HDFS-2536
 URL: https://issues.apache.org/jira/browse/HDFS-2536
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.24.0
Reporter: Aaron T. Myers
Assignee: Harsh J
Priority: Trivial
  Labels: newbie
 Attachments: HDFS-2536.FSImageTransactionalStorageInspector.patch, 
 HDFS-2536.patch


 Looks like it has 11 unused imports by my count.

--




[jira] [Commented] (HDFS-2536) Remove unused imports

2011-11-19 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153675#comment-13153675
 ] 

Eli Collins commented on HDFS-2536:
---

I've committed this to trunk, leaving open for the patch for 23.

 Remove unused imports
 -

 Key: HDFS-2536
 URL: https://issues.apache.org/jira/browse/HDFS-2536
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.24.0
Reporter: Aaron T. Myers
Assignee: Harsh J
Priority: Trivial
  Labels: newbie
 Attachments: HDFS-2536.FSImageTransactionalStorageInspector.patch, 
 HDFS-2536.patch


 Looks like it has 11 unused imports by my count.

--




[jira] [Updated] (HDFS-2536) Remove unused imports

2011-11-19 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-2536:
--

Hadoop Flags: Reviewed
 Summary: Remove unused imports  (was: 
FSImageTransactionalStorageInspector has a bunch of unused imports)

 Remove unused imports
 -

 Key: HDFS-2536
 URL: https://issues.apache.org/jira/browse/HDFS-2536
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.24.0
Reporter: Aaron T. Myers
Assignee: Harsh J
Priority: Trivial
  Labels: newbie
 Attachments: HDFS-2536.FSImageTransactionalStorageInspector.patch, 
 HDFS-2536.patch


 Looks like it has 11 unused imports by my count.

--




[jira] [Commented] (HDFS-2536) Remove unused imports

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153676#comment-13153676
 ] 

Hudson commented on HDFS-2536:
--

Integrated in Hadoop-Hdfs-trunk-Commit #1364 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1364/])
HDFS-2536. Remove unused imports. Contributed by Harsh J

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204120
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/InvalidateBlocks.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockMetadataHeader.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupImage.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupJournalManager.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Checkpointer.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DfsServlet.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageStorageInspector.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageTransactionalStorageInspector.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileChecksumServlets.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileDataServlet.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RenewDelegationTokenServlet.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/RemoteEditLog.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocolR23Compatible/BlockCommandWritable.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/EditsLoader.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/EditsVisitor.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/EditsVisitorFactory.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/TokenizerFactory.java
* 

[jira] [Commented] (HDFS-2536) Remove unused imports

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153677#comment-13153677
 ] 

Hudson commented on HDFS-2536:
--

Integrated in Hadoop-Common-trunk-Commit #1290 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1290/])
HDFS-2536. Remove unused imports. Contributed by Harsh J

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204120
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/InvalidateBlocks.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockMetadataHeader.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupImage.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupJournalManager.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Checkpointer.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DfsServlet.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageStorageInspector.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageTransactionalStorageInspector.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileChecksumServlets.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileDataServlet.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RenewDelegationTokenServlet.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/RemoteEditLog.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocolR23Compatible/BlockCommandWritable.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/EditsLoader.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/EditsVisitor.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/EditsVisitorFactory.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/TokenizerFactory.java
* 

[jira] [Created] (HDFS-2569) DN decommissioning quirks

2011-11-19 Thread Harsh J (Created) (JIRA)
DN decommissioning quirks
-

 Key: HDFS-2569
 URL: https://issues.apache.org/jira/browse/HDFS-2569
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.23.0
Reporter: Harsh J
Assignee: Harsh J


Decommissioning a node works slightly oddly in 0.23+:

The steps I did:

- Start HDFS via {{hdfs namenode}} and {{hdfs datanode}}. 1-node cluster.
- Zero files/blocks, so I go ahead and exclude-add my DN and do {{hdfs dfsadmin 
-refreshNodes}}
- I see the following log in NN tails, which is fine:
{code}
11/11/20 09:28:10 INFO util.HostsFileReader: Setting the includes file to 
11/11/20 09:28:10 INFO util.HostsFileReader: Setting the excludes file to 
build/test/excludes
11/11/20 09:28:10 INFO util.HostsFileReader: Refreshing hosts (include/exclude) 
list
11/11/20 09:28:10 INFO util.HostsFileReader: Adding 192.168.1.23 to the list of 
hosts from build/test/excludes
{code}
- However, DN log tail gets no new messages. DN still runs.
- The dfshealth.jsp page shows this table, which makes no sense -- why is there 
1 live and 1 dead?:

|Live Nodes|1 (Decommissioned: 1)|
|Dead Nodes|1 (Decommissioned: 0)|
|Decommissioning Nodes|0|

- The live nodes page shows this, meaning DN is still up and heartbeating but 
is decommissioned:

|Node|Last Contact|Admin State|
|192.168.1.23|0|Decommissioned|

- The dead nodes page shows this, and the link to the DN is broken because the 
port is linked as -1. Also, showing 'false' for decommissioned makes no sense 
when the live node page shows that it is already decommissioned:

|Node|Decommissioned|
|192.168.1.23|false|

Investigating if this is a quirk only observed when the DN had 0 blocks on it 
in sum total.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2502) hdfs-default.xml should include dfs.name.dir.restore

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153680#comment-13153680
 ] 

Hudson commented on HDFS-2502:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1314 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1314/])
HDFS-2502. hdfs-default.xml should include dfs.name.dir.restore. 
Contributed by Harsh J

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204117
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 hdfs-default.xml should include dfs.name.dir.restore
 

 Key: HDFS-2502
 URL: https://issues.apache.org/jira/browse/HDFS-2502
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.23.0
Reporter: Eli Collins
Assignee: Harsh J
Priority: Minor
  Labels: noob
 Fix For: 0.23.1

 Attachments: HDFS-2502.patch








[jira] [Updated] (HDFS-2568) Use a set to manage child sockets in XceiverServer

2011-11-19 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-2568:
--

Fix Version/s: (was: 0.24.0)
   0.23.1
 Hadoop Flags: Reviewed

+1, lgtm

 Use a set to manage child sockets in XceiverServer
 --

 Key: HDFS-2568
 URL: https://issues.apache.org/jira/browse/HDFS-2568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.24.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 0.23.1

 Attachments: HDFS-2568.patch, HDFS-2568.patch


 Found while reading up for HDFS-2454, currently we maintain childSockets in a 
 DataXceiverServer as a Map<Socket,Socket>. This can very well be a 
 Set<Socket> data structure -- since the goal is easy removals.
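The proposed swap can be sketched as a stand-alone class (illustrative only, not the actual patch; the field name, the tracker class, and the `synchronizedSet` wrapper are assumptions, since the real bookkeeping lives inside DataXceiverServer):

```java
import java.net.Socket;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Hypothetical model of the childSockets bookkeeping described above.
public class ChildSocketTracker {
    // Before: a Map<Socket, Socket> used as put(s, s) / remove(s), where
    // the value carried no information. A Set expresses the real intent
    // (membership plus easy removal) directly.
    private final Set<Socket> childSockets =
        Collections.synchronizedSet(new HashSet<Socket>());

    public void register(Socket s) {
        childSockets.add(s);
    }

    public void unregister(Socket s) {
        childSockets.remove(s);
    }

    public int count() {
        return childSockets.size();
    }
}
```

Since only membership matters, dropping the redundant map values changes nothing observable while making the data structure self-documenting.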





[jira] [Updated] (HDFS-2568) Use a set to manage child sockets in XceiverServer

2011-11-19 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-2568:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

I've committed this. Thanks Harsh!

 Use a set to manage child sockets in XceiverServer
 --

 Key: HDFS-2568
 URL: https://issues.apache.org/jira/browse/HDFS-2568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.24.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 0.23.1

 Attachments: HDFS-2568.patch, HDFS-2568.patch


 Found while reading up for HDFS-2454, currently we maintain childSockets in a 
 DataXceiverServer as a Map<Socket,Socket>. This can very well be a 
 Set<Socket> data structure -- since the goal is easy removals.





[jira] [Commented] (HDFS-2568) Use a set to manage child sockets in XceiverServer

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153683#comment-13153683
 ] 

Hudson commented on HDFS-2568:
--

Integrated in Hadoop-Hdfs-trunk-Commit #1365 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1365/])
HDFS-2568. Use a set to manage child sockets in XceiverServer. Contributed 
by Harsh J

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204122
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java


 Use a set to manage child sockets in XceiverServer
 --

 Key: HDFS-2568
 URL: https://issues.apache.org/jira/browse/HDFS-2568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.24.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 0.23.1

 Attachments: HDFS-2568.patch, HDFS-2568.patch


 Found while reading up for HDFS-2454, currently we maintain childSockets in a 
 DataXceiverServer as a Map<Socket,Socket>. This can very well be a 
 Set<Socket> data structure -- since the goal is easy removals.





[jira] [Commented] (HDFS-2568) Use a set to manage child sockets in XceiverServer

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153684#comment-13153684
 ] 

Hudson commented on HDFS-2568:
--

Integrated in Hadoop-Common-0.23-Commit #186 (See 
[https://builds.apache.org/job/Hadoop-Common-0.23-Commit/186/])
HDFS-2568. svn merge -c 1204122 from trunk

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204123
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/block_forensics
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build-contrib.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/data_join
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/eclipse-plugin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/index
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/streaming
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/vaidya
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/examples
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/FileBench.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/TestSequenceFileMergeProgress.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/security/authorize/TestServiceLevelAuthorization.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/test/MapredTestDriver.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/webapps/job


 Use a set to manage child sockets in XceiverServer
 --

 Key: HDFS-2568
 URL: https://issues.apache.org/jira/browse/HDFS-2568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.24.0
Reporter: Harsh J
Assignee: Harsh J
Priority: 

[jira] [Updated] (HDFS-2454) Move maxXceiverCount check to before starting the thread in dataXceiver

2011-11-19 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-2454:
--

  Description: We can hoist the maxXceiverCount check out of 
DataXceiverServer#run, there's no need to check each time we accept a 
connection, we can check when we create a thread.  (was: // Make sure 
the xceiver count is not exceeded
int curXceiverCount = datanode.getXceiverCount();
if (curXceiverCount > dataXceiverServer.maxXceiverCount) {
  throw new IOException("xceiverCount " + curXceiverCount
    + " exceeds the limit of concurrent xcievers "
    + dataXceiverServer.maxXceiverCount);
})
Affects Version/s: 0.23.0
Fix Version/s: 0.23.1
   Issue Type: Improvement  (was: Bug)
 Hadoop Flags: Reviewed

+1 looks good. Given that we're checking the # of threads, I agree we don't need 
to re-check that on each accept (this could cause multiple xceivers to exit).

 Move maxXceiverCount check to before starting the thread in dataXceiver
 ---

 Key: HDFS-2454
 URL: https://issues.apache.org/jira/browse/HDFS-2454
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.23.0
Reporter: Uma Maheswara Rao G
Assignee: Harsh J
Priority: Minor
 Fix For: 0.23.1

 Attachments: HDFS-2454.patch


 We can hoist the maxXceiverCount check out of DataXceiverServer#run, there's no 
 need to check each time we accept a connection, we can check when we create 
 a thread.
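The hoisted check can be sketched roughly as follows (the class, the `startXceiver` method, and the counter are invented for illustration; the actual DataXceiver/DataXceiverServer wiring differs):

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: the limit is enforced once, at thread creation,
// rather than re-checked inside the accept loop.
public class XceiverLimiter {
    private final int maxXceiverCount;
    private final AtomicInteger xceiverCount = new AtomicInteger();

    public XceiverLimiter(int maxXceiverCount) {
        this.maxXceiverCount = maxXceiverCount;
    }

    public Thread startXceiver(Runnable work) throws IOException {
        int cur = xceiverCount.incrementAndGet();
        if (cur > maxXceiverCount) {
            xceiverCount.decrementAndGet(); // roll back the failed reservation
            throw new IOException("xceiverCount " + cur
                + " exceeds the limit of concurrent xceivers "
                + maxXceiverCount);
        }
        Thread t = new Thread(() -> {
            try {
                work.run();
            } finally {
                xceiverCount.decrementAndGet(); // release the slot on exit
            }
        });
        t.start();
        return t;
    }
}
```

Enforcing the limit at the point a worker thread is created gives one authoritative check per xceiver, instead of a repeated check on every accepted connection.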





[jira] [Commented] (HDFS-2568) Use a set to manage child sockets in XceiverServer

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153686#comment-13153686
 ] 

Hudson commented on HDFS-2568:
--

Integrated in Hadoop-Hdfs-0.23-Commit #185 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/185/])
HDFS-2568. svn merge -c 1204122 from trunk

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204123
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/block_forensics
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build-contrib.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/data_join
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/eclipse-plugin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/index
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/streaming
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/vaidya
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/examples
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/FileBench.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/TestSequenceFileMergeProgress.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/security/authorize/TestServiceLevelAuthorization.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/test/MapredTestDriver.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/webapps/job


 Use a set to manage child sockets in XceiverServer
 --

 Key: HDFS-2568
 URL: https://issues.apache.org/jira/browse/HDFS-2568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.24.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
   

[jira] [Commented] (HDFS-2568) Use a set to manage child sockets in XceiverServer

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153687#comment-13153687
 ] 

Hudson commented on HDFS-2568:
--

Integrated in Hadoop-Mapreduce-0.23-Commit #198 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/198/])
HDFS-2568. svn merge -c 1204122 from trunk

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204123
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/block_forensics
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build-contrib.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/data_join
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/eclipse-plugin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/index
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/streaming
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/vaidya
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/examples
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/FileBench.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/TestSequenceFileMergeProgress.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/security/authorize/TestServiceLevelAuthorization.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/test/MapredTestDriver.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/webapps/job


 Use a set to manage child sockets in XceiverServer
 --

 Key: HDFS-2568
 URL: https://issues.apache.org/jira/browse/HDFS-2568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.24.0
Reporter: Harsh J
Assignee: Harsh J
Priority: 

[jira] [Commented] (HDFS-2568) Use a set to manage child sockets in XceiverServer

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153688#comment-13153688
 ] 

Hudson commented on HDFS-2568:
--

Integrated in Hadoop-Common-trunk-Commit #1291 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1291/])
HDFS-2568. Use a set to manage child sockets in XceiverServer. Contributed 
by Harsh J

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204122
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java


 Use a set to manage child sockets in XceiverServer
 --

 Key: HDFS-2568
 URL: https://issues.apache.org/jira/browse/HDFS-2568
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.24.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 0.23.1

 Attachments: HDFS-2568.patch, HDFS-2568.patch


 Found while reading up for HDFS-2454, currently we maintain childSockets in a 
 DataXceiverServer as a Map<Socket,Socket>. This can very well be a 
 Set<Socket> data structure -- since the goal is easy removals.





[jira] [Updated] (HDFS-2454) Move maxXceiverCount check to before starting the thread in dataXceiver

2011-11-19 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-2454:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

I've committed this. Thanks Harsh!

 Move maxXceiverCount check to before starting the thread in dataXceiver
 ---

 Key: HDFS-2454
 URL: https://issues.apache.org/jira/browse/HDFS-2454
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.23.0
Reporter: Uma Maheswara Rao G
Assignee: Harsh J
Priority: Minor
 Fix For: 0.23.1

 Attachments: HDFS-2454.patch


 We can hoist the maxXceiverCount check out of DataXceiverServer#run, there's no 
 need to check each time we accept a connection, we can check when we create 
 a thread.





[jira] [Commented] (HDFS-2454) Move maxXceiverCount check to before starting the thread in dataXceiver

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153691#comment-13153691
 ] 

Hudson commented on HDFS-2454:
--

Integrated in Hadoop-Hdfs-trunk-Commit #1366 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1366/])
HDFS-2454. Move maxXceiverCount check to before starting the thread in 
dataXceiver. Contributed by Harsh J

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204124
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java


 Move maxXceiverCount check to before starting the thread in dataXceiver
 ---

 Key: HDFS-2454
 URL: https://issues.apache.org/jira/browse/HDFS-2454
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.23.0
Reporter: Uma Maheswara Rao G
Assignee: Harsh J
Priority: Minor
 Fix For: 0.23.1

 Attachments: HDFS-2454.patch


 We can hoist the maxXceiverCount check out of DataXceiverServer#run, there's no 
 need to check each time we accept a connection, we can check when we create 
 a thread.





[jira] [Commented] (HDFS-2541) For a sufficiently large value of blocks, the DN Scanner may request a random number with a negative seed value.

2011-11-19 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153692#comment-13153692
 ] 

Eli Collins commented on HDFS-2541:
---

Good idea, please open a new ticket.

 For a sufficiently large value of blocks, the DN Scanner may request a random 
 number with a negative seed value.
 

 Key: HDFS-2541
 URL: https://issues.apache.org/jira/browse/HDFS-2541
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.20.1
Reporter: Harsh J
Assignee: Harsh J
 Fix For: 0.20.206.0, 0.23.1

 Attachments: BSBugTest.java, HDFS-2541.patch


 Running off 0.20-security, I noticed that one could get the following 
 exception when scanners are used:
 {code}
 DataXceiver 
 java.lang.IllegalArgumentException: n must be positive 
 at java.util.Random.nextInt(Random.java:250) 
 at 
 org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNewBlockScanTime(DataBlockScanner.java:251)
  
 at 
 org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.addBlock(DataBlockScanner.java:268)
  
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:432)
  
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:122)
 {code}
 This is because the period, determined in the DataBlockScanner (0.20+) or 
 BlockPoolSliceScanner (0.23+), is cast to an integer before it is sent to a 
 Random.nextInt(...) call. For sufficiently large values of the long 'period', 
 the cast integer may be negative. This is not accounted for. I'll attach a 
 sample test that shows this possibility with the numbers.
 We should ensure we do a Math.abs(...) before we send it to the 
 Random.nextInt(...) call to avoid this.
 With this bug, the maximum # of blocks a scanner may hold in its blocksMap 
 without risking this exception (intermittent, as blocks continue to grow) 
 would be 3582718.
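The overflow can be made concrete with a toy demonstration (the numbers are chosen for illustration; this is not the DataBlockScanner code itself):

```java
import java.util.Random;

public class NegativeSeedDemo {
    public static void main(String[] args) {
        // A long 'period' large enough that the narrowing cast wraps around.
        long period = 5_000_000L * 600L;   // 3,000,000,000 > Integer.MAX_VALUE

        int naive = (int) period;          // wraps to -1,294,967,296
        System.out.println("naive cast: " + naive);
        // new Random().nextInt(naive) would throw
        // IllegalArgumentException: "n must be positive".

        int safe = Math.abs((int) period); // 1,294,967,296 -- a valid bound
        int scanTime = new Random().nextInt(safe);
        System.out.println("scan offset: " + scanTime);
        // Edge case worth noting: Math.abs(Integer.MIN_VALUE) is itself still
        // negative, so a fully robust fix should also guard that exact value.
    }
}
```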





[jira] [Commented] (HDFS-2454) Move maxXceiverCount check to before starting the thread in dataXceiver

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153693#comment-13153693
 ] 

Hudson commented on HDFS-2454:
--

Integrated in Hadoop-Common-trunk-Commit #1292 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1292/])
HDFS-2454. Move maxXceiverCount check to before starting the thread in 
dataXceiver. Contributed by Harsh J

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204124
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java


 Move maxXceiverCount check to before starting the thread in dataXceiver
 ---

 Key: HDFS-2454
 URL: https://issues.apache.org/jira/browse/HDFS-2454
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.23.0
Reporter: Uma Maheswara Rao G
Assignee: Harsh J
Priority: Minor
 Fix For: 0.23.1

 Attachments: HDFS-2454.patch


 We can hoist the maxXceiverCount check out of DataXceiverServer#run, there's no 
 need to check each time we accept a connection, we can check when we create 
 a thread.





[jira] [Commented] (HDFS-2454) Move maxXceiverCount check to before starting the thread in dataXceiver

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153694#comment-13153694
 ] 

Hudson commented on HDFS-2454:
--

Integrated in Hadoop-Hdfs-0.23-Commit #186 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/186/])
HDFS-2454. svn merge -c 1204124 from trunk

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204125
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/block_forensics
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build-contrib.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/data_join
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/eclipse-plugin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/index
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/streaming
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/vaidya
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/examples
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/FileBench.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/TestSequenceFileMergeProgress.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/security/authorize/TestServiceLevelAuthorization.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/test/MapredTestDriver.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/webapps/job


 Move maxXceiverCount check to before starting the thread in dataXceiver
 ---

 Key: HDFS-2454
 URL: https://issues.apache.org/jira/browse/HDFS-2454
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.23.0
Reporter: Uma Maheswara Rao G
 

[jira] [Created] (HDFS-2570) Add descriptions for dfs.*.https.address in hdfs-default.xml

2011-11-19 Thread Eli Collins (Created) (JIRA)
Add descriptions for dfs.*.https.address in hdfs-default.xml


 Key: HDFS-2570
 URL: https://issues.apache.org/jira/browse/HDFS-2570
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.23.0
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Trivial
 Attachments: hdfs-2570-1.patch

Let's add descriptions for dfs.*.https.address in hdfs-default.xml.
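As an illustration, an entry of the proposed kind might look like the following.
The wording and default value are a sketch from memory of 0.23-era defaults and
may not match the committed patch:

```xml
<!-- Illustrative hdfs-default.xml entry; exact wording is hypothetical. -->
<property>
  <name>dfs.https.address</name>
  <value>0.0.0.0:50470</value>
  <description>The address and port on which the NameNode HTTPS server
  listens when dfs.https.enable is true.</description>
</property>
```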





[jira] [Updated] (HDFS-2570) Add descriptions for dfs.*.https.address in hdfs-default.xml

2011-11-19 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-2570:
--

Attachment: hdfs-2570-1.patch

Patch attached.

 Add descriptions for dfs.*.https.address in hdfs-default.xml
 

 Key: HDFS-2570
 URL: https://issues.apache.org/jira/browse/HDFS-2570
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.23.0
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Trivial
 Attachments: hdfs-2570-1.patch


 Let's add descriptions for dfs.*.https.address in hdfs-default.xml.





[jira] [Commented] (HDFS-2454) Move maxXceiverCount check to before starting the thread in dataXceiver

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153697#comment-13153697
 ] 

Hudson commented on HDFS-2454:
--

Integrated in Hadoop-Common-0.23-Commit #187 (See 
[https://builds.apache.org/job/Hadoop-Common-0.23-Commit/187/])
HDFS-2454. svn merge -c 1204124 from trunk

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204125
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/block_forensics
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build-contrib.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/data_join
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/eclipse-plugin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/index
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/streaming
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/vaidya
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/examples
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/FileBench.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/TestSequenceFileMergeProgress.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/security/authorize/TestServiceLevelAuthorization.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/test/MapredTestDriver.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/webapps/job


 Move maxXceiverCount check to before starting the thread in dataXceiver
 ---

 Key: HDFS-2454
 URL: https://issues.apache.org/jira/browse/HDFS-2454
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.23.0
Reporter: Uma Maheswara Rao G
 

[jira] [Commented] (HDFS-2454) Move maxXceiverCount check to before starting the thread in dataXceiver

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153695#comment-13153695
 ] 

Hudson commented on HDFS-2454:
--

Integrated in Hadoop-Mapreduce-0.23-Commit #199 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/199/])
HDFS-2454. svn merge -c 1204124 from trunk

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204125
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/.gitignore
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/block_forensics
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build-contrib.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/data_join
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/eclipse-plugin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/index
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/streaming
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/vaidya
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/examples
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/FileBench.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/TestSequenceFileMergeProgress.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/security/authorize/TestServiceLevelAuthorization.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/test/MapredTestDriver.java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/webapps/job


 Move maxXceiverCount check to before starting the thread in dataXceiver
 ---

 Key: HDFS-2454
 URL: https://issues.apache.org/jira/browse/HDFS-2454
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Affects Versions: 0.23.0
Reporter: Uma Maheswara Rao G

[jira] [Commented] (HDFS-208) name node should warn if only one dir is listed in dfs.name.dir

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153699#comment-13153699
 ] 

Hudson commented on HDFS-208:
-

Integrated in Hadoop-Mapreduce-trunk-Commit #1315 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1315/])
HDFS-208. name node should warn if only one dir is listed in dfs.name.dir. 
Contributed by Uma Maheswara Rao G

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204119
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


 name node should warn if only one dir is listed in dfs.name.dir
 ---

 Key: HDFS-208
 URL: https://issues.apache.org/jira/browse/HDFS-208
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Allen Wittenauer
Assignee: Uma Maheswara Rao G
Priority: Minor
  Labels: newbie
 Fix For: 0.24.0

 Attachments: HDFS-208.patch, hdfs-208.patch


 The name node should warn that corruption may occur if only one directory is 
 listed in the dfs.name.dir setting.
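 To illustrate the redundant configuration the warning encourages, an
 hdfs-site.xml entry with two metadata directories might look like this
 (the paths are hypothetical):

```xml
<!-- Illustrative hdfs-site.xml: two metadata directories, ideally on
     separate disks (one may be an NFS mount), so losing one copy does
     not lose the namespace. Paths are examples only. -->
<property>
  <name>dfs.name.dir</name>
  <value>/data/1/dfs/nn,/data/2/dfs/nn</value>
</property>
```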





[jira] [Commented] (HDFS-2536) Remove unused imports

2011-11-19 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13153700#comment-13153700
 ] 

Hudson commented on HDFS-2536:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1315 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1315/])
HDFS-2536. Remove unused imports. Contributed by Harsh J

eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1204120
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/InvalidateBlocks.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockMetadataHeader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupJournalManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Checkpointer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/DfsServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageStorageInspector.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageTransactionalStorageInspector.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileChecksumServlets.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileDataServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RenewDelegationTokenServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/RemoteEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocolR23Compatible/BlockCommandWritable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/EditsLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/EditsVisitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/EditsVisitorFactory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/TokenizerFactory.java
* 

[jira] [Commented] (HDFS-2569) DN decommissioning quirks

2011-11-19 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13153701#comment-13153701
 ] 

Harsh J commented on HDFS-2569:
---

Restarting the DN makes it rejoin the cluster. No disallow exception is 
presented.

 DN decommissioning quirks
 -

 Key: HDFS-2569
 URL: https://issues.apache.org/jira/browse/HDFS-2569
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.23.0
Reporter: Harsh J
Assignee: Harsh J

 Decommissioning a node behaves slightly oddly in 0.23+:
 The steps I did:
 - Start HDFS via {{hdfs namenode}} and {{hdfs datanode}}. 1-node cluster.
 - Zero files/blocks, so I go ahead and exclude-add my DN and do {{hdfs 
 dfsadmin -refreshNodes}}
 - I see the following log in NN tails, which is fine:
 {code}
 11/11/20 09:28:10 INFO util.HostsFileReader: Setting the includes file to 
 11/11/20 09:28:10 INFO util.HostsFileReader: Setting the excludes file to 
 build/test/excludes
 11/11/20 09:28:10 INFO util.HostsFileReader: Refreshing hosts 
 (include/exclude) list
 11/11/20 09:28:10 INFO util.HostsFileReader: Adding 192.168.1.23 to the list 
 of hosts from build/test/excludes
 {code}
 - However, DN log tail gets no new messages. DN still runs.
 - The dfshealth.jsp page shows this table, which makes no sense -- why is 
 there 1 live and 1 dead?:
 |Live Nodes|1 (Decommissioned: 1)|
 |Dead Nodes|1 (Decommissioned: 0)|
 |Decommissioning Nodes|0|
 - The live nodes page shows this, meaning DN is still up and heartbeating but 
 is decommissioned:
 |Node|Last Contact|Admin State|
 |192.168.1.23|0|Decommissioned|
 The dead nodes page shows this, and the link to the DN is broken because the 
 port is linked as -1. Also, showing 'false' for decommissioned makes no sense 
 when the live nodes page shows that it is already decommissioned:
 |Node|Decommissioned|
 |192.168.1.23|false|
 Investigating if this is a quirk only observed when the DN had 0 blocks on it 
 in sum total.
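 The steps above can be re-created roughly as follows. The excludes path and
 the DataNode address are taken from the report, but no running cluster is
 assumed here, so cluster-only commands are left as comments:

```shell
# Hypothetical reproduction sketch for the steps in this report.
mkdir -p build/test

# Exclude the single DataNode by adding its address to the excludes
# file the NameNode was configured with.
echo "192.168.1.23" > build/test/excludes

# With a NameNode and DataNode running (hdfs namenode / hdfs datanode),
# the next step would be:
#   hdfs dfsadmin -refreshNodes
# after which the NN log reports adding 192.168.1.23 to the excludes
# list while the DN log stays silent.

grep -c "192.168.1.23" build/test/excludes
```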




