[jira] [Commented] (HDFS-4525) Provide an API for knowing whether a file is closed or not.

2013-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625278#comment-13625278
 ] 

Hudson commented on HDFS-4525:
--

Integrated in Hadoop-Yarn-trunk #177 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/177/])
HDFS-4525. Provide an API for knowing whether a file is closed or not. 
Contributed by SreeHari. (Revision 1465434)

 Result = SUCCESS
umamahesh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1465434
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java


 Provide an API for knowing whether a file is closed or not.
 --

 Key: HDFS-4525
 URL: https://issues.apache.org/jira/browse/HDFS-4525
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Uma Maheswara Rao G
Assignee: SreeHari
 Fix For: 3.0.0, 2.0.5-beta

 Attachments: HDFS-4525.patch, HDFS-4525.patch, HDFS-4525.patch, 
 HDFS-4525.patch, HDFS-4525.patch


 Currently the recoverLease API returns true if the file is already closed. 
 Otherwise it triggers internal lease recovery and returns false. It may take 
 some time for this recovery to really complete and for the file to be fully 
 closed, so there is no way for users to wait correctly until the file is 
 completely closed. 
 It would be good to have an API which says whether a file is closed or not, 
 so that users can rely on it and proceed if and only if the file is 
 completely closed.
 See the discussion in HBASE-7878
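As an illustrative sketch (not from the patch text), a client could combine recoverLease with the new isFileClosed call roughly like this, assuming the DistributedFileSystem#isFileClosed(Path) method this patch adds; the class name, timeout, and poll interval are arbitrary:
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class WaitForClose {
  // Hedged sketch: trigger lease recovery, then poll the isFileClosed()
  // API added by HDFS-4525 until the file is fully closed or we time out.
  public static void waitForClosed(Configuration conf, Path file)
      throws IOException, InterruptedException {
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
    boolean closed = dfs.recoverLease(file);             // true if already closed
    long deadline = System.currentTimeMillis() + 60000L; // arbitrary timeout
    while (!closed && System.currentTimeMillis() < deadline) {
      Thread.sleep(1000L);                               // arbitrary poll interval
      closed = dfs.isFileClosed(file);
    }
    if (!closed) {
      throw new IOException("Lease recovery did not complete for " + file);
    }
  }
}
{code}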

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4525) Provide an API for knowing whether a file is closed or not.

2013-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625344#comment-13625344
 ] 

Hudson commented on HDFS-4525:
--

Integrated in Hadoop-Hdfs-trunk #1366 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1366/])
HDFS-4525. Provide an API for knowing whether a file is closed or not. 
Contributed by SreeHari. (Revision 1465434)

 Result = FAILURE
umamahesh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1465434
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java


 Provide an API for knowing whether a file is closed or not.
 --

 Key: HDFS-4525
 URL: https://issues.apache.org/jira/browse/HDFS-4525
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Uma Maheswara Rao G
Assignee: SreeHari
 Fix For: 3.0.0, 2.0.5-beta

 Attachments: HDFS-4525.patch, HDFS-4525.patch, HDFS-4525.patch, 
 HDFS-4525.patch, HDFS-4525.patch


 Currently the recoverLease API returns true if the file is already closed. 
 Otherwise it triggers internal lease recovery and returns false. It may take 
 some time for this recovery to really complete and for the file to be fully 
 closed, so there is no way for users to wait correctly until the file is 
 completely closed. 
 It would be good to have an API which says whether a file is closed or not, 
 so that users can rely on it and proceed if and only if the file is 
 completely closed.
 See the discussion in HBASE-7878

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4669) org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager fails using IBM java

2013-04-08 Thread Tian Hong Wang (JIRA)
Tian Hong Wang created HDFS-4669:


 Summary: 
org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager fails using IBM java
 Key: HDFS-4669
 URL: https://issues.apache.org/jira/browse/HDFS-4669
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.3-alpha
Reporter: Tian Hong Wang
 Fix For: 2.0.3-alpha


The TestBlockPoolManager unit test fails with the following error message using 
IBM Java:
testFederationRefresh(org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager)
  Time elapsed: 27 sec   FAILURE!
org.junit.ComparisonFailure: expected:<stop #[1
refresh #2]> but was:<stop #[2
refresh #1]>


The root cause is:
(1) If we want to remove the first NS and keep the second NS, the test should 
call conf.set(DFSConfigKeys.DFS_NAMESERVICES, "ns2"), not 
conf.set(DFSConfigKeys.DFS_NAMESERVICES, "ns1").

(2) HashMap & HashSet store their entries in an unspecified order, so IBM 
Java & Oracle Java can iterate keys in different orders, which makes the 
ns1/ns2 ordering random. The code should use LinkedHashMap & LinkedHashSet 
to preserve insertion order.
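A minimal, self-contained illustration of the ordering point (generic Java, not the actual BlockPoolManager code):
{code}
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderDemo {
  public static void main(String[] args) {
    // HashMap iteration order is unspecified and can differ between
    // IBM and Oracle JVMs, so asserting "ns1 before ns2" is fragile.
    Map<String, String> hash = new HashMap<String, String>();
    hash.put("ns1", "a");
    hash.put("ns2", "b");
    System.out.println(hash.keySet());    // order not guaranteed

    // LinkedHashMap preserves insertion order on every JVM.
    Map<String, String> linked = new LinkedHashMap<String, String>();
    linked.put("ns1", "a");
    linked.put("ns2", "b");
    System.out.println(linked.keySet());  // always [ns1, ns2]
  }
}
{code}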



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4669) org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager fails using IBM java

2013-04-08 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4669:
-

Target Version/s: 2.0.3-alpha
  Status: Patch Available  (was: Open)

 org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager fails using IBM 
 java
 

 Key: HDFS-4669
 URL: https://issues.apache.org/jira/browse/HDFS-4669
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.3-alpha
Reporter: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-4669.patch


 The TestBlockPoolManager unit test fails with the following error message using 
 IBM Java:
 testFederationRefresh(org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager)
   Time elapsed: 27 sec   FAILURE!
 org.junit.ComparisonFailure: expected:<stop #[1
 refresh #2]> but was:<stop #[2
 refresh #1]>
 
 The root cause is:
 (1) If we want to remove the first NS and keep the second NS, the test should 
 call conf.set(DFSConfigKeys.DFS_NAMESERVICES, "ns2"), not 
 conf.set(DFSConfigKeys.DFS_NAMESERVICES, "ns1").
 (2) HashMap & HashSet store their entries in an unspecified order, so IBM 
 Java & Oracle Java can iterate keys in different orders, which makes the 
 ns1/ns2 ordering random. The code should use LinkedHashMap & LinkedHashSet 
 to preserve insertion order.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4669) org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager fails using IBM java

2013-04-08 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4669:
-

Attachment: HADOOP-4669.patch

 org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager fails using IBM 
 java
 

 Key: HDFS-4669
 URL: https://issues.apache.org/jira/browse/HDFS-4669
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.3-alpha
Reporter: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-4669.patch


 The TestBlockPoolManager unit test fails with the following error message using 
 IBM Java:
 testFederationRefresh(org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager)
   Time elapsed: 27 sec   FAILURE!
 org.junit.ComparisonFailure: expected:<stop #[1
 refresh #2]> but was:<stop #[2
 refresh #1]>
 
 The root cause is:
 (1) If we want to remove the first NS and keep the second NS, the test should 
 call conf.set(DFSConfigKeys.DFS_NAMESERVICES, "ns2"), not 
 conf.set(DFSConfigKeys.DFS_NAMESERVICES, "ns1").
 (2) HashMap & HashSet store their entries in an unspecified order, so IBM 
 Java & Oracle Java can iterate keys in different orders, which makes the 
 ns1/ns2 ordering random. The code should use LinkedHashMap & LinkedHashSet 
 to preserve insertion order.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4477) Secondary namenode may retain old tokens

2013-04-08 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625377#comment-13625377
 ] 

Daryn Sharp commented on HDFS-4477:
---

Yes, I'm still working on it but it's been on the back-burner.  I hope to get 
back to it this week.

 Secondary namenode may retain old tokens
 

 Key: HDFS-4477
 URL: https://issues.apache.org/jira/browse/HDFS-4477
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Kihwal Lee
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HDFS-4477.patch, HDFS-4477.patch


 Upon inspection of a fsimage created by a secondary namenode, we've 
 discovered it contains very old tokens. These are probably the ones that were 
 not explicitly canceled. It may be related to the optimization done to avoid 
 loading the fsimage from scratch on every checkpoint.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4525) Provide an API for knowing whether a file is closed or not.

2013-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625393#comment-13625393
 ] 

Hudson commented on HDFS-4525:
--

Integrated in Hadoop-Mapreduce-trunk #1393 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1393/])
HDFS-4525. Provide an API for knowing whether a file is closed or not. 
Contributed by SreeHari. (Revision 1465434)

 Result = SUCCESS
umamahesh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1465434
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java


 Provide an API for knowing whether a file is closed or not.
 --

 Key: HDFS-4525
 URL: https://issues.apache.org/jira/browse/HDFS-4525
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Uma Maheswara Rao G
Assignee: SreeHari
 Fix For: 3.0.0, 2.0.5-beta

 Attachments: HDFS-4525.patch, HDFS-4525.patch, HDFS-4525.patch, 
 HDFS-4525.patch, HDFS-4525.patch


 Currently the recoverLease API returns true if the file is already closed. 
 Otherwise it triggers internal lease recovery and returns false. It may take 
 some time for this recovery to really complete and for the file to be fully 
 closed, so there is no way for users to wait correctly until the file is 
 completely closed. 
 It would be good to have an API which says whether a file is closed or not, 
 so that users can rely on it and proceed if and only if the file is 
 completely closed.
 See the discussion in HBASE-7878

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3447) StandbyException should not be logged at ERROR level on server

2013-04-08 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625404#comment-13625404
 ] 

Daryn Sharp commented on HDFS-3447:
---

The logging was at the UGI level to:
# Combat code that silently swallows exceptions, which severely hampers 
debugging efforts
# Know the UGI context when an exception occurs

It can be a bit noisy, but if changed, I'd prefer dropping it to the INFO 
level.  If it becomes DEBUG, it prevents a post-mortem log scan after a 
problem is reported, since running in production with DEBUG enabled isn't 
feasible.

 StandbyException should not be logged at ERROR level on server
 --

 Key: HDFS-3447
 URL: https://issues.apache.org/jira/browse/HDFS-3447
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha
Affects Versions: 2.0.0-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
  Labels: newbie

 Currently, the standby NN will log StandbyExceptions at ERROR level any time 
 a client tries to connect to it. So, if the second NN in an HA pair is 
 active, the first NN will spew a lot of these errors in the log, as each 
 client gets redirected to the proper NN. Instead, this should be at INFO 
 level, and should probably be logged in a less scary manner (e.g. "Received 
 READ request from client 1.2.3.4, but in Standby state. Redirecting client to 
 other NameNode.")

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4669) org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager fails using IBM java

2013-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625453#comment-13625453
 ] 

Hadoop QA commented on HDFS-4669:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12577529/HADOOP-4669.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4196//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4196//console

This message is automatically generated.

 org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager fails using IBM 
 java
 

 Key: HDFS-4669
 URL: https://issues.apache.org/jira/browse/HDFS-4669
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.3-alpha
Reporter: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-4669.patch


 The TestBlockPoolManager unit test fails with the following error message using 
 IBM Java:
 testFederationRefresh(org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager)
   Time elapsed: 27 sec   FAILURE!
 org.junit.ComparisonFailure: expected:<stop #[1
 refresh #2]> but was:<stop #[2
 refresh #1]>
 
 The root cause is:
 (1) If we want to remove the first NS and keep the second NS, the test should 
 call conf.set(DFSConfigKeys.DFS_NAMESERVICES, "ns2"), not 
 conf.set(DFSConfigKeys.DFS_NAMESERVICES, "ns1").
 (2) HashMap & HashSet store their entries in an unspecified order, so IBM 
 Java & Oracle Java can iterate keys in different orders, which makes the 
 ns1/ns2 ordering random. The code should use LinkedHashMap & LinkedHashSet 
 to preserve insertion order.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4339) Persist inode id in fsimage and editlog

2013-04-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625580#comment-13625580
 ] 

Suresh Srinivas commented on HDFS-4339:
---

Comments:
# Remove "this should not happen". The exception later captures the reason why 
it should not happen.
# Add LOG.isDebugEnabled() checks around the added LOG.debug() calls.
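The second comment refers to the usual commons-logging guard pattern; a generic fragment (LOG, inode, and src are illustrative names, not from the patch):
{code}
// Guard the call so the message string is only built when DEBUG is on;
// otherwise the concatenation cost is paid even though nothing is logged.
if (LOG.isDebugEnabled()) {
  LOG.debug("Allocated inode " + inode.getId() + " for path " + src);
}
{code}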


 Persist inode id in fsimage and editlog
 ---

 Key: HDFS-4339
 URL: https://issues.apache.org/jira/browse/HDFS-4339
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: editsStored, HDFS-4339.patch, HDFS-4339.patch, 
 HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch, 
 HDFS-4339.patch, HDFS-4339.patch


  Persist inode id in fsimage and editlog and update offline viewers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4434) Provide a mapping from INodeId to INode

2013-04-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625615#comment-13625615
 ] 

Suresh Srinivas commented on HDFS-4434:
---

For a 1GB total Java heap size, the code attempts to allocate 32MB of memory 
for the GSet. However, after running the junit tests, the available memory is very low:
{noformat}
2013-04-06 00:44:04,716 INFO  util.GSet (LightWeightGSet.java:init(89)) - 
actual LightWeightGSet size is 4194304 entries
2013-04-06 00:44:04,717 INFO  util.GSet (LightWeightGSet.java:init(90)) - 
maxMemory 1011286016
2013-04-06 00:44:04,717 INFO  util.GSet (LightWeightGSet.java:init(91)) - 
freeMemory 20897432
2013-04-06 00:44:04,717 INFO  util.GSet (LightWeightGSet.java:init(92)) - 
totalMemory 1011286016
{noformat}

It looks like some of the tests are probably not setting their fields to null. 
This results in unused objects lingering on the heap and an OOM.
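For context, a hedged sketch of the JUnit 3 pattern the follow-up patches apply (class and field names illustrative): JUnit 3 retains every test instance until the whole run finishes, so instance fields must be released in tearDown() or their object graphs stay reachable.
{code}
import junit.framework.TestCase;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class TestExample extends TestCase {
  private MiniDFSCluster cluster;   // illustrative field

  @Override
  protected void setUp() throws Exception {
    cluster = new MiniDFSCluster.Builder(new Configuration()).build();
  }

  @Override
  protected void tearDown() throws Exception {
    if (cluster != null) {
      cluster.shutdown();
    }
    // Null the reference: JUnit 3 retains the TestCase instance itself,
    // so a non-null field would pin the whole cluster on the heap.
    cluster = null;
  }
}
{code}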

 Provide a mapping from INodeId to INode
 ---

 Key: HDFS-4434
 URL: https://issues.apache.org/jira/browse/HDFS-4434
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Suresh Srinivas
 Attachments: HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch


 This JIRA is to provide a way to access the INode via its id. The proposed 
 solution is to have an in-memory mapping from INodeId to INode. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4434) Provide a mapping from INodeId to INode

2013-04-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4434:
--

Attachment: HDFS-4434.patch

Updated patch to set the member objects to null during tearDown in JUnit 3 tests.

 Provide a mapping from INodeId to INode
 ---

 Key: HDFS-4434
 URL: https://issues.apache.org/jira/browse/HDFS-4434
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Suresh Srinivas
 Attachments: HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch


 This JIRA is to provide a way to access the INode via its id. The proposed 
 solution is to have an in-memory mapping from INodeId to INode. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4339) Persist inode id in fsimage and editlog

2013-04-08 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4339:
-

Attachment: HDFS-4339.patch

Thanks Suresh. The patch is updated.

 Persist inode id in fsimage and editlog
 ---

 Key: HDFS-4339
 URL: https://issues.apache.org/jira/browse/HDFS-4339
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: editsStored, HDFS-4339.patch, HDFS-4339.patch, 
 HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch, 
 HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch


  Persist inode id in fsimage and editlog and update offline viewers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4670) Style Hadoop HDFS web ui's with Twitter's bootstrap.

2013-04-08 Thread Elliott Clark (JIRA)
Elliott Clark created HDFS-4670:
---

 Summary: Style Hadoop HDFS web ui's with Twitter's bootstrap.
 Key: HDFS-4670
 URL: https://issues.apache.org/jira/browse/HDFS-4670
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Elliott Clark
Priority: Minor


A user's first experience of Apache Hadoop is often looking at the web ui.  
This should give the user confidence that the project is usable and recently 
current.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4670) Style Hadoop HDFS web ui's with Twitter's bootstrap.

2013-04-08 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625687#comment-13625687
 ] 

Elliott Clark commented on HDFS-4670:
-


[~adityaacharya], [~andrew.wang], and I have worked on a patch that adds 
Bootstrap to the HDFS web ui's.  A mapred version is planned in the future.

 Style Hadoop HDFS web ui's with Twitter's bootstrap.
 

 Key: HDFS-4670
 URL: https://issues.apache.org/jira/browse/HDFS-4670
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Elliott Clark
Priority: Minor

 A user's first experience of Apache Hadoop is often looking at the web ui.  
 This should give the user confidence that the project is usable and recently 
 current.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4670) Style Hadoop HDFS web ui's with Twitter's bootstrap.

2013-04-08 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HDFS-4670:


Attachment: HDFS-4670-0.patch

 Style Hadoop HDFS web ui's with Twitter's bootstrap.
 

 Key: HDFS-4670
 URL: https://issues.apache.org/jira/browse/HDFS-4670
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Elliott Clark
Priority: Minor
 Attachments: HDFS-4670-0.patch


 A user's first experience of Apache Hadoop is often looking at the web ui.  
 This should give the user confidence that the project is usable and recently 
 current.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4670) Style Hadoop HDFS web ui's with Twitter's bootstrap.

2013-04-08 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HDFS-4670:


Description: A user's first experience of Apache Hadoop is often looking at 
the web ui.  This should give the user confidence that the project is usable 
and relatively current.  (was: A user's first experience of Apache Hadoop is 
often looking at the web ui.  This should give the user confidence that the 
project is usable and recently current.)

 Style Hadoop HDFS web ui's with Twitter's bootstrap.
 

 Key: HDFS-4670
 URL: https://issues.apache.org/jira/browse/HDFS-4670
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Elliott Clark
Priority: Minor
 Attachments: HDFS-4670-0.patch


 A user's first experience of Apache Hadoop is often looking at the web ui.  
 This should give the user confidence that the project is usable and 
 relatively current.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4670) Style Hadoop HDFS web ui's with Twitter's bootstrap.

2013-04-08 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HDFS-4670:


Status: Patch Available  (was: Open)

 Style Hadoop HDFS web ui's with Twitter's bootstrap.
 

 Key: HDFS-4670
 URL: https://issues.apache.org/jira/browse/HDFS-4670
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Elliott Clark
Priority: Minor
 Attachments: HDFS-4670-0.patch


 A user's first experience of Apache Hadoop is often looking at the web ui.  
 This should give the user confidence that the project is usable and 
 relatively current.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HDFS-4670) Style Hadoop HDFS web ui's with Twitter's bootstrap.

2013-04-08 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan reassigned HDFS-4670:
-

Assignee: Elliott Clark

 Style Hadoop HDFS web ui's with Twitter's bootstrap.
 

 Key: HDFS-4670
 URL: https://issues.apache.org/jira/browse/HDFS-4670
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Minor
 Attachments: HDFS-4670-0.patch


 A user's first experience of Apache Hadoop is often looking at the web ui.  
 This should give the user confidence that the project is usable and 
 relatively current.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4671) DFSAdmin fetchImage should require superuser privilege even when security is not enabled

2013-04-08 Thread Stephen Chu (JIRA)
Stephen Chu created HDFS-4671:
-

 Summary: DFSAdmin fetchImage should require superuser privilege 
even when security is not enabled
 Key: HDFS-4671
 URL: https://issues.apache.org/jira/browse/HDFS-4671
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Stephen Chu


When security is not enabled, non-superusers can fetch the fsimage. This is 
problematic because the non-superusers can then process the fsimage for 
contents the user should not have access to.

For example, schu is not a superuser and does not have access to 
hdfs://user/hdfs/. However, schu can still fetch the fsimage and run the 
OfflineImageViewer on the fsimage to examine the contents of hdfs://user/hdfs/.

{code}
[schu@hdfs-vanilla-1 images]$ hadoop fs -ls /user/hdfs
ls: Permission denied: user=schu, access=READ_EXECUTE, 
inode=/user/hdfs:hdfs:supergroup:drwx--
[schu@hdfs-vanilla-1 images]$ hdfs dfsadmin -fetchImage ~/images/
13/04/08 12:45:20 INFO namenode.TransferFsImage: Opening connection to 
http://hdfs-vanilla-1.ent.cloudera.com:50070/getimage?getimage=1&txid=latest
13/04/08 12:45:21 INFO namenode.TransferFsImage: Transfer took 0.91s at 91.61 
KB/s
[schu@hdfs-vanilla-1 images]$ hdfs oiv -i ~/images/fsimage_0947148 
-o ~/images/oiv.out
{code}

When Kerberos authentication is enabled, superuser privilege is enforced:
{code}
[testuser@hdfs-secure-1 ~]$ hdfs dfsadmin -fetchImage ~/images/
13/04/08 12:48:23 INFO namenode.TransferFsImage: Opening connection to 
http://hdfs-secure-1.ent.cloudera.com:50070/getimage?getimage=1&txid=latest
13/04/08 12:48:23 ERROR security.UserGroupInformation: 
PriviledgedActionException as:testu...@ent.cloudera.com (auth:KERBEROS) 
cause:org.apache.hadoop.hdfs.server.namenode.TransferFsImage$HttpGetFailedException:
 Image transfer servlet at 
http://hdfs-secure-1.ent.cloudera.com:50070/getimage?getimage=1&txid=latest 
failed with status code 403
Response message:
Only Namenode, Secondary Namenode, and administrators may access this servlet
fetchImage: Image transfer servlet at 
http://hdfs-secure-1.ent.cloudera.com:50070/getimage?getimage=1&txid=latest 
failed with status code 403
Response message:
Only Namenode, Secondary Namenode, and administrators may access this servlet
[testuser@hdfs-secure-1 ~]$ 
{code}

We should still enforce privilege checks when Kerberos authentication is 
disabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3981) access time is set without holding FSNamesystem write lock

2013-04-08 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625729#comment-13625729
 ] 

Todd Lipcon commented on HDFS-3981:
---

Failed test looks like HDFS-3267 (unrelated). I'll commit this momentarily.

 access time is set without holding FSNamesystem write lock
 --

 Key: HDFS-3981
 URL: https://issues.apache.org/jira/browse/HDFS-3981
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.3, 2.0.3-alpha, 0.23.5
Reporter: Xiaobo Peng
Assignee: Xiaobo Peng
 Attachments: HDFS-3981-branch-0.23.4.patch, 
 HDFS-3981-branch-0.23.patch, HDFS-3981-branch-2.patch, HDFS-3981-trunk.patch, 
 hdfs-3981.txt


 An incorrect condition in {{FSNamesystem.getBlockLocations()}} can lead to 
 updating access times without holding the write lock. In most cases this 
 condition will force {{FSNamesystem.getBlockLocations()}} to hold the write 
 lock, even if times do not need to be updated.
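For illustration, a generic sketch of the locking discipline at issue (plain Java with ReentrantReadWriteLock, not the committed FSNamesystem fix): check under the read lock whether the access time needs updating, and mutate it only under the write lock.
{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class AccessTimeDemo {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private long accessTime;
  private static final long PRECISION = 3600 * 1000L; // illustrative 1h

  public void read() {
    long now = System.currentTimeMillis();
    boolean needsUpdate;
    lock.readLock().lock();
    try {
      // Decide under the read lock whether an update is needed at all.
      needsUpdate = now > accessTime + PRECISION;
    } finally {
      lock.readLock().unlock();
    }
    if (needsUpdate) {
      lock.writeLock().lock();
      try {
        // Re-check after reacquiring: another thread may have updated it.
        if (now > accessTime + PRECISION) {
          accessTime = now;
        }
      } finally {
        lock.writeLock().unlock();
      }
    }
  }
}
{code}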

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3981) access time is set without holding FSNamesystem write lock

2013-04-08 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-3981:
--

Fix Version/s: 2.0.5-beta
   3.0.0
 Hadoop Flags: Reviewed

Committed to trunk and branch-2 (for 2.0.5). 0.23 maintainers -- you guys want 
to backport to 0.23? I'll leave it open just in case.

 access time is set without holding FSNamesystem write lock
 --

 Key: HDFS-3981
 URL: https://issues.apache.org/jira/browse/HDFS-3981
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.3, 2.0.3-alpha, 0.23.5
Reporter: Xiaobo Peng
Assignee: Xiaobo Peng
 Fix For: 3.0.0, 2.0.5-beta

 Attachments: HDFS-3981-branch-0.23.4.patch, 
 HDFS-3981-branch-0.23.patch, HDFS-3981-branch-2.patch, HDFS-3981-trunk.patch, 
 hdfs-3981.txt


 An incorrect condition in {{FSNamesystem.getBlockLocations()}} can lead to 
 updating access times without holding the write lock. In most cases this 
 condition will force {{FSNamesystem.getBlockLocations()}} to hold the write 
 lock, even if times do not need to be updated.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3447) StandbyException should not be logged at ERROR level on server

2013-04-08 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625734#comment-13625734
 ] 

Todd Lipcon commented on HDFS-3447:
---

It seems like the code which is swallowing the exceptions should just be 
changed to log before swallowing, rather than putting this generic log here. 
Perhaps we can introduce a new overload of doAs() which takes a LOG object, for 
the places where we really want to log the exception alongside the UGI string?

If we change it to DEBUG, you can still set up your log4j properties to enable 
this particular class's DEBUG level without doing so everywhere.
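For reference, per-class DEBUG in log4j.properties might look like the following hedged config sketch (appender configuration elided):
{noformat}
# Keep the daemon at INFO overall...
log4j.rootLogger=INFO,console
# ...but enable DEBUG for just the UGI class
log4j.logger.org.apache.hadoop.security.UserGroupInformation=DEBUG
{noformat}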

 StandbyException should not be logged at ERROR level on server
 --

 Key: HDFS-3447
 URL: https://issues.apache.org/jira/browse/HDFS-3447
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha
Affects Versions: 2.0.0-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
  Labels: newbie

 Currently, the standby NN will log StandbyExceptions at ERROR level any time 
 a client tries to connect to it. So, if the second NN in an HA pair is 
 active, the first NN will spew a lot of these errors in the log, as each 
 client gets redirected to the proper NN. Instead, this should be at INFO 
 level, and should probably be logged in a less scary manner (e.g. "Received 
 READ request from client 1.2.3.4, but in Standby state. Redirecting client to 
 other NameNode.")

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4339) Persist inode id in fsimage and editlog

2013-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625746#comment-13625746
 ] 

Hadoop QA commented on HDFS-4339:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12577583/HDFS-4339.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4198//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4198//console

This message is automatically generated.

 Persist inode id in fsimage and editlog
 ---

 Key: HDFS-4339
 URL: https://issues.apache.org/jira/browse/HDFS-4339
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: editsStored, HDFS-4339.patch, HDFS-4339.patch, 
 HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch, 
 HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch


  Persist inode id in fsimage and editlog and update offline viewers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3981) access time is set without holding FSNamesystem write lock

2013-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625747#comment-13625747
 ] 

Hudson commented on HDFS-3981:
--

Integrated in Hadoop-trunk-Commit #3576 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3576/])
HDFS-3981. Fix handling of FSN lock in getBlockLocations. Contributed by 
Xiaobo Peng and Todd Lipcon. (Revision 1465751)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1465751
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/MockitoUtil.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSetTimes.java


 access time is set without holding FSNamesystem write lock
 --

 Key: HDFS-3981
 URL: https://issues.apache.org/jira/browse/HDFS-3981
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.3, 2.0.3-alpha, 0.23.5
Reporter: Xiaobo Peng
Assignee: Xiaobo Peng
 Fix For: 3.0.0, 2.0.5-beta

 Attachments: HDFS-3981-branch-0.23.4.patch, 
 HDFS-3981-branch-0.23.patch, HDFS-3981-branch-2.patch, HDFS-3981-trunk.patch, 
 hdfs-3981.txt


 An incorrect condition in {{FSNamesystem.getBlockLocations()}} can lead to 
 updating access times without holding the write lock. In most cases this 
 condition will force {{FSNamesystem.getBlockLocations()}} to hold the write 
 lock, even if times do not need to be updated.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4434) Provide a mapping from INodeId to INode

2013-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625749#comment-13625749
 ] 

Hadoop QA commented on HDFS-4434:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12577579/HDFS-4434.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestCheckpoint

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4197//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4197//console

This message is automatically generated.

 Provide a mapping from INodeId to INode
 ---

 Key: HDFS-4434
 URL: https://issues.apache.org/jira/browse/HDFS-4434
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Suresh Srinivas
 Attachments: HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch


 This JIRA is to provide a way to access the INode via its id. The proposed 
 solution is to have an in-memory mapping from INodeId to INode. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4434) Provide a mapping from INodeId to INode

2013-04-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4434:
--

Attachment: HDFS-4434.patch

Updated patch with similar changes to set the MiniDFSCluster reference to null 
in TestCheckpoint.

 Provide a mapping from INodeId to INode
 ---

 Key: HDFS-4434
 URL: https://issues.apache.org/jira/browse/HDFS-4434
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Suresh Srinivas
 Attachments: HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch


 This JIRA is to provide a way to access the INode via its id. The proposed 
 solution is to have an in-memory mapping from INodeId to INode. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4434) Provide a mapping from INodeId to INode

2013-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625795#comment-13625795
 ] 

Hadoop QA commented on HDFS-4434:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12577616/HDFS-4434.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4200//console

This message is automatically generated.

 Provide a mapping from INodeId to INode
 ---

 Key: HDFS-4434
 URL: https://issues.apache.org/jira/browse/HDFS-4434
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Suresh Srinivas
 Attachments: HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch


 This JIRA is to provide a way to access the INode via its id. The proposed 
 solution is to have an in-memory mapping from INodeId to INode. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4672) Support tiered storage policies

2013-04-08 Thread Andrew Purtell (JIRA)
Andrew Purtell created HDFS-4672:


 Summary: Support tiered storage policies
 Key: HDFS-4672
 URL: https://issues.apache.org/jira/browse/HDFS-4672
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, libhdfs, namenode
Reporter: Andrew Purtell


We would like to be able to create certain files on certain storage device 
classes (e.g. spinning media, solid state devices, RAM disk, non-volatile 
memory). HDFS-2832 enables heterogeneous storage at the DataNode, so the 
NameNode can gain awareness of what different storage options are available in 
the pool and where they are located, but no API is provided for clients or 
block placement plugins to perform device aware block placement. We would like 
to propose a set of extensions that also have broad applicability to use cases 
where storage device affinity is important:
 
- Add an enum of generic storage device classes, borrowing from the current 
taxonomy of the storage industry (a hypothetical sketch follows at the end of 
this proposal)
 
- Augment DataNode volume metadata in storage reports with this enum
 
- Extend the namespace so pluggable block policies can be specified on a 
directory and storage device class can be tracked in the Inode. Perhaps this 
could be a larger discussion on adding support for extended attributes in the 
HDFS namespace. The Inode should track both the storage device class hint and 
the current actual storage device class. FileStatus should expose this 
information (or xattrs in general) to clients.
 
- Extend the pluggable block policy framework so policies can also consider, 
and specify, affinity for a particular storage device class
 
- Extend the file creation API to accept a storage device class affinity hint. 
Such a hint can be supplied directly as a parameter, or, if we are considering 
extended attribute support, then instead as one of a set of xattrs. The hint 
would be stored in the namespace and also used by the client to indicate to the 
NameNode/block placement policy/DataNode constraints on block placement. 
Furthermore, if xattrs or device storage class affinity hints are associated 
with directories, then the NameNode should provide the storage device affinity 
hint to the client in the create API response, so the client can provide the 
appropriate hint to DataNodes when writing new blocks.
 
- The list of candidate DataNodes for new blocks supplied by the NameNode to 
clients should be weighted/sorted by availability of the desired storage device 
class. 
 
- Block replication should consider storage device affinity hints. If a client 
move()s a file from a location under a path with affinity hint X to under a 
path with affinity hint Y, then all blocks currently residing on media X should 
be eventually replicated onto media Y with the then excess replicas on media X 
deleted.
 
- Introduce the concept of a degraded path: a path can be degraded if a block 
placement policy is forced to abandon a constraint in order to persist the 
block, when there may not be available space on the desired device class, or 
to maintain the minimum necessary replication factor. This concept is 
distinct from a corrupt path, where one or more blocks are missing. Paths in 
degraded state should be periodically reevaluated for re-replication.
 
- The FSShell should be extended with commands for changing the storage device 
class hint for a directory or file. 
 
- Clients like DistCP which compare metadata should be extended to be aware of 
the storage device class hint. For DistCP specifically, there should be an 
option to ignore the storage device class hints, enabled by default.
 
Suggested semantics:
 
- The default storage device class should be the null class, or simply the 
“default class”, for all cases where a hint is not available. This should be 
configurable. hdfs-defaults.xml could provide the default as spinning media.
 
- A storage device class hint should be provided (and is necessary) only when 
the default is not sufficient.
 
- For backwards compatibility, any FSImage or edit log entry lacking a storage 
device class hint is interpreted as having affinity for the null class.
 
- All blocks for a given file share the same storage device class. If the 
replication factor for this file is increased the replicas should all be placed 
on the same storage device class.
 
- If one or more blocks for a given file cannot be placed on the required 
device class, then the file is marked as degraded. Files in degraded state 
should be periodically reevaluated for re-replication. 
 
- A directory and path can have only one storage device affinity hint. If the 
file inode specifies a hint, it is used; otherwise we walk up the path until 
a hint is found and use that one; otherwise the default storage class is used.
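To make the enum bullet above concrete, a hypothetical sketch (type and constant names invented for illustration; not part of any patch):
{code}
// Hypothetical enum of generic storage device classes, borrowing from the
// storage industry taxonomy discussed above. NULL_CLASS is the configurable
// default used whenever no hint is present (e.g. legacy FSImage entries).
public enum StorageDeviceClass {
  NULL_CLASS,          // "default class": no affinity expressed
  SPINNING_DISK,       // conventional rotating media
  SSD,                 // solid state device
  RAM_DISK,            // memory-backed volume
  NON_VOLATILE_MEMORY  // e.g. NVRAM / persistent memory
}
{code}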

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Updated] (HDFS-4434) Provide a mapping from INodeId to INode

2013-04-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4434:
--

Attachment: HDFS-4434.patch

 Provide a mapping from INodeId to INode
 ---

 Key: HDFS-4434
 URL: https://issues.apache.org/jira/browse/HDFS-4434
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Suresh Srinivas
 Attachments: HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch


 This JIRA is to provide a way to access the INode via its id. The proposed 
 solution is to have an in-memory mapping from INodeId to INode. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3940) Add GSet#clear method

2013-04-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-3940:
--

Assignee: Suresh Srinivas  (was: Eli Collins)

 Add GSet#clear method
 -

 Key: HDFS-3940
 URL: https://issues.apache.org/jira/browse/HDFS-3940
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Suresh Srinivas
Priority: Minor

 Per HDFS-3936 it would be useful if GSet had a clear method so BM#close could 
 clear out the LightWeightGSet.
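A hedged sketch of what the addition might look like; the existing GSet methods are shown as in Hadoop's interface, with clear() as the proposed addition:
{code}
// GSet is Hadoop's minimal hash-set-of-keyed-elements interface; adding
// clear() would let BlockManager#close() release the blocks map in one call.
public interface GSet<K, E extends K> extends Iterable<E> {
  int size();
  boolean contains(K key);
  E get(K key);
  E put(E element);
  E remove(K key);
  void clear();  // proposed: remove all entries so the heap can be reclaimed
}
{code}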

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3447) StandbyException should not be logged at ERROR level on server

2013-04-08 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625846#comment-13625846
 ] 

Daryn Sharp commented on HDFS-3447:
---

I know it's noisy at times, but it's helped me debug so many problems...  Prior 
to HADOOP-7853, it's not that it used to log at DEBUG; it didn't log at all.  I 
generally agree the caller should be expected to log the exception, but it 
won't be easy to track down every misbehaving caller (in the sense of not 
logging when it should).  Plus callers rarely, if ever, prepend the UGI to the 
message when they log.

Kihwal and I took a look at a few options.  If it's moved to DEBUG, enabling 
that in UGI is going to spew a lot of undesired messages.  If it's moved to 
INFO, with the default for UGI to be WARN, then other valuable logging will be 
lost.  Adding another variant of doAs is undesirable because it's effectively 
the same as removing the logging entirely; plus someone like me would want 
every caller to pass the logging object.

So what we came up with is: would it make sense to have a second logger object 
in UGI, e.g. a detailed logger, that would be used by doAs?

 StandbyException should not be logged at ERROR level on server
 --

 Key: HDFS-3447
 URL: https://issues.apache.org/jira/browse/HDFS-3447
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha
Affects Versions: 2.0.0-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
  Labels: newbie

 Currently, the standby NN will log StandbyExceptions at ERROR level any time 
 a client tries to connect to it. So, if the second NN in an HA pair is 
 active, the first NN will spew a lot of these errors in the log, as each 
 client gets redirected to the proper NN. Instead, this should be at INFO 
 level, and should probably be logged in a less scary manner (e.g. "Received 
 READ request from client 1.2.3.4, but in Standby state. Redirecting client to 
 other NameNode.")

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4670) Style Hadoop HDFS web ui's with Twitter's bootstrap.

2013-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625849#comment-13625849
 ] 

Hadoop QA commented on HDFS-4670:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12577594/HDFS-4670-0.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 2 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.ha.TestHAWebUI
  org.apache.hadoop.hdfs.TestMissingBlocksAlert

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4199//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4199//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4199//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4199//console

This message is automatically generated.

 Style Hadoop HDFS web ui's with Twitter's bootstrap.
 

 Key: HDFS-4670
 URL: https://issues.apache.org/jira/browse/HDFS-4670
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Minor
 Attachments: HDFS-4670-0.patch


 A users' first experience of Apache Hadoop is often looking at the web ui.  
 This should give the user confidence that the project is usable and 
 relatively current.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-2847) NamenodeProtocol#getBlocks() should use DatanodeID as an argument instead of DatanodeInfo

2013-04-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-2847:
--

Hadoop Flags: Incompatible change

Marking this issue as incompatible given the change from DatanodeInfo to 
DatanodeID.

 NamenodeProtocol#getBlocks() should use DatanodeID as an argument instead of 
 DatanodeInfo
 -

 Key: HDFS-2847
 URL: https://issues.apache.org/jira/browse/HDFS-2847
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 0.24.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 0.24.0

 Attachments: HDFS-2847.txt, HDFS-2847.txt, HDFS-2847.txt


 DatanodeID is sufficient for identifying a Datanode. DatanodeInfo has a lot 
 of information that is not required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4671) DFSAdmin fetchImage should require superuser privilege even when security is not enabled

2013-04-08 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625861#comment-13625861
 ] 

Daryn Sharp commented on HDFS-4671:
---

I can see the argument for this change, but the user can sidestep the 
authorization by setting the env HADOOP_USER_NAME=hdfs, so I'm not sure there's 
much value.

 DFSAdmin fetchImage should require superuser privilege even when security is 
 not enabled
 

 Key: HDFS-4671
 URL: https://issues.apache.org/jira/browse/HDFS-4671
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Stephen Chu

 When security is not enabled, non-superusers can fetch the fsimage. This is 
 problematic because the non-superusers can then process the fsimage for 
 contents the user should not have access to.
 For example, schu is not a superuser and does not have access to 
 hdfs://user/hdfs/. However, schu can still fetch the fsimage and run the 
 OfflineImageViewer on the fsimage to examine the contents of 
 hdfs://user/hdfs/.
 {code}
 [schu@hdfs-vanilla-1 images]$ hadoop fs -ls /user/hdfs
 ls: Permission denied: user=schu, access=READ_EXECUTE, 
 inode=/user/hdfs:hdfs:supergroup:drwx------
 [schu@hdfs-vanilla-1 images]$ hdfs dfsadmin -fetchImage ~/images/
 13/04/08 12:45:20 INFO namenode.TransferFsImage: Opening connection to 
 http://hdfs-vanilla-1.ent.cloudera.com:50070/getimage?getimage=1&txid=latest
 13/04/08 12:45:21 INFO namenode.TransferFsImage: Transfer took 0.91s at 91.61 
 KB/s
 [schu@hdfs-vanilla-1 images]$ hdfs oiv -i 
 ~/images/fsimage_0947148 -o ~/images/oiv.out
 {code}
 When kerberos authentication is enabled, superuser privilege is enforced:
 {code}
 [testuser@hdfs-secure-1 ~]$ hdfs dfsadmin -fetchImage ~/images/
 13/04/08 12:48:23 INFO namenode.TransferFsImage: Opening connection to 
 http://hdfs-secure-1.ent.cloudera.com:50070/getimage?getimage=1&txid=latest
 13/04/08 12:48:23 ERROR security.UserGroupInformation: 
 PriviledgedActionException as:testu...@ent.cloudera.com (auth:KERBEROS) 
 cause:org.apache.hadoop.hdfs.server.namenode.TransferFsImage$HttpGetFailedException:
  Image transfer servlet at 
 http://hdfs-secure-1.ent.cloudera.com:50070/getimage?getimage=1&txid=latest 
 failed with status code 403
 Response message:
 Only Namenode, Secondary Namenode, and administrators may access this servlet
 fetchImage: Image transfer servlet at 
 http://hdfs-secure-1.ent.cloudera.com:50070/getimage?getimage=1&txid=latest 
 failed with status code 403
 Response message:
 Only Namenode, Secondary Namenode, and administrators may access this servlet
 [testuser@hdfs-secure-1 ~]$ 
 {code}
 We should still enforce checking privileges when kerberos authentication is 
 disabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4339) Persist inode id in fsimage and editlog

2013-04-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625865#comment-13625865
 ] 

Suresh Srinivas commented on HDFS-4339:
---

+1 for the change. I will commit it shortly.

 Persist inode id in fsimage and editlog
 ---

 Key: HDFS-4339
 URL: https://issues.apache.org/jira/browse/HDFS-4339
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: editsStored, HDFS-4339.patch, HDFS-4339.patch, 
 HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch, 
 HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch


  Persist inode id in fsimage and editlog and update offline viewers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3940) Add Gset#clear method

2013-04-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-3940:
--

Attachment: HDFS-3940.patch

Attached patch makes the following changes (a rough sketch of the interface 
change follows below):
# Adds clear() to GSet and to the implementations of GSet.
# When the namenode is shut down, the block map and FSDirectory clear their maps.
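
A rough sketch of the shape of the change (interface as in 
org.apache.hadoop.util.GSet; the authoritative version is the attached patch):

{code}
public interface GSet<K, E extends K> extends Iterable<E> {
  int size();
  boolean contains(K key);
  E get(K key);
  E put(E element);
  E remove(K key);
  /** Remove all elements so the backing memory can be reclaimed. */
  void clear();
}

// LightWeightGSet can then implement it by dropping every bucket reference:
//   Arrays.fill(entries, null);  // entries is the internal hash-bucket array
//   size = 0;
{code}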

 Add Gset#clear method
 -

 Key: HDFS-3940
 URL: https://issues.apache.org/jira/browse/HDFS-3940
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Suresh Srinivas
Priority: Minor
 Attachments: HDFS-3940.patch


 Per HDFS-3936 it would be useful if GSet has a clear method so BM#close could 
 clear out the LightWeightGSet.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3940) Add Gset#clear method

2013-04-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-3940:
--

Status: Patch Available  (was: Open)

 Add Gset#clear method
 -

 Key: HDFS-3940
 URL: https://issues.apache.org/jira/browse/HDFS-3940
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Suresh Srinivas
Priority: Minor
 Attachments: HDFS-3940.patch


 Per HDFS-3936 it would be useful if GSet has a clear method so BM#close could 
 clear out the LightWeightGSet.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3940) Add Gset#clear method

2013-04-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-3940:
--

Attachment: HDFS-3940.patch

Updated patch with some logs removed.

 Add Gset#clear method
 -

 Key: HDFS-3940
 URL: https://issues.apache.org/jira/browse/HDFS-3940
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Suresh Srinivas
Priority: Minor
 Attachments: HDFS-3940.patch, HDFS-3940.patch


 Per HDFS-3936 it would be useful if GSet has a clear method so BM#close could 
 clear out the LightWeightGSet.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4670) Style Hadoop HDFS web ui's with Twitter's bootstrap.

2013-04-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625882#comment-13625882
 ] 

Suresh Srinivas commented on HDFS-4670:
---

bq. This should give the user confidence that the project is usable and 
relatively current.
Can you describe the proposed changes better? I have a hard time understanding 
what usability issues you are solving. Also, please add information on what is 
not current.

 Style Hadoop HDFS web ui's with Twitter's bootstrap.
 

 Key: HDFS-4670
 URL: https://issues.apache.org/jira/browse/HDFS-4670
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Minor
 Attachments: HDFS-4670-0.patch


 A users' first experience of Apache Hadoop is often looking at the web ui.  
 This should give the user confidence that the project is usable and 
 relatively current.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4258) Rename of Being Written Files

2013-04-08 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625905#comment-13625905
 ] 

Daryn Sharp commented on HDFS-4258:
---

Sorry for the delay, I've been preoccupied.  There are security issues to 
consider: guessing a path won't bypass permission checks.  Whereas using an 
inode mapping will bypass permissions if the apis and implementation are not 
carefully considered.  I'll check out the referenced jira.

 Rename of Being Written Files
 -

 Key: HDFS-4258
 URL: https://issues.apache.org/jira/browse/HDFS-4258
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client, namenode
Affects Versions: 3.0.0
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Brandon Li
 Attachments: HDFS-4258.patch, HDFS-4258.patch, HDFS-4258.patch, 
 HDFS-4258.patch


 When a being-written file or its ancestor directories are renamed, the path 
 in the file lease is also renamed.  Then the writer of the file usually will 
 fail since the file path in the writer is not updated.
 Moreover, I think there is a bug as follows:
 # Client writes 0's to F_0=/foo/file and writes 1's to F_1=/bar/file at 
 the same time.
 # Rename /bar to /baz
 # Rename /foo to /bar
 Then, writing to F_0 will fail since /foo/file does not exist anymore, but 
 writing to F_1 may succeed since /bar/file exists as a different file.  In 
 such a case, the content of /bar/file could be partly 0's and partly 1's.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4673) Renaming file in subdirectory of a snapshotted directory does not work.

2013-04-08 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-4673:
---

 Summary: Renaming file in subdirectory of a snapshotted directory 
does not work.
 Key: HDFS-4673
 URL: https://issues.apache.org/jira/browse/HDFS-4673
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: Snapshot (HDFS-2802)
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: Snapshot (HDFS-2802)


Steps to repro:
# mkdir /1
# Allow snapshot on /1
# mkdir /1/2
# Put file /1/2/f1
# Take snapshot snap1 of /1
# Rename /1/2/f1 to /1/2/f2

Fails with exception in INodeDirectory.replaceSelf
{code}
  Preconditions.checkArgument(parent != null, "parent is null, this=%s", 
this);
{code}
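
For reference, the repro translates roughly into the following test fragment 
(usual org.apache.hadoop.hdfs test imports omitted; the snapshot-branch 
DistributedFileSystem methods allowSnapshot/createSnapshot are assumed):

{code}
Configuration conf = new Configuration();
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
try {
  DistributedFileSystem dfs = cluster.getFileSystem();
  dfs.mkdirs(new Path("/1/2"));                        // steps 1 and 3
  dfs.allowSnapshot(new Path("/1"));                   // step 2
  DFSTestUtil.createFile(dfs, new Path("/1/2/f1"), 1024L, (short) 1, 0L); // step 4
  dfs.createSnapshot(new Path("/1"), "snap1");         // step 5
  // Step 6 fails with the Preconditions check in INodeDirectory.replaceSelf:
  dfs.rename(new Path("/1/2/f1"), new Path("/1/2/f2"));
} finally {
  cluster.shutdown();
}
{code}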

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4673) Renaming file in subdirectory of a snapshotted directory does not work.

2013-04-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4673:


Attachment: HDFS-4673.patch

When the source and destination directory are the same, unlinking the target 
file from the source directory renders dstIIP and dstParent invalid, since the 
directory is replaced with an INodeDirectoryWithSnapshot.

The patch conditionally refreshes dstIIP and dstParent after unlinking the 
source file.

 Renaming file in subdirectory of a snapshotted directory does not work.
 ---

 Key: HDFS-4673
 URL: https://issues.apache.org/jira/browse/HDFS-4673
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: Snapshot (HDFS-2802)
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: Snapshot (HDFS-2802)

 Attachments: HDFS-4673.patch


 Steps to repro:
 # mkdir /1
 # Allow snapshot on /1
 # mkdir /1/2
 # Put file /1/2/f1
 # Take snapshot snap1 of /1
 # Rename /1/2/f1 to /1/2/f2
 Fails with exception in INodeDirectory.replaceSelf
 {code}
   Preconditions.checkArgument(parent != null, "parent is null, this=%s", 
 this);
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4670) Style Hadoop HDFS web ui's with Twitter's bootstrap.

2013-04-08 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625913#comment-13625913
 ] 

Elliott Clark commented on HDFS-4670:
-

Mostly, the styling isn't current.  The web UI doesn't have good typography, 
and things are not pleasing to the eye.  This leads the user to think the web 
pages haven't seen any developer love in quite a while.

 Style Hadoop HDFS web ui's with Twitter's bootstrap.
 

 Key: HDFS-4670
 URL: https://issues.apache.org/jira/browse/HDFS-4670
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Minor
 Attachments: HDFS-4670-0.patch


 A users' first experience of Apache Hadoop is often looking at the web ui.  
 This should give the user confidence that the project is usable and 
 relatively current.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4489) Use InodeID as as an identifier of a file in HDFS protocols and APIs

2013-04-08 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625918#comment-13625918
 ] 

Daryn Sharp commented on HDFS-4489:
---

I've only skimmed this jira, but a 9% increase is fairly substantial for large 
namespaces.  Are there any performance metrics available?

 Use InodeID as as an identifier of a file in HDFS protocols and APIs
 

 Key: HDFS-4489
 URL: https://issues.apache.org/jira/browse/HDFS-4489
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Brandon Li
Assignee: Brandon Li

 The benefit of using InodeID to uniquely identify a file can be multiple 
 folds. Here are a few of them:
 1. uniquely identify a file cross rename, related JIRAs include HDFS-4258, 
 HDFS-4437.
 2. modification checks in tools like distcp. Since a file could have been 
 replaced, or another file renamed to its path, the file name and size 
 combination is not reliable, but the combination of file id and size is unique.
 3. id based protocol support (e.g., NFS)
 4. to make the pluggable block placement policy use fileid instead of 
 filename (HDFS-385).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4434) Provide a mapping from INodeId to INode

2013-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625932#comment-13625932
 ] 

Hadoop QA commented on HDFS-4434:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12577621/HDFS-4434.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 7 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestCheckpoint

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4201//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4201//console

This message is automatically generated.

 Provide a mapping from INodeId to INode
 ---

 Key: HDFS-4434
 URL: https://issues.apache.org/jira/browse/HDFS-4434
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Suresh Srinivas
 Attachments: HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch


 This JIRA is to provide a way to access the INode via its id. The proposed 
 solution is to have an in-memory mapping from INodeId to INode. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4434) Provide a mapping from INodeId to INode

2013-04-08 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625937#comment-13625937
 ] 

Daryn Sharp commented on HDFS-4434:
---

I've only skimmed this patch, but my quick read is that resolvePath expands 
/.reserved/.inodes/NNN into the actual path, then runs the normal permission 
checks?  That's not quite how posix fds are handled, but I guess it's ok.

However...  What happens if I start probing inode numbers?  Can I find every 
path in the namespace, possibly via exceptions?
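
My reading of the expansion, as a sketch (method names assumed, not the actual 
patch code):

{code}
// If src is of the form /.reserved/.inodes/NNN, expand it to the real path;
// the normal permission checks then run against the expanded path.
private static String resolvePath(String src, FSDirectory dir)
    throws FileNotFoundException {
  final String prefix = "/.reserved/.inodes/";
  if (!src.startsWith(prefix)) {
    return src;                                // ordinary path, no translation
  }
  final long inodeId = Long.parseLong(src.substring(prefix.length()));
  final INode inode = dir.getInode(inodeId);   // in-memory INodeId -> INode map
  if (inode == null) {
    throw new FileNotFoundException("No inode with id " + inodeId);
  }
  return inode.getFullPathName();
}
{code}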

 Provide a mapping from INodeId to INode
 ---

 Key: HDFS-4434
 URL: https://issues.apache.org/jira/browse/HDFS-4434
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Suresh Srinivas
 Attachments: HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch


 This JIRA is to provide a way to access the INode via its id. The proposed 
 solution is to have an in-memory mapping from INodeId to INode. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4661) code style fixes suggested by Nicholas

2013-04-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625954#comment-13625954
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4661:
--

- Please remove getShortCircuitFdsForRead from findbugsExcludeFile.xml.

 code style fixes suggested by Nicholas
 --

 Key: HDFS-4661
 URL: https://issues.apache.org/jira/browse/HDFS-4661
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, hdfs-client, performance
Reporter: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-4661.001.patch, HDFS-4661.002.patch, 
 HDFS-4661.003.patch


 * The log statement in DataXceiver 
 BlockSender.ClientTraceLog.info(REQUEST_SHORT_CIRCUIT_FDS...) could be 
 cleaned up somewhat.
 * use {{FsDatasetSpi#getBlockInputStream}} and 
 {{FsDatasetSpi#getMetaDataInputStream}} rather than adding 
 {{FsDatasetSpi#getShortCircuitFdsForRead}}
 * {{FileInputStreamCache.Key.equals}}: use short-circuit boolean AND
 * In FileInputStreamCache.CacheCleaner, the code iter = 
 map.entries().iterator() can be removed with the same result since the 
 (previous) first element must be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4434) Provide a mapping from INodeId to INode

2013-04-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625958#comment-13625958
 ] 

Suresh Srinivas commented on HDFS-4434:
---

bq. However... What happens if I start probing inode numbers? Can I find every 
path in the namespace, possibly via exceptions?
Exception might print the regular path corresponding to a given inode ID. Do 
you see any issue with it?

 Provide a mapping from INodeId to INode
 ---

 Key: HDFS-4434
 URL: https://issues.apache.org/jira/browse/HDFS-4434
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Suresh Srinivas
 Attachments: HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch


 This JIRA is to provide a way to access the INode via its id. The proposed 
 solution is to have an in-memory mapping from INodeId to INode. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4673) Renaming file in subdirectory of a snapshotted directory does not work.

2013-04-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4673:


Attachment: HDFS-4673.patch

Missed fixing one path, done now (Thanks Jing for the catch!)

 Renaming file in subdirectory of a snapshotted directory does not work.
 ---

 Key: HDFS-4673
 URL: https://issues.apache.org/jira/browse/HDFS-4673
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: Snapshot (HDFS-2802)
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: Snapshot (HDFS-2802)

 Attachments: HDFS-4673.patch


 Steps to repro:
 # mkdir /1
 # Allow snapshot on /1
 # mkdir /1/2
 # Put file /1/2/f1
 # Take snapshot snap1 of /1
 # Rename /1/2/f1 to /1/2/f2
 Fails with exception in INodeDirectory.replaceSelf
 {code}
   Preconditions.checkArgument(parent != null, "parent is null, this=%s", 
 this);
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4673) Renaming file in subdirectory of a snapshotted directory does not work.

2013-04-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4673:


Attachment: (was: HDFS-4673.patch)

 Renaming file in subdirectory of a snapshotted directory does not work.
 ---

 Key: HDFS-4673
 URL: https://issues.apache.org/jira/browse/HDFS-4673
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: Snapshot (HDFS-2802)
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: Snapshot (HDFS-2802)

 Attachments: HDFS-4673.patch


 Steps to repro:
 # mkdir /1
 # Allow snapshot on /1
 # mkdir /1/2
 # Put file /1/2/f1
 # Take snapshot snap1 of /1
 # Rename /1/2/f1 to /1/2/f2
 Fails with exception in INodeDirectory.replaceSelf
 {code}
   Preconditions.checkArgument(parent != null, "parent is null, this=%s", 
 this);
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4673) Renaming file in subdirectory of a snapshotted directory does not work.

2013-04-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4673:


Attachment: HDFS-4673.2.patch

 Renaming file in subdirectory of a snapshotted directory does not work.
 ---

 Key: HDFS-4673
 URL: https://issues.apache.org/jira/browse/HDFS-4673
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: Snapshot (HDFS-2802)
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: Snapshot (HDFS-2802)

 Attachments: HDFS-4673.2.patch, HDFS-4673.patch


 Steps to repro:
 # mkdir /1
 # Allow snapshot on /1
 # mkdir /1/2
 # Put file /1/2/f1
 # Take snapshot snap1 of /1
 # Rename /1/2/f1 to /1/2/f2
 Fails with exception in INodeDirectory.replaceSelf
 {code}
   Preconditions.checkArgument(parent != null, "parent is null, this=%s", 
 this);
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4489) Use InodeID as as an identifier of a file in HDFS protocols and APIs

2013-04-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625969#comment-13625969
 ] 

Suresh Srinivas commented on HDFS-4489:
---

bq. 9% increase is fairly substantial for large namespaces.
Please look at the overall increase in memory usage instead of the increase 
over used memory. As I said, that is close to 2.6%.

bq. Are there any performance metrics available?
I do not see much concern here. In fact I removed the flag to turn this feature 
on or off. If you think based on the code this is a concern, I could add the 
flag back. What metrics would you like to see?

 Use InodeID as as an identifier of a file in HDFS protocols and APIs
 

 Key: HDFS-4489
 URL: https://issues.apache.org/jira/browse/HDFS-4489
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Brandon Li
Assignee: Brandon Li

 The benefit of using InodeID to uniquely identify a file can be multiple 
 folds. Here are a few of them:
 1. uniquely identify a file cross rename, related JIRAs include HDFS-4258, 
 HDFS-4437.
 2. modification checks in tools like distcp. Since a file could have been 
 replaced, or another file renamed to its path, the file name and size 
 combination is not reliable, but the combination of file id and size is unique.
 3. id based protocol support (e.g., NFS)
 4. to make the pluggable block placement policy use fileid instead of 
 filename (HDFS-385).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3940) Add Gset#clear method

2013-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625975#comment-13625975
 ] 

Hadoop QA commented on HDFS-3940:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12577632/HDFS-3940.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4202//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4202//console

This message is automatically generated.

 Add Gset#clear method
 -

 Key: HDFS-3940
 URL: https://issues.apache.org/jira/browse/HDFS-3940
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Suresh Srinivas
Priority: Minor
 Attachments: HDFS-3940.patch, HDFS-3940.patch


 Per HDFS-3936 it would be useful if GSet has a clear method so BM#close could 
 clear out the LightWeightGSet.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4661) code style fixes suggested by Nicholas

2013-04-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4661:
---

Attachment: HDFS-4661.004.patch

 code style fixes suggested by Nicholas
 --

 Key: HDFS-4661
 URL: https://issues.apache.org/jira/browse/HDFS-4661
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, hdfs-client, performance
Reporter: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-4661.001.patch, HDFS-4661.002.patch, 
 HDFS-4661.003.patch, HDFS-4661.004.patch


 * The log statement in DataXceiver 
 BlockSender.ClientTraceLog.info(REQUEST_SHORT_CIRCUIT_FDS...) could be 
 cleaned up somewhat.
 * use {{FsDatasetSpi#getBlockInputStream}} and 
 {{FsDatasetSpi#getMetaDataInputStream}} rather than adding 
 {{FsDatasetSpi#getShortCircuitFdsForRead}}
 * {{FileInputStreamCache.Key.equals}}: use short-circuit boolean AND
 * In FileInputStreamCache.CacheCleaner, the code iter = 
 map.entries().iterator() can be removed with the same result since the 
 (previous) first element must be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4339) Persist inode id in fsimage and editlog

2013-04-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13625999#comment-13625999
 ] 

Suresh Srinivas commented on HDFS-4339:
---

I ran the test {{TestOfflineEditsViewer}} to ensure that the test passes with 
the updated editsStored.

 Persist inode id in fsimage and editlog
 ---

 Key: HDFS-4339
 URL: https://issues.apache.org/jira/browse/HDFS-4339
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: editsStored, HDFS-4339.patch, HDFS-4339.patch, 
 HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch, 
 HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch


  Persist inode id in fsimage and editlog and update offline viewers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4339) Persist inode id in fsimage and editlog

2013-04-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4339:
--

   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed the patch to trunk. Thank you Brandon.

 Persist inode id in fsimage and editlog
 ---

 Key: HDFS-4339
 URL: https://issues.apache.org/jira/browse/HDFS-4339
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: editsStored, HDFS-4339.patch, HDFS-4339.patch, 
 HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch, 
 HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch


  Persist inode id in fsimage and editlog and update offline viewers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4339) Persist inode id in fsimage and editlog

2013-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13626006#comment-13626006
 ] 

Hudson commented on HDFS-4339:
--

Integrated in Hadoop-trunk-Commit #3577 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3577/])
HDFS-4339. Persist inode id in fsimage and editlog. Contributed by Brandon 
Li. (Revision 1465835)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1465835
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LayoutVersion.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageLoaderCurrent.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageVisitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSImageTestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml


 Persist inode id in fsimage and editlog
 ---

 Key: HDFS-4339
 URL: https://issues.apache.org/jira/browse/HDFS-4339
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: editsStored, HDFS-4339.patch, HDFS-4339.patch, 
 HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch, 
 HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch


  Persist inode id in fsimage and editlog and update offline viewers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4659) Support setting execution bit for regular files

2013-04-08 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4659:
-

Attachment: HDFS-4659.patch

Updated the patch to move the logic of applying umask(0111) for regular files 
from the Namenode to the DFSClient. 
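
Conceptually the client-side change just masks off the execute bits before the 
create RPC; a minimal sketch (helper name and placement are assumptions):

{code}
import org.apache.hadoop.fs.permission.FsPermission;

class UMaskSketch {
  // Hypothetical helper: apply umask 0111 on the client for regular files,
  // instead of having the Namenode clear the execute bits.
  static FsPermission applyFileUMask(FsPermission requested) {
    return new FsPermission((short) (requested.toShort() & ~0111));
  }
}
{code}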

 Support setting execution bit for regular files
 ---

 Key: HDFS-4659
 URL: https://issues.apache.org/jira/browse/HDFS-4659
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4659.patch, HDFS-4659.patch, HDFS-4659.patch, 
 HDFS-4659.patch


 By default regular files are created with mode rw-r--r--, which is similar 
 to that on many UNIX platforms. However, setting the execution bit for 
 regular files is not supported by HDFS. 
 It's the client's choice to set the file access mode. HDFS would be easier to 
 use if it supported this, especially when HDFS is accessed by network file 
 system protocols. This JIRA is to track the change to support the execution bit. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4674) TestBPOfferService fails on Windows due to failure parsing datanode data directory as URI

2013-04-08 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-4674:
---

 Summary: TestBPOfferService fails on Windows due to failure 
parsing datanode data directory as URI
 Key: HDFS-4674
 URL: https://issues.apache.org/jira/browse/HDFS-4674
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth


{{TestBPOfferService}} does not set {{dfs.datanode.data.dir}}.  When 
{{BPServiceActor}} starts, it attempts to use a thread name containing 
{{dfs.datanode.data.dir}} parsed as a URI.  On Windows, this will not parse 
correctly due to the presence of '\'.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3940) Add Gset#clear method

2013-04-08 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13626031#comment-13626031
 ] 

Sanjay Radia commented on HDFS-3940:


+1

 Add Gset#clear method
 -

 Key: HDFS-3940
 URL: https://issues.apache.org/jira/browse/HDFS-3940
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Suresh Srinivas
Priority: Minor
 Attachments: HDFS-3940.patch, HDFS-3940.patch


 Per HDFS-3936 it would be useful if GSet has a clear method so BM#close could 
 clear out the LightWeightGSet.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4674) TestBPOfferService fails on Windows due to failure parsing datanode data directory as URI

2013-04-08 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4674:


Attachment: HDFS-4674.1.patch

This patch sets {{dfs.datanode.data.dir}} to a sub-directory of 
{{test.build.data}}, like most of the HDFS tests.  I tested this successfully 
on Mac and Windows.
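
The essence of the change is a sketch like the following (the exact code is in 
the attached patch; DFSConfigKeys constant assumed):

{code}
// Point dfs.datanode.data.dir at a sub-directory of test.build.data, using the
// URI form so the value parses on Windows despite '\' path separators.
File dataDir = new File(System.getProperty("test.build.data", "build/test/data"),
    "dfs" + File.separator + "data");
conf.set(DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY, dataDir.toURI().toString());
{code}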

 TestBPOfferService fails on Windows due to failure parsing datanode data 
 directory as URI
 -

 Key: HDFS-4674
 URL: https://issues.apache.org/jira/browse/HDFS-4674
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-4674.1.patch


 {{TestBPOfferService}} does not set {{dfs.datanode.data.dir}}.  When 
 {{BPServiceActor}} starts, it attempts to use a thread name containing 
 {{dfs.datanode.data.dir}} parsed as a URI.  On Windows, this will not parse 
 correctly due to the presence of '\'.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4339) Persist inode id in fsimage and editlog

2013-04-08 Thread Fengdong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13626060#comment-13626060
 ] 

Fengdong Yu commented on HDFS-4339:
---

It's good to know this has been merged into trunk.

 Persist inode id in fsimage and editlog
 ---

 Key: HDFS-4339
 URL: https://issues.apache.org/jira/browse/HDFS-4339
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: editsStored, HDFS-4339.patch, HDFS-4339.patch, 
 HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch, 
 HDFS-4339.patch, HDFS-4339.patch, HDFS-4339.patch


  Persist inode id in fsimage and editlog and update offline viewers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4674) TestBPOfferService fails on Windows due to failure parsing datanode data directory as URI

2013-04-08 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4674:


Status: Patch Available  (was: Open)

 TestBPOfferService fails on Windows due to failure parsing datanode data 
 directory as URI
 -

 Key: HDFS-4674
 URL: https://issues.apache.org/jira/browse/HDFS-4674
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-4674.1.patch


 {{TestBPOfferService}} does not set {{dfs.datanode.data.dir}}.  When 
 {{BPServiceActor}} starts, it attempts to use a thread name containing 
 {{dfs.datanode.data.dir}} parsed as a URI.  On Windows, this will not parse 
 correctly due to the presence of '\'.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4675) Fix rename across snapshottable directories

2013-04-08 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-4675:
---

 Summary: Fix rename across snapshottable directories
 Key: HDFS-4675
 URL: https://issues.apache.org/jira/browse/HDFS-4675
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao


For rename across snapshottable directories, suppose there are two 
snapshottable directories: /user1 and /user2 and we have the following steps:

1. Take snapshot s1 on /user1 at time t1.
2. Take snapshot s2 on /user2 at time t2.
3. Take snapshot s3 on /user1 at time t3.
4. Rename /user2/foo/ (an INodeDirectoryWithSnapshot instance) to /user1/foo/.

After the rename, if we update the subtree of /user1/foo/ again (e.g., delete 
/user1/foo/bar), we need to decide where to record the diff. The problem is 
that the current implementation will identify s3 as the latest snapshot, thus 
recording the snapshot copy of bar to s3. However, the parent of bar, 
/user1/foo, is still in the created list of s3. Thus here we should record the 
snapshot copy of bar to s2.

If we further take snapshot s4 on /user1 and make some further changes under 
/user1/foo, these changes will be recorded in s4. Then if we delete the 
snapshot s4, similarly to the above, we should merge the changes to s2, not s3.

Thus in general, we may need to record the latest snapshots of both the src 
and dst subtrees in the renamed inode and update the current 
INodeDirectory#getExistingINodeInPath accordingly.
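
As a concrete illustration, the scenario corresponds roughly to the following 
sequence (snapshot-branch DistributedFileSystem API assumed):

{code}
DistributedFileSystem dfs = cluster.getFileSystem();
dfs.mkdirs(new Path("/user1"));
dfs.mkdirs(new Path("/user2/foo/bar"));
dfs.allowSnapshot(new Path("/user1"));
dfs.allowSnapshot(new Path("/user2"));
dfs.createSnapshot(new Path("/user1"), "s1");   // t1
dfs.createSnapshot(new Path("/user2"), "s2");   // t2
dfs.createSnapshot(new Path("/user1"), "s3");   // t3
dfs.rename(new Path("/user2/foo"), new Path("/user1/foo"));
// This later delete must be recorded against s2, not s3, because /user1/foo
// is still in the created list of s3:
dfs.delete(new Path("/user1/foo/bar"), true);
{code}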

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4675) Fix rename across snapshottable directories

2013-04-08 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4675:


Attachment: HDFS-4675.000.patch

Initial patch. Also added 4 new unit tests.

 Fix rename across snapshottable directories
 ---

 Key: HDFS-4675
 URL: https://issues.apache.org/jira/browse/HDFS-4675
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-4675.000.patch


 For rename across snapshottable directories, suppose there are two 
 snapshottable directories: /user1 and /user2 and we have the following steps:
 1. Take snapshot s1 on /user1 at time t1.
 2. Take snapshot s2 on /user2 at time t2.
 3. Take snapshot s3 on /user1 at time t3.
 4. Rename /user2/foo/ (an INodeDirectoryWithSnapshot instance) to /user1/foo/.
 After the rename, if we update the subtree of /user1/foo/ again (e.g., delete 
 /user1/foo/bar), we need to decide where to record the diff. The problem is 
 that the current implementation will identify s3 as the latest snapshot, thus 
 recording the snapshot copy of bar to s3. However, the parent of bar, 
 /user1/foo, is still in the created list of s3. Thus here we should record 
 the snapshot copy of bar to s2.
 If we further take snapshot s4 on /user1 and make some further changes under 
 /user1/foo, these changes will be recorded in s4. Then if we delete the 
 snapshot s4, similarly to the above, we should merge the changes to s2, not s3.
 Thus in general, we may need to record the latest snapshots of both the 
 src and dst subtrees in the renamed inode and update the current 
 INodeDirectory#getExistingINodeInPath accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3940) Add Gset#clear method and clear the block map when namenode is shutdown

2013-04-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-3940:
--

Summary: Add Gset#clear method and clear the block map when namenode is 
shutdown  (was: Add Gset#clear method)

 Add Gset#clear method and clear the block map when namenode is shutdown
 ---

 Key: HDFS-3940
 URL: https://issues.apache.org/jira/browse/HDFS-3940
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Suresh Srinivas
Priority: Minor
 Attachments: HDFS-3940.patch, HDFS-3940.patch


 Per HDFS-3936 it would be useful if GSet has a clear method so BM#close could 
 clear out the LightWeightGSet.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4676) TestHDFSFileSystemContract should set MiniDFSCluster variable to null to free up memory

2013-04-08 Thread Suresh Srinivas (JIRA)
Suresh Srinivas created HDFS-4676:
-

 Summary: TestHDFSFileSystemContract should set MiniDFSCluster 
variable to null to free up memory
 Key: HDFS-4676
 URL: https://issues.apache.org/jira/browse/HDFS-4676
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Minor


TestHDFSFileSystemContract should reset the cluster member to null in order to 
let garbage collection quickly reclaim the large chunk of memory associated 
with MiniDFSCluster. This avoids OutOfMemory errors.

See 
https://issues.apache.org/jira/browse/HDFS-4434?focusedCommentId=13624246&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13624246
 and the subsequent Jenkins tests where the OOM was fixed.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4676) TestHDFSFileSystemContract should set MiniDFSCluster variable to null to free up memory

2013-04-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4676:
--

Status: Patch Available  (was: Open)

 TestHDFSFileSystemContract should set MiniDFSCluster variable to null to free 
 up memory
 ---

 Key: HDFS-4676
 URL: https://issues.apache.org/jira/browse/HDFS-4676
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Minor

 TestHDFSFileSystemContract should reset the cluster member to null in order 
 to let garbage collection quickly reclaim the large chunk of memory associated 
 with MiniDFSCluster. This avoids OutOfMemory errors.
 See 
 https://issues.apache.org/jira/browse/HDFS-4434?focusedCommentId=13624246&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13624246
  and the subsequent Jenkins tests where the OOM was fixed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4676) TestHDFSFileSystemContract should set MiniDFSCluster variable to null to free up memory

2013-04-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4676:
--

Status: Open  (was: Patch Available)

 TestHDFSFileSystemContract should set MiniDFSCluster variable to null to free 
 up memory
 ---

 Key: HDFS-4676
 URL: https://issues.apache.org/jira/browse/HDFS-4676
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Minor

 TestHDFSFileSystemContract should reset the cluster member to null in order 
 to let garbage collection quickly reclaim the large chunk of memory associated 
 with MiniDFSCluster. This avoids OutOfMemory errors.
 See 
 https://issues.apache.org/jira/browse/HDFS-4434?focusedCommentId=13624246&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13624246
  and the subsequent Jenkins tests where the OOM was fixed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4670) Style Hadoop HDFS web ui's with Twitter's bootstrap.

2013-04-08 Thread Fengdong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13626081#comment-13626081
 ] 

Fengdong Yu commented on HDFS-4670:
---

I'll just continue from Suresh's comments:

Am I right that this patch just improves some web UI styling? And is it 
currently only for the Namenode's web UI, with the Jobtracker's web UI on the 
schedule?

 Style Hadoop HDFS web ui's with Twitter's bootstrap.
 

 Key: HDFS-4670
 URL: https://issues.apache.org/jira/browse/HDFS-4670
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Minor
 Attachments: HDFS-4670-0.patch


 A user's first experience of Apache Hadoop is often looking at the web ui. 
 This should give the user confidence that the project is usable and 
 relatively current.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4676) TestHDFSFileSystemContract should set MiniDFSCluster variable to null to free up memory

2013-04-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4676:
--

Attachment: HDFS-4676.patch

Here is a simple patch.

 TestHDFSFileSystemContract should set MiniDFSCluster variable to null to free 
 up memory
 ---

 Key: HDFS-4676
 URL: https://issues.apache.org/jira/browse/HDFS-4676
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Minor
 Attachments: HDFS-4676.patch


 TestHDFSFileSystemContract should reset the cluster member to null in order 
 to let garbage collection quickly reclaim the large chunk of memory associated 
 with MiniDFSCluster. This avoids OutOfMemoryError failures.
 See 
 https://issues.apache.org/jira/browse/HDFS-4434?focusedCommentId=13624246page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13624246
  and the subsequent Jenkins tests, where the OOM was fixed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4676) TestHDFSFileSystemContract should set MiniDFSCluster variable to null to free up memory

2013-04-08 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13626085#comment-13626085
 ] 

Sanjay Radia commented on HDFS-4676:


+1

 TestHDFSFileSystemContract should set MiniDFSCluster variable to null to free 
 up memory
 ---

 Key: HDFS-4676
 URL: https://issues.apache.org/jira/browse/HDFS-4676
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Minor
 Attachments: HDFS-4676.patch


 TestHDFSFileSystemContract should reset the cluster member to null in order 
 to let garbage collection quickly reclaim the large chunk of memory associated 
 with MiniDFSCluster. This avoids OutOfMemoryError failures.
 See 
 https://issues.apache.org/jira/browse/HDFS-4434?focusedCommentId=13624246page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13624246
  and the subsequent Jenkins tests, where the OOM was fixed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4670) Style Hadoop HDFS web ui's with Twitter's bootstrap.

2013-04-08 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13626091#comment-13626091
 ] 

Jakob Homan commented on HDFS-4670:
---

Can you post some screenshots of what the new ui looks like?

 Style Hadoop HDFS web ui's with Twitter's bootstrap.
 

 Key: HDFS-4670
 URL: https://issues.apache.org/jira/browse/HDFS-4670
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Minor
 Attachments: HDFS-4670-0.patch


 A user's first experience of Apache Hadoop is often looking at the web ui. 
 This should give the user confidence that the project is usable and 
 relatively current.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3940) Add Gset#clear method and clear the block map when namenode is shutdown

2013-04-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13626092#comment-13626092
 ] 

Hudson commented on HDFS-3940:
--

Integrated in Hadoop-trunk-Commit #3578 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3578/])
HDFS-3940. Add Gset#clear method and clear the block map when namenode is 
shutdown. Contributed by Suresh Srinivas. (Revision 1465851)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1465851
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlocksMap.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/GSet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/GSetByHashMap.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/LightWeightGSet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestGSet.java


 Add Gset#clear method and clear the block map when namenode is shutdown
 ---

 Key: HDFS-3940
 URL: https://issues.apache.org/jira/browse/HDFS-3940
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Suresh Srinivas
Priority: Minor
 Attachments: HDFS-3940.patch, HDFS-3940.patch


 Per HDFS-3936 it would be useful if GSet had a clear method so BM#close could 
 clear out the LightWeightGSet.
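For illustration, a simplified sketch of the shape of this change (the real 
org.apache.hadoop.hdfs.util.GSet interface and GSetByHashMap class have more 
members; this is not the committed code):

{code:java}
import java.util.HashMap;
import java.util.Iterator;

// Simplified GSet carrying the new clear() method.
interface GSet<K, E extends K> extends Iterable<E> {
  E get(K key);
  E put(E element);
  void clear();  // new: drop all entries so the memory can be reclaimed
}

// A HashMap-backed implementation simply delegates.
class GSetByHashMap<K, E extends K> implements GSet<K, E> {
  private final HashMap<K, E> m = new HashMap<K, E>();
  public E get(K key)     { return m.get(key); }
  public E put(E element) { return m.put(element, element); }
  public void clear()     { m.clear(); }
  public Iterator<E> iterator() { return m.values().iterator(); }
}
{code}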

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4659) Support setting execution bit for regular files

2013-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13626112#comment-13626112
 ] 

Hadoop QA commented on HDFS-4659:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12577660/HDFS-4659.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestDFSPermission
  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4203//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4203//console

This message is automatically generated.

 Support setting execution bit for regular files
 ---

 Key: HDFS-4659
 URL: https://issues.apache.org/jira/browse/HDFS-4659
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4659.patch, HDFS-4659.patch, HDFS-4659.patch, 
 HDFS-4659.patch


 By default regular files are created with mode rw-r--r--, which is similar 
 to that on many UNIX platforms. However, setting the execution bit on regular 
 files is not supported by HDFS. 
 Setting the file access mode is the client's choice. HDFS would be easier to 
 use if it supported this, especially when HDFS is accessed via network file 
 system protocols. This JIRA is to track the change to support the execution bit. 
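For illustration, the kind of client call this enables might look like the 
following sketch (class name and path are made up for the example):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class SetExecBitExample {
  public static void main(String[] args) throws Exception {
    // Mark a regular file executable (rwxr-xr-x) once HDFS accepts
    // execute bits on regular files.
    FileSystem fs = FileSystem.get(new Configuration());
    fs.setPermission(new Path("/apps/tool.sh"), new FsPermission((short) 0755));
  }
}
{code}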

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4434) Provide a mapping from INodeId to INode

2013-04-08 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13626123#comment-13626123
 ] 

Sanjay Radia commented on HDFS-4434:


bq. What happens if I start probing inode numbers? Can I find every path in the 
namespace, possibly ...
Daryn, since the normal permission check is run on the full path, the 
resolution will fail at the same place it would have failed otherwise. That is, 
if you don't have x-perm on a dir /secretDir/ then the resolution will fail 
exactly at /secretDir and the exception will not give you any additional 
info *as long as the exception does not return the full path* (e.g. 
/secretDir/a/b/c).

 Provide a mapping from INodeId to INode
 ---

 Key: HDFS-4434
 URL: https://issues.apache.org/jira/browse/HDFS-4434
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Suresh Srinivas
 Attachments: HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, HDFS-4434.patch, 
 HDFS-4434.patch, HDFS-4434.patch


 This JIRA is to provide a way to access the INode via its id. The proposed 
 solution is to have an in-memory mapping from INodeId to INode. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4669) org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager fails using IBM java

2013-04-08 Thread Tian Hong Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tian Hong Wang updated HDFS-4669:
-

Target Version/s: 2.0.5-beta  (was: 2.0.3-alpha)

 org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager fails using IBM 
 java
 

 Key: HDFS-4669
 URL: https://issues.apache.org/jira/browse/HDFS-4669
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.3-alpha
Reporter: Tian Hong Wang
  Labels: patch
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-4669.patch


 TestBlockPoolManager unit test fails with the following error message using 
 IBM Java:
 testFederationRefresh(org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager)
   Time elapsed: 27 sec   FAILURE!
 org.junit.ComparisonFailure: expected:<stop #[1
 refresh #2]>
  but was:<stop #[2
 refresh #1]>
 
 The root cause is:
 (1) If we want to remove the first NS and keep the second NS, it should be 
 conf.set(DFSConfigKeys.DFS_NAMESERVICES, "ns2"), not 
 conf.set(DFSConfigKeys.DFS_NAMESERVICES, "ns1").
 (2) Since HashMap and HashSet store their entries in an unspecified order, IBM 
 Java and Oracle Java iterate over the keys and values in different orders, 
 which makes the relative order of ns1 and ns2 random. The code should use 
 LinkedHashMap and LinkedHashSet to preserve the insertion order.
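A small self-contained demonstration of the ordering difference described 
above (illustrative only, not test code from the patch):

{code:java}
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderDemo {
  public static void main(String[] args) {
    // HashMap iteration order is unspecified and differs across JVMs
    // (e.g. IBM vs Oracle), so asserting "ns1 before ns2" is fragile.
    Map<String, String> unordered = new HashMap<String, String>();
    unordered.put("ns1", "stop #1");
    unordered.put("ns2", "refresh #2");
    System.out.println(unordered.keySet());  // order not guaranteed

    // LinkedHashMap preserves insertion order on every JVM.
    Map<String, String> ordered = new LinkedHashMap<String, String>();
    ordered.put("ns1", "stop #1");
    ordered.put("ns2", "refresh #2");
    System.out.println(ordered.keySet());  // always [ns1, ns2]
  }
}
{code}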

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3934) duplicative dfs_hosts entries handled wrong

2013-04-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3934:
---

Attachment: HDFS-3934.001.patch

This patch resolves the entries in the hosts files to their first IP addresses 
before de-duplicating everything.  When creating a new {{DatanodeInfo}} for 
them, it uses {{getCanonicalAddress}}.

This fixes the invisible-node problem where, due to a missing hostname, the 
NameNode web UI would show an entry like {{:50010}} in its lists (note the 
missing hostname).

It also fixes the problem where a hosts file contains two hostnames that refer 
to the same IP address, or a hostname and an IP that both turn out to map to 
the same host.
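The idea, in rough sketch form (the method name is made up; this is not the 
patch code):

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.LinkedHashSet;
import java.util.Set;

public class HostsDedupSketch {
  // Resolve every hosts-file entry to its first IP address, then de-duplicate
  // on that address, so an IP and a hostname naming the same machine collapse
  // to a single entry.
  static Set<String> dedupe(Iterable<String> entries) {
    Set<String> resolved = new LinkedHashSet<String>();
    for (String entry : entries) {
      try {
        resolved.add(InetAddress.getByName(entry).getHostAddress());
      } catch (UnknownHostException e) {
        resolved.add(entry);  // keep unresolvable entries as written
      }
    }
    return resolved;
  }
}
{code}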

 duplicative dfs_hosts entries handled wrong
 ---

 Key: HDFS-3934
 URL: https://issues.apache.org/jira/browse/HDFS-3934
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Andy Isaacson
Assignee: Andy Isaacson
Priority: Minor
 Attachments: HDFS-3934.001.patch


 A dead DN listed in dfs_hosts_allow.txt by IP and in dfs_hosts_exclude.txt by 
 hostname ends up being displayed twice in {{dfsnodelist.jsp?whatNodes=DEAD}} 
 after the NN restarts because {{getDatanodeListForReport}} does not handle 
 such a pseudo-duplicate correctly:
 # the "Remove any nodes we know about from the map" loop no longer has the 
 knowledge to remove the spurious entries
 # the "The remaining nodes are ones that are referenced by the hosts files" 
 loop does not do hostname lookups, so does not know that the IP and hostname 
 refer to the same host.
 Relatedly, such an IP-based dfs_hosts entry results in a cosmetic problem in 
 the JSP output:  The *Node* column shows :50010 as the nodename, with HTML 
 markup {{<a 
 href="http://:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=%2F&nnaddr=172.29.97.196:8020"
  title="172.29.97.216:50010">:50010</a>}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4670) Style Hadoop HDFS web ui's with Twitter's bootstrap.

2013-04-08 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13626153#comment-13626153
 ] 

Elliott Clark commented on HDFS-4670:
-

This patch styles all of the HDFS web ui's (datanode, namenode, and qjm).

bq. Can you post some screenshots of what the new ui looks like?
Sure, I'll post some screenshots with the next version of the patch, once it 
passes the failed tests.

 Style Hadoop HDFS web ui's with Twitter's bootstrap.
 

 Key: HDFS-4670
 URL: https://issues.apache.org/jira/browse/HDFS-4670
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Minor
 Attachments: HDFS-4670-0.patch


 A user's first experience of Apache Hadoop is often looking at the web ui. 
 This should give the user confidence that the project is usable and 
 relatively current.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4300) TransferFsImage.downloadEditsToStorage should use a tmp file for destination

2013-04-08 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4300:
--

Attachment: hdfs-4300-1.patch

Patch attached that implements Todd's suggestion. I looked through the logs of 
the included test case and saw that it correctly re-downloaded the truncated 
tmp file.
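For reference, a rough sketch of the download-to-tmp-then-rename pattern the 
fix describes (names are illustrative, not the actual TransferFsImage code):

{code:java}
import java.io.File;
import java.io.IOException;

public class TmpDownloadSketch {
  static void downloadEdits(File finalFile) throws IOException {
    File tmp = new File(finalFile.getParent(), finalFile.getName() + ".tmp");
    try {
      fetchOverHttp(tmp);              // may die mid-transfer, leaving only tmp
      if (!tmp.renameTo(finalFile)) {  // only a complete file gets the real name
        throw new IOException("rename failed: " + tmp);
      }
    } finally {
      tmp.delete();  // no-op if the rename already moved the file
    }
  }

  static void fetchOverHttp(File dst) throws IOException {
    // placeholder for the actual HTTP transfer
  }
}
{code}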

 TransferFsImage.downloadEditsToStorage should use a tmp file for destination
 

 Key: HDFS-4300
 URL: https://issues.apache.org/jira/browse/HDFS-4300
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Andrew Wang
Priority: Critical
 Attachments: hdfs-4300-1.patch


 Currently, in TransferFsImage.downloadEditsToStorage, we download the edits 
 file directly to its finalized path. So, if the transfer fails in the middle, 
 a half-written file is left and cannot be distinguished from a correct file. 
 So, future checkpoints by the 2NN will fail, since the file is truncated in 
 the middle -- but it won't ever download a good copy because it thinks it 
 already has the proper file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4300) TransferFsImage.downloadEditsToStorage should use a tmp file for destination

2013-04-08 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4300:
--

Status: Patch Available  (was: Open)

 TransferFsImage.downloadEditsToStorage should use a tmp file for destination
 

 Key: HDFS-4300
 URL: https://issues.apache.org/jira/browse/HDFS-4300
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Andrew Wang
Priority: Critical
 Attachments: hdfs-4300-1.patch


 Currently, in TransferFsImage.downloadEditsToStorage, we download the edits 
 file directly to its finalized path. So, if the transfer fails in the middle, 
 a half-written file is left and cannot be distinguished from a correct file. 
 So, future checkpoints by the 2NN will fail, since the file is truncated in 
 the middle -- but it won't ever download a good copy because it thinks it 
 already has the proper file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4674) TestBPOfferService fails on Windows due to failure parsing datanode data directory as URI

2013-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13626165#comment-13626165
 ] 

Hadoop QA commented on HDFS-4674:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12577668/HDFS-4674.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4204//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4204//console

This message is automatically generated.

 TestBPOfferService fails on Windows due to failure parsing datanode data 
 directory as URI
 -

 Key: HDFS-4674
 URL: https://issues.apache.org/jira/browse/HDFS-4674
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-4674.1.patch


 {{TestBPOfferService}} does not set {{dfs.datanode.data.dir}}.  When 
 {{BPServiceActor}} starts, it attempts to use a thread name containing 
 {{dfs.datanode.data.dir}} parsed as a URI.  On Windows, this will not parse 
 correctly due to the presence of '\'.
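A tiny demonstration of why a raw Windows path fails URI parsing (illustrative 
only):

{code:java}
import java.net.URI;
import java.net.URISyntaxException;

public class WindowsPathDemo {
  public static void main(String[] args) {
    // '\' is an illegal URI character, so parsing a raw Windows path the way
    // the thread-name code does throws on Windows-style directories.
    try {
      new URI("C:\\hdfs\\data");
    } catch (URISyntaxException e) {
      System.out.println("parse failed: " + e.getMessage());
    }
  }
}
{code}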

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4670) Style Hadoop HDFS web ui's with Twitter's bootstrap.

2013-04-08 Thread Fengdong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengdong Yu updated HDFS-4670:
--

Affects Version/s: 2.0.3-alpha

 Style Hadoop HDFS web ui's with Twitter's bootstrap.
 

 Key: HDFS-4670
 URL: https://issues.apache.org/jira/browse/HDFS-4670
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Minor
 Attachments: HDFS-4670-0.patch


 A user's first experience of Apache Hadoop is often looking at the web ui. 
 This should give the user confidence that the project is usable and 
 relatively current.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4670) Style Hadoop HDFS web ui's with Twitter's bootstrap.

2013-04-08 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HDFS-4670:


Attachment: Hadoop NameNode.png
hdfs_browser.png
Hadoop JournalNode.png

Screenshots

 Style Hadoop HDFS web ui's with Twitter's bootstrap.
 

 Key: HDFS-4670
 URL: https://issues.apache.org/jira/browse/HDFS-4670
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Minor
 Attachments: Hadoop JournalNode.png, Hadoop NameNode.png, 
 HDFS-4670-0.patch, HDFS-4670-1.patch, hdfs_browser.png


 A user's first experience of Apache Hadoop is often looking at the web ui. 
 This should give the user confidence that the project is usable and 
 relatively current.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4670) Style Hadoop HDFS web ui's with Twitter's bootstrap.

2013-04-08 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HDFS-4670:


Attachment: HDFS-4670-1.patch

Updated patch to pass tests and findbugs.

 Style Hadoop HDFS web ui's with Twitter's bootstrap.
 

 Key: HDFS-4670
 URL: https://issues.apache.org/jira/browse/HDFS-4670
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Minor
 Attachments: Hadoop JournalNode.png, Hadoop NameNode.png, 
 HDFS-4670-0.patch, HDFS-4670-1.patch, hdfs_browser.png


 A user's first experience of Apache Hadoop is often looking at the web ui. 
 This should give the user confidence that the project is usable and 
 relatively current.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3934) duplicative dfs_hosts entries handled wrong

2013-04-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3934:
---

Attachment: HDFS-3934.002.patch

* fix the test that relies on an unresolvable hostname

* fix behavior for hosts files that have lines of the form "entry:port" 
rather than just "entry" (sketched below)
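A rough sketch of the host/port split (illustrative, not the patch code):

{code:java}
public class HostsLineParser {
  // Split a hosts-file line of the form "entry:port" into host and port,
  // falling back to just "entry" when no port is present.
  static String[] splitHostPort(String line) {
    int colon = line.lastIndexOf(':');
    return colon < 0
        ? new String[] { line, null }
        : new String[] { line.substring(0, colon), line.substring(colon + 1) };
  }

  public static void main(String[] args) {
    String[] hp = splitHostPort("datanode1.example.com:50010");
    System.out.println(hp[0] + " / " + hp[1]);  // datanode1.example.com / 50010
  }
}
{code}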

 duplicative dfs_hosts entries handled wrong
 ---

 Key: HDFS-3934
 URL: https://issues.apache.org/jira/browse/HDFS-3934
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Andy Isaacson
Assignee: Andy Isaacson
Priority: Minor
 Attachments: HDFS-3934.001.patch, HDFS-3934.002.patch


 A dead DN listed in dfs_hosts_allow.txt by IP and in dfs_hosts_exclude.txt by 
 hostname ends up being displayed twice in {{dfsnodelist.jsp?whatNodes=DEAD}} 
 after the NN restarts because {{getDatanodeListForReport}} does not handle 
 such a pseudo-duplicate correctly:
 # the "Remove any nodes we know about from the map" loop no longer has the 
 knowledge to remove the spurious entries
 # the "The remaining nodes are ones that are referenced by the hosts files" 
 loop does not do hostname lookups, so does not know that the IP and hostname 
 refer to the same host.
 Relatedly, such an IP-based dfs_hosts entry results in a cosmetic problem in 
 the JSP output:  The *Node* column shows :50010 as the nodename, with HTML 
 markup {{<a 
 href="http://:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=%2F&nnaddr=172.29.97.196:8020"
  title="172.29.97.216:50010">:50010</a>}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3934) duplicative dfs_hosts entries handled wrong

2013-04-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3934:
---

Status: Patch Available  (was: Open)

 duplicative dfs_hosts entries handled wrong
 ---

 Key: HDFS-3934
 URL: https://issues.apache.org/jira/browse/HDFS-3934
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Andy Isaacson
Assignee: Andy Isaacson
Priority: Minor
 Attachments: HDFS-3934.001.patch, HDFS-3934.002.patch


 A dead DN listed in dfs_hosts_allow.txt by IP and in dfs_hosts_exclude.txt by 
 hostname ends up being displayed twice in {{dfsnodelist.jsp?whatNodes=DEAD}} 
 after the NN restarts because {{getDatanodeListForReport}} does not handle 
 such a pseudo-duplicate correctly:
 # the "Remove any nodes we know about from the map" loop no longer has the 
 knowledge to remove the spurious entries
 # the "The remaining nodes are ones that are referenced by the hosts files" 
 loop does not do hostname lookups, so does not know that the IP and hostname 
 refer to the same host.
 Relatedly, such an IP-based dfs_hosts entry results in a cosmetic problem in 
 the JSP output:  The *Node* column shows :50010 as the nodename, with HTML 
 markup {{<a 
 href="http://:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=%2F&nnaddr=172.29.97.196:8020"
  title="172.29.97.216:50010">:50010</a>}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4670) Style Hadoop HDFS web ui's with Twitter's bootstrap.

2013-04-08 Thread Fengdong Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13626181#comment-13626181
 ] 

Fengdong Yu commented on HDFS-4670:
---

Thanks for uploading the screenshots, but it seems that HA is not enabled in 
your test cluster. Can you enable HA with QJM or NFS HA and then upload new 
screenshots?

 Style Hadoop HDFS web ui's with Twitter's bootstrap.
 

 Key: HDFS-4670
 URL: https://issues.apache.org/jira/browse/HDFS-4670
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Minor
 Attachments: Hadoop JournalNode.png, Hadoop NameNode.png, 
 HDFS-4670-0.patch, HDFS-4670-1.patch, hdfs_browser.png


 A user's first experience of Apache Hadoop is often looking at the web ui. 
 This should give the user confidence that the project is usable and 
 relatively current.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4672) Support tiered storage policies

2013-04-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13626183#comment-13626183
 ] 

Colin Patrick McCabe commented on HDFS-4672:


Thanks for thinking about this, Andrew.  This will be a nice feature to have in 
the future.

It would be particularly interesting if people could use both flash and hard 
disks in the same cluster.  Perhaps the flash could be used for HBase-backed 
storage, and the hard disks for everything else, for example.

The xattr idea sounds like the right way to go for when you know what tier you 
want to put something in.  I feel like we might also want to enable automatic 
migration between tiers, at least for some files.  I suppose this could also be 
done outside HDFS, with a daemon that looks at file access times (atimes) and 
attaches the correct xattrs.  However, traditional hierarchical storage 
management (HSM) systems integrate this into the filesystem itself, so we may 
want to consider this.

This would also allow us to consider other features like compressing 
infrequently-used data.

 Support tiered storage policies
 ---

 Key: HDFS-4672
 URL: https://issues.apache.org/jira/browse/HDFS-4672
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, libhdfs, namenode
Reporter: Andrew Purtell

 We would like to be able to create certain files on certain storage device 
 classes (e.g. spinning media, solid state devices, RAM disk, non-volatile 
 memory). HDFS-2832 enables heterogeneous storage at the DataNode, so the 
 NameNode can gain awareness of what different storage options are available 
 in the pool and where they are located, but no API is provided for clients or 
 block placement plugins to perform device aware block placement. We would 
 like to propose a set of extensions that also have broad applicability to use 
 cases where storage device affinity is important:
  
 - Add an enum of generic storage device classes, borrowing from current 
 taxonomy of the storage industry
  
 - Augment DataNode volume metadata in storage reports with this enum
  
 - Extend the namespace so pluggable block policies can be specified on a 
 directory and storage device class can be tracked in the Inode. Perhaps this 
 could be a larger discussion on adding support for extended attributes in the 
 HDFS namespace. The Inode should track both the storage device class hint and 
 the current actual storage device class. FileStatus should expose this 
 information (or xattrs in general) to clients.
  
 - Extend the pluggable block policy framework so policies can also consider, 
 and specify, affinity for a particular storage device class
  
 - Extend the file creation API to accept a storage device class affinity 
 hint. Such a hint can be supplied directly as a parameter, or, if we are 
 considering extended attribute support, then instead as one of a set of 
 xattrs. The hint would be stored in the namespace and also used by the client 
 to indicate to the NameNode/block placement policy/DataNode constraints on 
 block placement. Furthermore, if xattrs or device storage class affinity 
 hints are associated with directories, then the NameNode should provide the 
 storage device affinity hint to the client in the create API response, so the 
 client can provide the appropriate hint to DataNodes when writing new blocks.
  
 - The list of candidate DataNodes for new blocks supplied by the NameNode to 
 clients should be weighted/sorted by availability of the desired storage 
 device class. 
  
 - Block replication should consider storage device affinity hints. If a 
 client move()s a file from a location under a path with affinity hint X to 
 under a path with affinity hint Y, then all blocks currently residing on 
 media X should be eventually replicated onto media Y with the then excess 
 replicas on media X deleted.
  
 - Introduce the concept of degraded path: a path can be degraded if a block 
 placement policy is forced to abandon a constraint in order to persist the 
 block, when there may not be available space on the desired device class, or 
 to maintain the minimum necessary replication factor. This concept is 
 distinct from the corrupt path, where one or more blocks are missing. Paths 
 in degraded state should be periodically reevaluated for re-replication.
  
 - The FSShell should be extended with commands for changing the storage 
 device class hint for a directory or file. 
  
 - Clients like DistCP which compare metadata should be extended to be aware 
 of the storage device class hint. For DistCP specifically, there should be an 
 option to ignore the storage device class hints, enabled by default.
  
 Suggested semantics:
  
 - The default storage device class should be the null class, or simply 

[jira] [Commented] (HDFS-4300) TransferFsImage.downloadEditsToStorage should use a tmp file for destination

2013-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13626206#comment-13626206
 ] 

Hadoop QA commented on HDFS-4300:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12577683/hdfs-4300-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4205//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4205//console

This message is automatically generated.

 TransferFsImage.downloadEditsToStorage should use a tmp file for destination
 

 Key: HDFS-4300
 URL: https://issues.apache.org/jira/browse/HDFS-4300
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Andrew Wang
Priority: Critical
 Attachments: hdfs-4300-1.patch


 Currently, in TransferFsImage.downloadEditsToStorage, we download the edits 
 file directly to its finalized path. So, if the transfer fails in the middle, 
 a half-written file is left and cannot be distinguished from a correct file. 
 So, future checkpoints by the 2NN will fail, since the file is truncated in 
 the middle -- but it won't ever download a good copy because it thinks it 
 already has the proper file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4659) Support setting execution bit for regular files

2013-04-08 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13626224#comment-13626224
 ] 

Brandon Li commented on HDFS-4659:
--

The unit tests passed in my local run.

 Support setting execution bit for regular files
 ---

 Key: HDFS-4659
 URL: https://issues.apache.org/jira/browse/HDFS-4659
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4659.patch, HDFS-4659.patch, HDFS-4659.patch, 
 HDFS-4659.patch


 By default regular files are created with mode rw-r--r--, which is similar 
 to that on many UNIX platforms. However, setting the execution bit on regular 
 files is not supported by HDFS. 
 Setting the file access mode is the client's choice. HDFS would be easier to 
 use if it supported this, especially when HDFS is accessed via network file 
 system protocols. This JIRA is to track the change to support the execution bit. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4659) Support setting execution bit for regular files

2013-04-08 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4659:
-

Status: Open  (was: Patch Available)

Cancelling and then resubmitting the patch to trigger the testing again.

 Support setting execution bit for regular files
 ---

 Key: HDFS-4659
 URL: https://issues.apache.org/jira/browse/HDFS-4659
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4659.patch, HDFS-4659.patch, HDFS-4659.patch, 
 HDFS-4659.patch


 By default regular files are created with mode rw-r--r--, which is similar 
 to that on many UNIX platforms. However, setting the execution bit on regular 
 files is not supported by HDFS. 
 Setting the file access mode is the client's choice. HDFS would be easier to 
 use if it supported this, especially when HDFS is accessed via network file 
 system protocols. This JIRA is to track the change to support the execution bit. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4659) Support setting execution bit for regular files

2013-04-08 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4659:
-

Status: Patch Available  (was: Open)

 Support setting execution bit for regular files
 ---

 Key: HDFS-4659
 URL: https://issues.apache.org/jira/browse/HDFS-4659
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4659.patch, HDFS-4659.patch, HDFS-4659.patch, 
 HDFS-4659.patch


 By default regular files are created with mode rw-r--r--, which is similar 
 to that on many UNIX platforms. However, setting the execution bit on regular 
 files is not supported by HDFS. 
 Setting the file access mode is the client's choice. HDFS would be easier to 
 use if it supported this, especially when HDFS is accessed via network file 
 system protocols. This JIRA is to track the change to support the execution bit. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3934) duplicative dfs_hosts entries handled wrong

2013-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13626234#comment-13626234
 ] 

Hadoop QA commented on HDFS-3934:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12577696/HDFS-3934.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4207//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4207//console

This message is automatically generated.

 duplicative dfs_hosts entries handled wrong
 ---

 Key: HDFS-3934
 URL: https://issues.apache.org/jira/browse/HDFS-3934
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Andy Isaacson
Assignee: Andy Isaacson
Priority: Minor
 Attachments: HDFS-3934.001.patch, HDFS-3934.002.patch


 A dead DN listed in dfs_hosts_allow.txt by IP and in dfs_hosts_exclude.txt by 
 hostname ends up being displayed twice in {{dfsnodelist.jsp?whatNodes=DEAD}} 
 after the NN restarts because {{getDatanodeListForReport}} does not handle 
 such a pseudo-duplicate correctly:
 # the "Remove any nodes we know about from the map" loop no longer has the 
 knowledge to remove the spurious entries
 # the "The remaining nodes are ones that are referenced by the hosts files" 
 loop does not do hostname lookups, so does not know that the IP and hostname 
 refer to the same host.
 Relatedly, such an IP-based dfs_hosts entry results in a cosmetic problem in 
 the JSP output:  The *Node* column shows :50010 as the nodename, with HTML 
 markup {{<a 
 href="http://:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=%2F&nnaddr=172.29.97.196:8020"
  title="172.29.97.216:50010">:50010</a>}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4670) Style Hadoop HDFS web ui's with Twitter's bootstrap.

2013-04-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13626236#comment-13626236
 ] 

Hadoop QA commented on HDFS-4670:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12577688/HDFS-4670-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 2 
release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4206//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4206//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4206//console

This message is automatically generated.

 Style Hadoop HDFS web ui's with Twitter's bootstrap.
 

 Key: HDFS-4670
 URL: https://issues.apache.org/jira/browse/HDFS-4670
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Minor
 Attachments: Hadoop JournalNode.png, Hadoop NameNode.png, 
 HDFS-4670-0.patch, HDFS-4670-1.patch, hdfs_browser.png


 A user's first experience of Apache Hadoop is often looking at the web ui. 
 This should give the user confidence that the project is usable and 
 relatively current.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira