[jira] [Commented] (HDFS-3831) Failure to renew tokens due to test-sources left in classpath

2012-09-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465577#comment-13465577
 ] 

Hudson commented on HDFS-3831:
--

Integrated in Hadoop-Hdfs-0.23-Build #388 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/388/])
svn merge -c 1391121 FIXES: HDFS-3831. Failure to renew tokens due to 
test-sources left in classpath (jlowe via bobby) (Revision 1391127)

 Result = UNSTABLE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1391127
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/FakeRenewer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestDelegationTokenFetcher.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer


 Failure to renew tokens due to test-sources left in classpath
 -

 Key: HDFS-3831
 URL: https://issues.apache.org/jira/browse/HDFS-3831
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.3, 2.0.0-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Fix For: 0.23.4, 3.0.0, 2.0.3-alpha

 Attachments: HDFS-3831.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3731) 2.0 release upgrade must handle blocks being written from 1.0

2012-09-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465573#comment-13465573
 ] 

Hudson commented on HDFS-3731:
--

Integrated in Hadoop-Hdfs-0.23-Build #388 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/388/])
HDFS-3731. 2.0 release upgrade must handle blocks being written from 1.0 
(Kihwal Lee via daryn) (Revision 1391155)

 Result = UNSTABLE
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1391155
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgradeFromImage.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestDistributedUpgrade.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/hadoop1-bbw.tgz


 2.0 release upgrade must handle blocks being written from 1.0
 -

 Key: HDFS-3731
 URL: https://issues.apache.org/jira/browse/HDFS-3731
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Kihwal Lee
Priority: Blocker
 Fix For: 0.23.4, 2.0.2-alpha

 Attachments: hadoop1-bbw.tgz, HDFS-3731.002.patch, 
 HDFS-3731.003.patch, hdfs-3731.branch-023.patch.txt


 Release 2.0 upgrades must handle blocks-being-written (bbw) files from the 
 1.0 release. Problem reported by Brahma Reddy.
 The {{DataNode}} will only have one block pool after upgrading from a 1.x 
 release.  (This is because in the 1.x releases, there were no block pools-- 
 or equivalently, everything was in the same block pool).  During the upgrade, 
 we should hardlink the block files from the {{blocksBeingWritten}} directory 
 into the {{rbw}} directory of this block pool.  Similarly, on {{-finalize}}, 
 we should delete the {{blocksBeingWritten}} directory.
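The hardlink-on-upgrade step described above can be sketched as follows; this is a minimal standalone illustration under stated assumptions (the class and directory handling are hypothetical, not the actual DataStorage code):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

class BbwUpgrade {
    // Hardlink every block file from the 1.x "blocksBeingWritten" directory
    // into the single block pool's "rbw" directory. Returns the link count.
    static int linkBlocksBeingWritten(Path bbwDir, Path rbwDir) throws IOException {
        if (!Files.isDirectory(bbwDir)) {
            return 0; // nothing to upgrade
        }
        Files.createDirectories(rbwDir);
        int linked = 0;
        try (DirectoryStream<Path> blocks = Files.newDirectoryStream(bbwDir)) {
            for (Path blockFile : blocks) {
                // Hardlink rather than copy: same inode, no extra disk usage,
                // and the old directory can simply be deleted on -finalize.
                Files.createLink(rbwDir.resolve(blockFile.getFileName()), blockFile);
                linked++;
            }
        }
        return linked;
    }
}
```

On -finalize, the blocksBeingWritten directory would then be deleted; the hardlinked copies in rbw keep the data alive.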



[jira] [Commented] (HDFS-3922) 0.22 and 0.23 namenode throws away blocks under construction on restart

2012-09-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465578#comment-13465578
 ] 

Hudson commented on HDFS-3922:
--

Integrated in Hadoop-Hdfs-0.23-Build #388 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/388/])
HDFS-3922. namenode throws away blocks under construction on restart 
(Kihwal Lee via daryn) (Revision 1391150)

 Result = UNSTABLE
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1391150
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java


 0.22 and 0.23 namenode throws away blocks under construction on restart
 ---

 Key: HDFS-3922
 URL: https://issues.apache.org/jira/browse/HDFS-3922
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.22.1, 0.23.3
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Fix For: 0.23.4

 Attachments: hdfs-3922.branch-023.patch.txt


 When reading edits on startup, the namenode may throw away blocks under 
 construction. This is because the file inode is turned into an 
 under-construction one, but nothing is done to the last block. 
 With append/hsync, this is not acceptable because it may drop sync'ed partial 
 blocks.  In branch 2 and trunk, HDFS-1623 (HA) fixed this issue.
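A toy model of the fix's idea (names are illustrative, not FSEditLogLoader's actual types): when the file is converted to under-construction during edit replay, the last block must be kept and flagged under construction rather than dropped:

```java
import java.util.ArrayList;
import java.util.List;

class EditReplay {
    static class Block {
        final long id;
        boolean underConstruction;
        Block(long id) { this.id = id; }
    }

    // Convert a file's block list for append/recovery during edit replay.
    static List<Block> convertToUnderConstruction(List<Block> blocks) {
        List<Block> result = new ArrayList<>(blocks); // keep every block
        if (!result.isEmpty()) {
            // The step the bug was missing: the last block survives and is
            // marked under construction instead of being thrown away, so
            // sync'ed partial blocks are preserved across a restart.
            result.get(result.size() - 1).underConstruction = true;
        }
        return result;
    }
}
```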



[jira] [Commented] (HDFS-3831) Failure to renew tokens due to test-sources left in classpath

2012-09-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465585#comment-13465585
 ] 

Hudson commented on HDFS-3831:
--

Integrated in Hadoop-Hdfs-trunk #1179 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1179/])
HDFS-3831. Failure to renew tokens due to test-sources left in classpath 
(jlowe via bobby) (Revision 1391121)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1391121
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/FakeRenewer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestDelegationTokenFetcher.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer


 Failure to renew tokens due to test-sources left in classpath
 -

 Key: HDFS-3831
 URL: https://issues.apache.org/jira/browse/HDFS-3831
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.3, 2.0.0-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Fix For: 0.23.4, 3.0.0, 2.0.3-alpha

 Attachments: HDFS-3831.patch






[jira] [Created] (HDFS-3990) NN's health report has severe performance problems

2012-09-28 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-3990:
-

 Summary: NN's health report has severe performance problems
 Key: HDFS-3990
 URL: https://issues.apache.org/jira/browse/HDFS-3990
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 2.0.0-alpha, 0.23.0, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical


The dfshealth page places a read lock on the namespace while it does a DNS 
lookup for every DN.  On a multi-thousand-node cluster, this often results in 
10s+ load times for the health page.  10 concurrent requests were found to cause 
7m+ load times, during which write operations blocked.
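One mitigation shape, as a hypothetical sketch (not the NN's actual API): memoize the per-address reverse lookups so each DNS resolution happens at most once and can be done without holding the namespace lock:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

class HostNameCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> resolver; // e.g. an InetAddress-based reverse lookup

    HostNameCache(Function<String, String> resolver) {
        this.resolver = resolver;
    }

    String hostNameFor(String ipAddr) {
        // computeIfAbsent: only the first request per address pays for the
        // DNS round trip; later health-page loads hit the in-memory cache.
        return cache.computeIfAbsent(ipAddr, resolver);
    }
}
```

An OS-level nscd host cache achieves a similar effect; this moves the cache into the process.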



[jira] [Commented] (HDFS-3990) NN's health report has severe performance problems

2012-09-28 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465597#comment-13465597
 ] 

Daryn Sharp commented on HDFS-3990:
---

Enabling an nscd host cache helped mitigate the issue by reducing load times to 
a few seconds.  However, the namespace read lock is highly undesirable, and the 
repeated DNS lookups are questionable.

 NN's health report has severe performance problems
 --

 Key: HDFS-3990
 URL: https://issues.apache.org/jira/browse/HDFS-3990
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical

 The dfshealth page places a read lock on the namespace while it does a 
 DNS lookup for every DN.  On a multi-thousand-node cluster, this often 
 results in 10s+ load times for the health page.  10 concurrent requests were 
 found to cause 7m+ load times, during which write operations blocked.



[jira] [Commented] (HDFS-3990) NN's health report has severe performance problems

2012-09-28 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465598#comment-13465598
 ] 

Daryn Sharp commented on HDFS-3990:
---

Arun, please update the target version if you want to defer the fix to a later 
2.x release.

 NN's health report has severe performance problems
 --

 Key: HDFS-3990
 URL: https://issues.apache.org/jira/browse/HDFS-3990
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical

 The dfshealth page places a read lock on the namespace while it does a 
 DNS lookup for every DN.  On a multi-thousand-node cluster, this often 
 results in 10s+ load times for the health page.  10 concurrent requests were 
 found to cause 7m+ load times, during which write operations blocked.



[jira] [Commented] (HDFS-1276) Add failed volume info to dfsadmin report

2012-09-28 Thread Bertrand Dechoux (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465603#comment-13465603
 ] 

Bertrand Dechoux commented on HDFS-1276:


One question about the need:
* Wouldn't it be possible to have the same as the HDFS web interface, i.e. the 
list of namenode storage and their status? If I understand correctly, this 
patch will display (if any) the failed volumes of the datanode (and maybe the 
namenode).

Second, about the patch:
* use  instead of new String[0]
* The comment or the code about parsing the error message is wrong
=> the error message is: DataNode failed 
volumes:failed_volume;failed_volume...
=> but the code splits twice on ':'; shouldn't it split first on ':' and then 
on ';'?
* Isn't that a very fragile way of reporting errors? People might use (and so 
will use) the report output to do monitoring.
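The parsing Bertrand describes (split once on the first ':' to drop the prefix, then on ';' for the individual volumes) would look roughly like this; the class and method names are hypothetical, not the patch's actual code:

```java
class FailedVolumeParser {
    // Parse a message of the form "DataNode failed volumes:v1;v2;...".
    static String[] parseFailedVolumes(String msg) {
        int colon = msg.indexOf(':');
        if (colon < 0 || colon == msg.length() - 1) {
            return new String[0]; // no prefix or no volume list
        }
        // Split only on ';' after the prefix; a second split on ':' would
        // break any volume path that itself contains a colon.
        return msg.substring(colon + 1).split(";");
    }
}
```

As the comment notes, parsing a human-readable message is fragile; a structured field in the report would be more robust for monitoring.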

 Add failed volume info to dfsadmin report
 -

 Key: HDFS-1276
 URL: https://issues.apache.org/jira/browse/HDFS-1276
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.21.0
Reporter: Jeff Zhang
Assignee: Jeff Zhang
Priority: Minor
 Attachments: HDFS_1276.patch


 Currently, users do not know which volumes have failed unless they look into 
 the logs, which is not convenient. I plan to put the failed 
 volumes in the report of HDFS. Then hdfs administrators can use the command 
 bin/hadoop dfsadmin -report to find which volumes have failed.



[jira] [Commented] (HDFS-1276) Add failed volume info to dfsadmin report

2012-09-28 Thread Bertrand Dechoux (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465609#comment-13465609
 ] 

Bertrand Dechoux commented on HDFS-1276:


Never mind (use  instead of new String[0]); I am obviously tired.

 Add failed volume info to dfsadmin report
 -

 Key: HDFS-1276
 URL: https://issues.apache.org/jira/browse/HDFS-1276
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.21.0
Reporter: Jeff Zhang
Assignee: Jeff Zhang
Priority: Minor
 Attachments: HDFS_1276.patch


 Currently, users do not know which volumes have failed unless they look into 
 the logs, which is not convenient. I plan to put the failed 
 volumes in the report of HDFS. Then hdfs administrators can use the command 
 bin/hadoop dfsadmin -report to find which volumes have failed.



[jira] [Commented] (HDFS-3831) Failure to renew tokens due to test-sources left in classpath

2012-09-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465616#comment-13465616
 ] 

Hudson commented on HDFS-3831:
--

Integrated in Hadoop-Mapreduce-trunk #1210 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1210/])
HDFS-3831. Failure to renew tokens due to test-sources left in classpath 
(jlowe via bobby) (Revision 1391121)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1391121
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/FakeRenewer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestDelegationTokenFetcher.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer


 Failure to renew tokens due to test-sources left in classpath
 -

 Key: HDFS-3831
 URL: https://issues.apache.org/jira/browse/HDFS-3831
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.3, 2.0.0-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Fix For: 0.23.4, 3.0.0, 2.0.3-alpha

 Attachments: HDFS-3831.patch






[jira] [Commented] (HDFS-3373) FileContext HDFS implementation can leak socket caches

2012-09-28 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465670#comment-13465670
 ] 

Robert Joseph Evans commented on HDFS-3373:
---

The 0.23 patch looks like a fairly straightforward port of the trunk version, 
but what happened to 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSocketCache.java?

 FileContext HDFS implementation can leak socket caches
 --

 Key: HDFS-3373
 URL: https://issues.apache.org/jira/browse/HDFS-3373
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Todd Lipcon
Assignee: John George
 Fix For: 2.0.3-alpha

 Attachments: HDFS-3373.branch-23.patch, HDFS-3373.branch23.patch, 
 HDFS-3373.trunk.patch, HDFS-3373.trunk.patch.1, HDFS-3373.trunk.patch.2, 
 HDFS-3373.trunk.patch.3, HDFS-3373.trunk.patch.3, HDFS-3373.trunk.patch.4


 As noted by Nicholas in HDFS-3359, FileContext doesn't have a close() method, 
 and thus never calls DFSClient.close(). This means that, until finalizers 
 run, DFSClient will hold on to its SocketCache object and potentially have a 
 lot of outstanding sockets/fds held on to.



[jira] [Commented] (HDFS-3373) FileContext HDFS implementation can leak socket caches

2012-09-28 Thread John George (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465718#comment-13465718
 ] 

John George commented on HDFS-3373:
---

In 2.0, setting the cache capacity to 0 disabled the cache; this (along with 
some other changes) was not in 0.23. TestSocketCache was a test file to check 
whether cache disabling worked. Since 0.23 does not have that, I did not port 
the test.

 FileContext HDFS implementation can leak socket caches
 --

 Key: HDFS-3373
 URL: https://issues.apache.org/jira/browse/HDFS-3373
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Todd Lipcon
Assignee: John George
 Fix For: 2.0.3-alpha

 Attachments: HDFS-3373.branch-23.patch, HDFS-3373.branch23.patch, 
 HDFS-3373.trunk.patch, HDFS-3373.trunk.patch.1, HDFS-3373.trunk.patch.2, 
 HDFS-3373.trunk.patch.3, HDFS-3373.trunk.patch.3, HDFS-3373.trunk.patch.4


 As noted by Nicholas in HDFS-3359, FileContext doesn't have a close() method, 
 and thus never calls DFSClient.close(). This means that, until finalizers 
 run, DFSClient will hold on to its SocketCache object and potentially have a 
 lot of outstanding sockets/fds held on to.



[jira] [Commented] (HDFS-3373) FileContext HDFS implementation can leak socket caches

2012-09-28 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465721#comment-13465721
 ] 

Robert Joseph Evans commented on HDFS-3373:
---

Makes sense.  Because it is such a straightforward patch, I feel OK checking 
the code in.  Thanks for the work, John.

 FileContext HDFS implementation can leak socket caches
 --

 Key: HDFS-3373
 URL: https://issues.apache.org/jira/browse/HDFS-3373
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Todd Lipcon
Assignee: John George
 Fix For: 2.0.3-alpha

 Attachments: HDFS-3373.branch-23.patch, HDFS-3373.branch23.patch, 
 HDFS-3373.trunk.patch, HDFS-3373.trunk.patch.1, HDFS-3373.trunk.patch.2, 
 HDFS-3373.trunk.patch.3, HDFS-3373.trunk.patch.3, HDFS-3373.trunk.patch.4


 As noted by Nicholas in HDFS-3359, FileContext doesn't have a close() method, 
 and thus never calls DFSClient.close(). This means that, until finalizers 
 run, DFSClient will hold on to its SocketCache object and potentially have a 
 lot of outstanding sockets/fds held on to.



[jira] [Updated] (HDFS-3373) FileContext HDFS implementation can leak socket caches

2012-09-28 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HDFS-3373:
--

  Resolution: Fixed
   Fix Version/s: 0.23.4
Target Version/s: 2.0.0-alpha, 0.23.3  (was: 0.23.3, 2.0.0-alpha)
  Status: Resolved  (was: Patch Available)

I pulled this into branch-0.23 too

 FileContext HDFS implementation can leak socket caches
 --

 Key: HDFS-3373
 URL: https://issues.apache.org/jira/browse/HDFS-3373
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Todd Lipcon
Assignee: John George
 Fix For: 0.23.4, 2.0.3-alpha

 Attachments: HDFS-3373.branch-23.patch, HDFS-3373.branch23.patch, 
 HDFS-3373.trunk.patch, HDFS-3373.trunk.patch.1, HDFS-3373.trunk.patch.2, 
 HDFS-3373.trunk.patch.3, HDFS-3373.trunk.patch.3, HDFS-3373.trunk.patch.4


 As noted by Nicholas in HDFS-3359, FileContext doesn't have a close() method, 
 and thus never calls DFSClient.close(). This means that, until finalizers 
 run, DFSClient will hold on to its SocketCache object and potentially have a 
 lot of outstanding sockets/fds held on to.



[jira] [Created] (HDFS-3991) NN webui should mention safemode

2012-09-28 Thread Andy Isaacson (JIRA)
Andy Isaacson created HDFS-3991:
---

 Summary: NN webui should mention safemode
 Key: HDFS-3991
 URL: https://issues.apache.org/jira/browse/HDFS-3991
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Andy Isaacson
Assignee: Andy Isaacson
Priority: Minor


The dfshealth.jsp page should say "in safemode" when the NN is in safemode.  
Perhaps it should also give a reason for safemode, like "waiting for 888/999 
blocks to become available, 333 available currently".

Also it could mention how many DNs are expected (or were seen before the last 
reboot, or similar).



[jira] [Updated] (HDFS-3991) NN webui should mention safemode more clearly

2012-09-28 Thread Andy Isaacson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Isaacson updated HDFS-3991:


Summary: NN webui should mention safemode more clearly  (was: NN webui 
should mention safemode)

 NN webui should mention safemode more clearly
 -

 Key: HDFS-3991
 URL: https://issues.apache.org/jira/browse/HDFS-3991
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Andy Isaacson
Assignee: Andy Isaacson
Priority: Minor

 The dfshealth.jsp page should say "in safemode" when the NN is in safemode.  
 Perhaps it should also give a reason for safemode, like "waiting for 888/999 
 blocks to become available, 333 available currently".
 Also it could mention how many DNs are expected (or were seen before the last 
 reboot, or similar).



[jira] [Updated] (HDFS-3991) NN webui should mention safemode more clearly

2012-09-28 Thread Andy Isaacson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Isaacson updated HDFS-3991:


Description: 
The dfshealth.jsp page says

bq. Safe mode is ON. The reported blocks 0 needs additional 308 blocks to reach 
the threshold 0.9990 of total blocks 308. Safe mode will be turned off 
automatically.

when safemode is on, and nothing when safemode is off.  I suggest it should 
say "Safe mode is off." when the NN is not in safemode.

It would also be nice if the message said how many DNs are expected (or were 
seen before the last reboot, or similar).

  was:
The dfshealth.jsp page should say "in safemode" when the NN is in safemode.  
Perhaps it should also give a reason for safemode, like "waiting for 888/999 
blocks to become available, 333 available currently".

Also it could mention how many DNs are expected (or were seen before the last 
reboot, or similar).


 NN webui should mention safemode more clearly
 -

 Key: HDFS-3991
 URL: https://issues.apache.org/jira/browse/HDFS-3991
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Andy Isaacson
Assignee: Andy Isaacson
Priority: Minor

 The dfshealth.jsp page says
 bq. Safe mode is ON. The reported blocks 0 needs additional 308 blocks to 
 reach the threshold 0.9990 of total blocks 308. Safe mode will be turned off 
 automatically.
 when safemode is on, and nothing when safemode is off.  I suggest it should 
 say "Safe mode is off." when the NN is not in safemode.
 It would also be nice if the message said how many DNs are expected (or were 
 seen before the last reboot, or similar).



[jira] [Commented] (HDFS-3701) HDFS may miss the final block when reading a file opened for writing if one of the datanode is dead

2012-09-28 Thread Matt Foley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465901#comment-13465901
 ] 

Matt Foley commented on HDFS-3701:
--

merged to branch-1.1

 HDFS may miss the final block when reading a file opened for writing if one 
 of the datanode is dead
 ---

 Key: HDFS-3701
 URL: https://issues.apache.org/jira/browse/HDFS-3701
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 1.0.3
Reporter: nkeywal
Assignee: nkeywal
Priority: Critical
 Fix For: 1.1.0

 Attachments: HDFS-3701.branch-1.v2.merged.patch, 
 HDFS-3701.branch-1.v3.patch, HDFS-3701.branch-1.v4.patch, 
 HDFS-3701.ontopof.v1.patch, HDFS-3701.patch


 When the file is opened for writing, the DFSClient calls one of the datanodes 
 owning the last block to get its size. If this datanode is dead, the socket 
 exception is swallowed and the size of this last block equals zero. 
 This seems to be fixed on trunk, but I didn't find a related Jira. On 1.0.3, 
 it's not fixed. It's in the same area as HDFS-1950 or HDFS-3222.
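The shape of a fix, as a hedged sketch (the interfaces are illustrative, not DFSClient's real ones): ask each replica of the last block for its visible length, and only fail, rather than silently reporting length 0, when every datanode is unreachable:

```java
import java.io.IOException;
import java.util.List;

class LastBlockLength {
    interface Replica {
        long fetchVisibleLength() throws IOException;
    }

    static long lengthFromAnyReplica(List<Replica> replicas) throws IOException {
        IOException lastFailure = null;
        for (Replica r : replicas) {
            try {
                return r.fetchVisibleLength(); // first live datanode wins
            } catch (IOException e) {
                lastFailure = e; // dead DN: don't swallow into "0", try the next
            }
        }
        // Only fail (rather than return 0) when every replica was unreachable.
        throw lastFailure != null ? lastFailure : new IOException("no replicas");
    }
}
```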



[jira] [Commented] (HDFS-2751) Datanode drops OS cache behind reads even for short reads

2012-09-28 Thread Matt Foley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465905#comment-13465905
 ] 

Matt Foley commented on HDFS-2751:
--

merged to branch-1.1

 Datanode drops OS cache behind reads even for short reads
 -

 Key: HDFS-2751
 URL: https://issues.apache.org/jira/browse/HDFS-2751
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.23.0, 0.24.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 0.24.0, 0.23.1, 1.1.0

 Attachments: HDFS-2751.branch-1.patch, hdfs-2751.txt, hdfs-2751.txt


 HDFS-2465 has some code which attempts to disable the drop cache behind 
 reads functionality when the reads are under 256KB (e.g. HBase random 
 access). But this check was missing in the {{close()}} function, so it always 
 drops cache behind reads regardless of the size of the read. This hurts HBase 
 random read performance when this patch is enabled.
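The missing guard can be illustrated with a minimal predicate; the constant and names here are hypothetical, not BlockSender's actual fields:

```java
class ReadCachePolicy {
    static final long SHORT_READ_THRESHOLD = 256 * 1024; // 256KB

    // Decide on close() whether to advise the OS to drop cached pages.
    static boolean shouldDropCacheBehindReads(boolean dropBehindEnabled, long bytesRead) {
        // A short random read (e.g. an HBase point lookup) should leave the
        // page cache alone; dropping it evicts pages likely to be re-read.
        return dropBehindEnabled && bytesRead > SHORT_READ_THRESHOLD;
    }
}
```

The bug was that close() skipped the bytesRead check and applied the drop-behind advice unconditionally.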



[jira] [Updated] (HDFS-2751) Datanode drops OS cache behind reads even for short reads

2012-09-28 Thread Matt Foley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HDFS-2751:
-

Fix Version/s: (was: 1.2.0)
   1.1.0

 Datanode drops OS cache behind reads even for short reads
 -

 Key: HDFS-2751
 URL: https://issues.apache.org/jira/browse/HDFS-2751
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.23.0, 0.24.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 0.24.0, 0.23.1, 1.1.0

 Attachments: HDFS-2751.branch-1.patch, hdfs-2751.txt, hdfs-2751.txt


 HDFS-2465 has some code which attempts to disable the drop cache behind 
 reads functionality when the reads are under 256KB (e.g. HBase random 
 access). But this check was missing in the {{close()}} function, so it always 
 drops cache behind reads regardless of the size of the read. This hurts HBase 
 random read performance when this patch is enabled.



[jira] [Updated] (HDFS-2751) Datanode drops OS cache behind reads even for short reads

2012-09-28 Thread Matt Foley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HDFS-2751:
-

Target Version/s: 0.23.0, 0.24.0, 1.1.0  (was: 0.23.0, 0.24.0)

 Datanode drops OS cache behind reads even for short reads
 -

 Key: HDFS-2751
 URL: https://issues.apache.org/jira/browse/HDFS-2751
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.23.0, 0.24.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 0.24.0, 0.23.1, 1.1.0

 Attachments: HDFS-2751.branch-1.patch, hdfs-2751.txt, hdfs-2751.txt


 HDFS-2465 has some code which attempts to disable the drop cache behind 
 reads functionality when the reads are under 256KB (e.g. HBase random 
 access). But this check was missing in the {{close()}} function, so it always 
 drops cache behind reads regardless of the size of the read. This hurts HBase 
 random read performance when this patch is enabled.



[jira] [Commented] (HDFS-3979) Fix hsync and hflush semantics.

2012-09-28 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465915#comment-13465915
 ] 

Lars Hofhansl commented on HDFS-3979:
-

Enqueueing the seqno at the end seems like the best approach. (Indeed, this is done 
in the 0.20.x code, as both of you said.) 
I wonder why this was changed? I will have a new patch momentarily.
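The ordering being proposed can be sketched as follows. The names here are hypothetical, not the actual BlockReceiver API; the point is only that the local flush happens before the seqno becomes eligible for an acknowledgement, so a client can never receive an ack for data the DN has not persisted:

```java
// Minimal sketch of the ack-ordering fix discussed above (illustrative names).
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class AckOrderingSketch {
    final Queue<Long> ackQueue = new ArrayDeque<>();
    final List<String> events = new ArrayList<>();

    void flushToDisk(long seqno) {
        events.add("flush:" + seqno); // stand-in for the real sync/flush to disk
    }

    void receivePacket(long seqno) {
        flushToDisk(seqno);            // 1. persist the packet locally first
        ackQueue.add(seqno);           // 2. only then enqueue the seqno for an ack
        events.add("enqueue:" + seqno);
    }
}
```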


 Fix hsync and hflush semantics.
 ---

 Key: HDFS-3979
 URL: https://issues.apache.org/jira/browse/HDFS-3979
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node, hdfs client
Affects Versions: 0.22.0, 0.23.0, 2.0.0-alpha
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Attachments: hdfs-3979-sketch.txt


 See discussion in HDFS-744. The actual sync/flush operation in BlockReceiver 
 is not on a synchronous path from the DFSClient, hence it is possible that a 
 DN loses data that it has already acknowledged as persisted to a client.
 Edit: Spelling.



[jira] [Updated] (HDFS-3979) Fix hsync and hflush semantics.

2012-09-28 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HDFS-3979:


Attachment: hdfs-3979-v2.txt

New patch. The order of local operations and waiting for downstream DNs now 
reflects the pre-HDFS-265 logic.

 Fix hsync and hflush semantics.
 ---

 Key: HDFS-3979
 URL: https://issues.apache.org/jira/browse/HDFS-3979
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node, hdfs client
Affects Versions: 0.22.0, 0.23.0, 2.0.0-alpha
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Attachments: hdfs-3979-sketch.txt, hdfs-3979-v2.txt


 See discussion in HDFS-744. The actual sync/flush operation in BlockReceiver 
 is not on a synchronous path from the DFSClient, hence it is possible that a 
 DN loses data that it has already acknowledged as persisted to a client.
 Edit: Spelling.



[jira] [Commented] (HDFS-3979) Fix hsync and hflush semantics.

2012-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13465987#comment-13465987
 ] 

Hadoop QA commented on HDFS-3979:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12547049/hdfs-3979-v2.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3247//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3247//console

This message is automatically generated.

 Fix hsync and hflush semantics.
 ---

 Key: HDFS-3979
 URL: https://issues.apache.org/jira/browse/HDFS-3979
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node, hdfs client
Affects Versions: 0.22.0, 0.23.0, 2.0.0-alpha
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Attachments: hdfs-3979-sketch.txt, hdfs-3979-v2.txt


 See discussion in HDFS-744. The actual sync/flush operation in BlockReceiver 
 is not on a synchronous path from the DFSClient, hence it is possible that a 
 DN loses data that it has already acknowledged as persisted to a client.
 Edit: Spelling.



[jira] [Commented] (HDFS-3698) TestHftpFileSystem is failing in branch-1 due to changed default secure port

2012-09-28 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466022#comment-13466022
 ] 

Aaron T. Myers commented on HDFS-3698:
--

Thanks guys.

 TestHftpFileSystem is failing in branch-1 due to changed default secure port
 

 Key: HDFS-3698
 URL: https://issues.apache.org/jira/browse/HDFS-3698
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 1.1.0, 1.2.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 1.1.0, 1.2.0

 Attachments: HDFS-3698.patch


 This test is failing since the default secure port changed to the HTTP port 
 upon the commit of HDFS-2617.



[jira] [Commented] (HDFS-3896) Add place holder for dfs.namenode.rpc-address and dfs.namenode.servicerpc-address to hdfs-default.xml

2012-09-28 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466051#comment-13466051
 ] 

Aaron T. Myers commented on HDFS-3896:
--

+1, the latest patch looks good to me. I'm going to commit this momentarily.

 Add place holder for dfs.namenode.rpc-address and 
 dfs.namenode.servicerpc-address to hdfs-default.xml
 -

 Key: HDFS-3896
 URL: https://issues.apache.org/jira/browse/HDFS-3896
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Jeff Lord
Assignee: Jeff Lord
Priority: Minor
 Attachments: hdfs-default-1.patch, hdfs-default-2.patch, 
 hdfs-default-3.patch, hdfs-default.patch


 Currently there are mentions of these properties in the docs but not much 
 else.
 Would make sense to have empty place holders in hdfs-default.xml to clarify 
 where they go and what they are.
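Such placeholders would look roughly like the sketch below. The description text is paraphrased for illustration; the wording actually committed to hdfs-default.xml may differ:

```xml
<!-- Illustrative placeholders; committed descriptions may differ. -->
<property>
  <name>dfs.namenode.rpc-address</name>
  <value></value>
  <description>
    RPC address that handles all client requests. If empty, the NameNode
    address is derived from fs.defaultFS.
  </description>
</property>

<property>
  <name>dfs.namenode.servicerpc-address</name>
  <value></value>
  <description>
    Separate RPC address for HDFS services (e.g. DataNodes) to contact the
    NameNode, keeping service traffic off the client RPC port. Falls back to
    dfs.namenode.rpc-address when unset.
  </description>
</property>
```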



[jira] [Updated] (HDFS-3896) Add descriptions for dfs.namenode.rpc-address and dfs.namenode.servicerpc-address to hdfs-default.xml

2012-09-28 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3896:
-

Summary: Add descriptions for dfs.namenode.rpc-address and 
dfs.namenode.servicerpc-address to hdfs-default.xml  (was: Add place holder for 
dfs.namenode.rpc-address and dfs.namenode.servicerpc-address to 
hdfs-default.xml)

 Add descriptions for dfs.namenode.rpc-address and 
 dfs.namenode.servicerpc-address to hdfs-default.xml
 -

 Key: HDFS-3896
 URL: https://issues.apache.org/jira/browse/HDFS-3896
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Jeff Lord
Assignee: Jeff Lord
Priority: Minor
 Attachments: hdfs-default-1.patch, hdfs-default-2.patch, 
 hdfs-default-3.patch, hdfs-default.patch


 Currently there are mentions of these properties in the docs but not much 
 else.
 Would make sense to have empty place holders in hdfs-default.xml to clarify 
 where they go and what they are.



[jira] [Updated] (HDFS-3896) Add descriptions for dfs.namenode.rpc-address and dfs.namenode.servicerpc-address to hdfs-default.xml

2012-09-28 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3896:
-

  Resolution: Fixed
   Fix Version/s: 2.0.3-alpha
Target Version/s:   (was: 2.0.2-alpha)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've just committed this to trunk and branch-2. Thanks a lot for the 
contribution, Jeff.

 Add descriptions for dfs.namenode.rpc-address and 
 dfs.namenode.servicerpc-address to hdfs-default.xml
 -

 Key: HDFS-3896
 URL: https://issues.apache.org/jira/browse/HDFS-3896
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Jeff Lord
Assignee: Jeff Lord
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: hdfs-default-1.patch, hdfs-default-2.patch, 
 hdfs-default-3.patch, hdfs-default.patch


 Currently there are mentions of these properties in the docs but not much 
 else.
 Would make sense to have empty place holders in hdfs-default.xml to clarify 
 where they go and what they are.



[jira] [Commented] (HDFS-3976) SampleQuantiles#query is O(N^2) instead of O(N)

2012-09-28 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466054#comment-13466054
 ] 

Aaron T. Myers commented on HDFS-3976:
--

The patch looks good to me as well. Committing momentarily.

 SampleQuantiles#query is O(N^2) instead of O(N)
 ---

 Key: HDFS-3976
 URL: https://issues.apache.org/jira/browse/HDFS-3976
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Attachments: hdfs-3976-1.patch


 SampleQuantiles#query() makes O(N) calls to LinkedList#get() in a loop, rather 
 than using an iterator. This makes query() O(N^2) rather than O(N).
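The complexity difference can be shown with a small standalone example (not the actual SampleQuantiles code): `LinkedList.get(i)` walks the list from the nearest end on every call, so an indexed loop is quadratic, while a single iterator pass visits each node once.

```java
import java.util.LinkedList;

// Illustrates the O(N^2) indexed-access pattern vs. the O(N) iterator pattern.
public class QuantileQuerySketch {
    // O(N^2): each get(i) traverses up to i nodes from the head.
    static long sumByIndex(LinkedList<Long> samples) {
        long sum = 0;
        for (int i = 0; i < samples.size(); i++) {
            sum += samples.get(i);
        }
        return sum;
    }

    // O(N): one pass with the list's iterator (via for-each).
    static long sumByIterator(LinkedList<Long> samples) {
        long sum = 0;
        for (long v : samples) {
            sum += v;
        }
        return sum;
    }
}
```

Both methods compute the same result; only the traversal cost differs.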



[jira] [Commented] (HDFS-3896) Add descriptions for dfs.namenode.rpc-address and dfs.namenode.servicerpc-address to hdfs-default.xml

2012-09-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466060#comment-13466060
 ] 

Hudson commented on HDFS-3896:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #2809 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2809/])
HDFS-3896. Add descriptions for dfs.namenode.rpc-address and 
dfs.namenode.servicerpc-address to hdfs-default.xml. Contributed by Jeff Lord. 
(Revision 1391708)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1391708
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 Add descriptions for dfs.namenode.rpc-address and 
 dfs.namenode.servicerpc-address to hdfs-default.xml
 -

 Key: HDFS-3896
 URL: https://issues.apache.org/jira/browse/HDFS-3896
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Jeff Lord
Assignee: Jeff Lord
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: hdfs-default-1.patch, hdfs-default-2.patch, 
 hdfs-default-3.patch, hdfs-default.patch


 Currently there are mentions of these properties in the docs but not much 
 else.
 Would make sense to have empty place holders in hdfs-default.xml to clarify 
 where they go and what they are.



[jira] [Commented] (HDFS-3896) Add descriptions for dfs.namenode.rpc-address and dfs.namenode.servicerpc-address to hdfs-default.xml

2012-09-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466063#comment-13466063
 ] 

Hudson commented on HDFS-3896:
--

Integrated in Hadoop-Common-trunk-Commit #2787 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2787/])
HDFS-3896. Add descriptions for dfs.namenode.rpc-address and 
dfs.namenode.servicerpc-address to hdfs-default.xml. Contributed by Jeff Lord. 
(Revision 1391708)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1391708
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 Add descriptions for dfs.namenode.rpc-address and 
 dfs.namenode.servicerpc-address to hdfs-default.xml
 -

 Key: HDFS-3896
 URL: https://issues.apache.org/jira/browse/HDFS-3896
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Jeff Lord
Assignee: Jeff Lord
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: hdfs-default-1.patch, hdfs-default-2.patch, 
 hdfs-default-3.patch, hdfs-default.patch


 Currently there are mentions of these properties in the docs but not much 
 else.
 Would make sense to have empty place holders in hdfs-default.xml to clarify 
 where they go and what they are.



[jira] [Commented] (HDFS-3896) Add descriptions for dfs.namenode.rpc-address and dfs.namenode.servicerpc-address to hdfs-default.xml

2012-09-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466065#comment-13466065
 ] 

Hudson commented on HDFS-3896:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2850 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2850/])
HDFS-3896. Add descriptions for dfs.namenode.rpc-address and 
dfs.namenode.servicerpc-address to hdfs-default.xml. Contributed by Jeff Lord. 
(Revision 1391708)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1391708
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 Add descriptions for dfs.namenode.rpc-address and 
 dfs.namenode.servicerpc-address to hdfs-default.xml
 -

 Key: HDFS-3896
 URL: https://issues.apache.org/jira/browse/HDFS-3896
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Jeff Lord
Assignee: Jeff Lord
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: hdfs-default-1.patch, hdfs-default-2.patch, 
 hdfs-default-3.patch, hdfs-default.patch


 Currently there are mentions of these properties in the docs but not much 
 else.
 Would make sense to have empty place holders in hdfs-default.xml to clarify 
 where they go and what they are.



[jira] [Commented] (HDFS-3979) Fix hsync and hflush semantics.

2012-09-28 Thread Kan Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466082#comment-13466082
 ] 

Kan Zhang commented on HDFS-3979:
-

bq. I wonder why this was changed?

My guess is HDFS-265 intends to implement API3 rather than API4. 
https://issues.apache.org/jira/browse/HDFS-265?focusedCommentId=12710542&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12710542

 Fix hsync and hflush semantics.
 ---

 Key: HDFS-3979
 URL: https://issues.apache.org/jira/browse/HDFS-3979
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node, hdfs client
Affects Versions: 0.22.0, 0.23.0, 2.0.0-alpha
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Attachments: hdfs-3979-sketch.txt, hdfs-3979-v2.txt


 See discussion in HDFS-744. The actual sync/flush operation in BlockReceiver 
 is not on a synchronous path from the DFSClient, hence it is possible that a 
 DN loses data that it has already acknowledged as persisted to a client.
 Edit: Spelling.



[jira] [Commented] (HDFS-3979) Fix hsync and hflush semantics.

2012-09-28 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13466113#comment-13466113
 ] 

Lars Hofhansl commented on HDFS-3979:
-

I see. Thanks Kan. So now we have API4 and (with HDFS-744) API5.

For applications like HBase we'd like API4 as well as API5.
(API4 allows a hypothetical kill -9 of all DNs without loss of acknowledged 
data; API5 allows HW failures of all data nodes - i.e. a DC outage - without loss 
of acknowledged data.)


 Fix hsync and hflush semantics.
 ---

 Key: HDFS-3979
 URL: https://issues.apache.org/jira/browse/HDFS-3979
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node, hdfs client
Affects Versions: 0.22.0, 0.23.0, 2.0.0-alpha
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Attachments: hdfs-3979-sketch.txt, hdfs-3979-v2.txt


 See discussion in HDFS-744. The actual sync/flush operation in BlockReceiver 
 is not on a synchronous path from the DFSClient, hence it is possible that a 
 DN loses data that it has already acknowledged as persisted to a client.
 Edit: Spelling.
