[jira] [Commented] (HDFS-2436) FSNamesystem.setTimes(..) expects the path is a file.

2011-12-21 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13173949#comment-13173949
 ] 

Uma Maheswara Rao G commented on HDFS-2436:
---

Thanks Konstantin,
I was not aware of the previous discussion and thought that this was missing 
behaviour.
Also, I did not find any test for the lack of support on directories. :-(
And the other point was already mentioned by Nicholas.

{quote}
It is quite confusing whether atime and mtime are supported for directories. 
Let's file a JIRA to fix it.
{quote}
Do you mean moving the fields (atime and mtime) to INodeFile if they are 
supported only for files?

Thanks
Uma



 FSNamesystem.setTimes(..) expects the path is a file.
 -

 Key: HDFS-2436
 URL: https://issues.apache.org/jira/browse/HDFS-2436
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.205.0
Reporter: Arpit Gupta
Assignee: Uma Maheswara Rao G
 Fix For: 0.23.0, 0.24.0

 Attachments: HDFS-2436.patch, HDFS-2436.patch, HDFS-2436.patch


 FSNamesystem.setTimes(..) does not work if the path is a directory.
 Arpit found this bug when testing webhdfs:
 {quote}
 settimes api is working when called on a file, but when called on a dir it 
 returns a 404. I should be able to set time on both a file and a directory.
 {quote}
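The failure mode quoted above can be sketched in miniature: a file-only check in the setTimes path turns a directory argument into a FileNotFoundException, which webhdfs then surfaces as a 404. The class and helper below are illustrative stand-ins, not the actual FSNamesystem code.

```java
import java.io.FileNotFoundException;

// Hypothetical sketch (NOT the real FSNamesystem) of the kind of file-only
// check that makes setTimes fail when the path names a directory.
public class SetTimesSketch {
    // Stand-in for the namespace lookup; toy rule just for this sketch.
    static boolean isDirectory(String src) {
        return src.endsWith("/") || "dir".equals(src);
    }

    static void setTimes(String src, long mtime, long atime)
            throws FileNotFoundException {
        if (isDirectory(src)) {
            // A check like this is what surfaces as a 404 through webhdfs.
            throw new FileNotFoundException("File does not exist: " + src);
        }
        // ... otherwise update the file inode's mtime/atime here ...
    }

    public static void main(String[] args) {
        try {
            setTimes("dir", 0L, 0L);
        } catch (FileNotFoundException expected) {
            System.out.println("directory rejected: " + expected.getMessage());
        }
    }
}
```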

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2436) FSNamesystem.setTimes(..) expects the path is a file.

2011-12-21 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13173955#comment-13173955
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-2436:
--

 Do you mean moving the fields (atime and mtime) to INodeFile if they are 
 supported only for files?

If mtime and atime are supported only for files, it makes sense to move the 
fields to INodeFile since it reduces memory usage.
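What "moving the fields to INodeFile" could look like, in miniature: directory inodes drop the two long fields entirely, so only file inodes pay for the timestamps. The class and field names below mirror the discussion but are not the actual HDFS sources.

```java
// Hypothetical sketch of the proposed refactoring; names are illustrative.
public class INodeLayoutSketch {
    static abstract class INode {
        final String name;
        INode(String name) { this.name = name; }
    }

    // Directories carry no mtime/atime fields: two fewer longs per inode.
    static class INodeDirectory extends INode {
        INodeDirectory(String name) { super(name); }
    }

    // Files keep both timestamps, so setTimes has somewhere to write
    // only when the path resolves to a file.
    static class INodeFile extends INode {
        long modificationTime;  // mtime, millis since the epoch
        long accessTime;        // atime, millis since the epoch
        INodeFile(String name, long mtime, long atime) {
            super(name);
            this.modificationTime = mtime;
            this.accessTime = atime;
        }
    }

    public static void main(String[] args) {
        INodeFile f = new INodeFile("part-00000", 100L, 200L);
        f.accessTime = 300L;  // setTimes on a file: supported
        System.out.println(f.name + " atime=" + f.accessTime);
    }
}
```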





[jira] [Created] (HDFS-2712) setTimes should support only for files and move the mtime/atime fields down to iNodeFile.

2011-12-21 Thread Uma Maheswara Rao G (Created) (JIRA)
setTimes should support only for files and move the mtime/atime fields down to 
iNodeFile.
-

 Key: HDFS-2712
 URL: https://issues.apache.org/jira/browse/HDFS-2712
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0, 0.24.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G


After the discussion in HDFS-2436, it turns out the unsupported behaviour for 
setTimes on directories was intentional (HADOOP-1869).

But the current INode class hierarchy appears to support atime and mtime for 
directories as well, whereas per HADOOP-1869 we support them only for files. To 
avoid confusion, we can move the mtime and atime fields to INodeFile, since we 
plan to support setTimes only for files, and also restrict setTimes on 
directories (which is implemented in HDFS-2436).






[jira] [Commented] (HDFS-2436) FSNamesystem.setTimes(..) expects the path is a file.

2011-12-21 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13173960#comment-13173960
 ] 

Uma Maheswara Rao G commented on HDFS-2436:
---

Thanks Nicholas, I just filed HDFS-2712.
The goal of that JIRA is to move the atime and mtime fields down to INodeFile, 
as this will reduce memory consumption for directories, and also to restrict 
setTimes access to files only. We can continue the discussion in that 
JIRA.

Thanks
Uma





[jira] [Commented] (HDFS-2185) HA: ZK-based FailoverController

2011-12-21 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13173966#comment-13173966
 ] 

Uma Maheswara Rao G commented on HDFS-2185:
---

Hi Todd,

 A small question before going through the proposal in detail:
 
   ZooKeeper already has built-in leader election recipe implementations 
ready, right? Are we going to reuse those implementations? It seems to me 
that we are re-implementing leader election here. 

 A couple of JIRAs from ZooKeeper: ZOOKEEPER-1209, ZOOKEEPER-1095, ZOOKEEPER-1080

Thanks
Uma
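For readers unfamiliar with the recipe the comment refers to: each candidate creates an EPHEMERAL_SEQUENTIAL znode under an election parent; the lowest sequence number is the leader, and every other candidate watches only its immediate predecessor (avoiding the herd effect). The sketch below shows just the selection logic with made-up znode names; the actual ZooKeeper calls are omitted.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Selection step of the ZooKeeper leader election recipe (illustrative only).
public class LeaderElectionSketch {

    /** Returns null if `mine` is the leader, else the znode to watch. */
    static String nodeToWatch(String mine, List<String> children) {
        List<String> sorted = new ArrayList<>(children);
        // ZooKeeper zero-pads sequence numbers, so lexical order
        // matches creation order.
        Collections.sort(sorted);
        int idx = sorted.indexOf(mine);
        if (idx <= 0) {
            return null;  // lowest sequence number: we are the leader
        }
        return sorted.get(idx - 1);  // watch only the immediate predecessor
    }

    public static void main(String[] args) {
        List<String> children =
            Arrays.asList("n_0000000003", "n_0000000001", "n_0000000002");
        System.out.println(nodeToWatch("n_0000000001", children));  // null (leader)
        System.out.println(nodeToWatch("n_0000000003", children));  // n_0000000002
    }
}
```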

 HA: ZK-based FailoverController
 ---

 Key: HDFS-2185
 URL: https://issues.apache.org/jira/browse/HDFS-2185
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Affects Versions: HA branch (HDFS-1623)
Reporter: Eli Collins
Assignee: Todd Lipcon

 This jira is for a ZK-based FailoverController daemon. The FailoverController 
 is a separate daemon from the NN that does the following:
 * Initiates leader election (via ZK) when necessary
 * Performs health monitoring (aka failure detection)
 * Performs fail-over (standby to active and active to standby transitions)
 * Heartbeats to ensure liveness
 It should have the same/similar interface as the Linux HA RM to aid 
 pluggability.





[jira] [Commented] (HDFS-2712) setTimes should support only for files and move the mtime/atime fields down to iNodeFile.

2011-12-21 Thread Suresh Srinivas (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13173986#comment-13173986
 ] 

Suresh Srinivas commented on HDFS-2712:
---

I would prefer to keep mtime so it captures the creation time. This might come 
in handy for snapshots. Is mtime not set to the creation time currently?





[jira] [Commented] (HDFS-2712) setTimes should support only for files and move the mtime/atime fields down to iNodeFile.

2011-12-21 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13173999#comment-13173999
 ] 

Uma Maheswara Rao G commented on HDFS-2712:
---

Agreed, we should keep mtime, since we capture it at creation time.
We also access the getModificationTime API on INode when logging the mkdir op 
to the edit log.
Nicholas, what do you say?






[jira] [Updated] (HDFS-2712) setTimes should support only for files and move the atime field down to iNodeFile.

2011-12-21 Thread Uma Maheswara Rao G (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-2712:
--

Description: 
After the discussion in HDFS-2436, it turns out the unsupported behaviour for 
setTimes on directories was intentional (HADOOP-1869).

But the current INode class hierarchy appears to support atime for directories 
as well, whereas per HADOOP-1869 we support it only for files. To avoid 
confusion, we can move the atime field to INodeFile, since we plan to support 
setTimes only for files, and also restrict setTimes on directories (which is 
implemented in HDFS-2436).


  was:
After the discussion in HDFS-2436, it turns out the unsupported behaviour for 
setTimes on directories was intentional (HADOOP-1869).

But the current INode class hierarchy appears to support atime and mtime for 
directories as well, whereas per HADOOP-1869 we support them only for files. To 
avoid confusion, we can move the mtime and atime fields to INodeFile, since we 
plan to support setTimes only for files, and also restrict setTimes on 
directories (which is implemented in HDFS-2436).


Summary: setTimes should support only for files and move the atime 
field down to iNodeFile.  (was: setTimes should support only for files and move 
the mtime/atime fields down to iNodeFile.)





[jira] [Commented] (HDFS-13) filenames with ':' colon throws java.lang.IllegalArgumentException

2011-12-21 Thread Wijnand Suijlen (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174028#comment-13174028
 ] 

Wijnand Suijlen commented on HDFS-13:
-

I've recently been bitten by this issue, and while finding out why it is an 
issue at all, I was amazed that Hadoop implements its own URL class, which it 
calls org.apache.hadoop.fs.Path, and uses this class to interface with 
FileSystem. If I understand correctly, the reason for this custom URL class is 
that users can't be bothered to escape their paths. But I think the current 
interface leads to confusion. If FileSystem worked with just java.net.URI 
instead of its own Path class, it would be absolutely clear how to construct 
paths and when extra escaping is necessary, because Sun's/Oracle's javadocs are 
very clear. Of course, when working from the command line, it might still be 
convenient to have a convenience class like the current 
org.apache.hadoop.fs.Path to ease the burden of writing well-formed path names.

The colon is not the only problem I found related to 
org.apache.hadoop.fs.FileSystem and org.apache.hadoop.fs.Path. For example, 
FileSystem.checkPath does string comparisons on the authority part. The 
problem there has mostly been fixed by using a case-insensitive string 
comparison, but it will still give bad results if one authority is written 
with an IP address while the other uses a DNS name. 





 filenames with ':' colon throws java.lang.IllegalArgumentException
 --

 Key: HDFS-13
 URL: https://issues.apache.org/jira/browse/HDFS-13
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Lohit Vijayarenu
 Attachments: 2066_20071022.patch, HADOOP-2066.patch


 File names containing a colon ':' throw java.lang.IllegalArgumentException, 
 while the Linux file system supports them.
 $ hadoop dfs -put ./testfile-2007-09-24-03:00:00.gz filenametest
 Exception in thread "main" java.lang.IllegalArgumentException: 
 java.net.URISyntaxException: Relative path in absolute
 URI: testfile-2007-09-24-03:00:00.gz
   at org.apache.hadoop.fs.Path.initialize(Path.java:140)
   at org.apache.hadoop.fs.Path.<init>(Path.java:126)
   at org.apache.hadoop.fs.Path.<init>(Path.java:50)
   at org.apache.hadoop.fs.FileUtil.checkDest(FileUtil.java:273)
   at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:117)
   at 
 org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:776)
   at 
 org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:757)
   at org.apache.hadoop.fs.FsShell.copyFromLocal(FsShell.java:116)
   at org.apache.hadoop.fs.FsShell.run(FsShell.java:1229)
   at org.apache.hadoop.util.ToolBase.doMain(ToolBase.java:187)
   at org.apache.hadoop.fs.FsShell.main(FsShell.java:1342)
 Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
 testfile-2007-09-24-03:00:00.gz
   at java.net.URI.checkPath(URI.java:1787)
   at java.net.URI.<init>(URI.java:735)
   at org.apache.hadoop.fs.Path.initialize(Path.java:137)
   ... 10 more
 Path(String pathString), when given a filename which contains ':', treats it 
 as a URI and selects anything before the ':' as the
 scheme, which in this case is clearly not a valid scheme.





[jira] [Commented] (HDFS-2657) TestHttpFSServer and TestServerWebApp are failing on trunk

2011-12-21 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174049#comment-13174049
 ] 

Hudson commented on HDFS-2657:
--

Integrated in Hadoop-Hdfs-trunk #901 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/901/])
HDFS-2657. TestHttpFSServer and TestServerWebApp are failing on trunk. 
(tucu)

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1221580
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/service/security/DummyGroupMapping.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/TestServerWebApp1.properties
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/TestServerWebApp2.properties
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/testserverwebapp1.properties
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/testserverwebapp2.properties
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 TestHttpFSServer and TestServerWebApp are failing on trunk
 --

 Key: HDFS-2657
 URL: https://issues.apache.org/jira/browse/HDFS-2657
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Eli Collins
Assignee: Alejandro Abdelnur
 Attachments: HDFS-2657.patch


  org.apache.hadoop.fs.http.server.TestHttpFSServer.instrumentation
  org.apache.hadoop.lib.servlet.TestServerWebApp.lifecycle





[jira] [Commented] (HDFS-2646) Hadoop HttpFS introduced 4 findbug warnings.

2011-12-21 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174047#comment-13174047
 ] 

Hudson commented on HDFS-2646:
--

Integrated in Hadoop-Hdfs-trunk #901 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/901/])
HDFS-2646. Hadoop HttpFS introduced 4 findbug warnings. (tucu)

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1221572
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/dev-support
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/dev-support/findbugsExcludeFile.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/lang/XException.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/InputStreamEntity.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Hadoop HttpFS introduced 4 findbug warnings.
 

 Key: HDFS-2646
 URL: https://issues.apache.org/jira/browse/HDFS-2646
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.24.0, 0.23.1
Reporter: Uma Maheswara Rao G
Assignee: Alejandro Abdelnur
 Fix For: 0.24.0, 0.23.1

 Attachments: HDFS-2646.patch, HDFS-2646.patch


 https://builds.apache.org/job/PreCommit-HDFS-Build/1665//artifact/trunk/hadoop-hdfs-project/patchprocess/newPatchFindbugsWarningshadoop-hdfs-httpfs.html





[jira] [Commented] (HDFS-2646) Hadoop HttpFS introduced 4 findbug warnings.

2011-12-21 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174059#comment-13174059
 ] 

Hudson commented on HDFS-2646:
--

Integrated in Hadoop-Hdfs-0.23-Build #114 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/114/])
Merge -r 1221571:1221572 from trunk to branch. FIXES: HDFS-2646

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1221576
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/dev-support
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/dev-support/findbugsExcludeFile.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/lang/XException.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/InputStreamEntity.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt






[jira] [Commented] (HDFS-2657) TestHttpFSServer and TestServerWebApp are failing on trunk

2011-12-21 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174061#comment-13174061
 ] 

Hudson commented on HDFS-2657:
--

Integrated in Hadoop-Hdfs-0.23-Build #114 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/114/])
Merge -r 1221579:1221580 from trunk to branch. FIXES: HDFS-2657

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1221584
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/service/security/DummyGroupMapping.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/TestServerWebApp1.properties
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/TestServerWebApp2.properties
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/testserverwebapp1.properties
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/testserverwebapp2.properties
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt






[jira] [Commented] (HDFS-2705) HttpFS server should check that upload requests have correct content-type

2011-12-21 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174067#comment-13174067
 ] 

Hudson commented on HDFS-2705:
--

Integrated in Hadoop-Hdfs-0.23-Build #114 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/114/])
Merge -r 1221615:1221616 from trunk to branch. FIXES: HDFS-2705

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1221619
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/CheckUploadContentTypeFilter.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/webapp/WEB-INF/web.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestCheckUploadContentTypeFilter.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 HttpFS server should check that upload requests have correct content-type
 -

 Key: HDFS-2705
 URL: https://issues.apache.org/jira/browse/HDFS-2705
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.24.0, 0.23.1
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 0.24.0, 0.23.1

 Attachments: HDFS-2705.patch, HDFS-2705.patch, HDFS-2705.patch


 The append/create requests should require 'application/octet-stream' as the 
 content-type when uploading data. This is to prevent the form-encoded 
 content-type (used as the default by some HTTP libraries) or text-based 
 content-types from being used.
 If the form-encoded content-type is used, Jersey tries to process the upload 
 stream as parameters.
 If a text-based content-type is used, HTTP proxies/gateways could attempt 
 some transcoding on the stream, thus corrupting the data.
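The gate described above (implemented in CheckUploadContentTypeFilter, per the commit's file list) boils down to a small predicate. The sketch below is illustrative, not the actual filter: the method names are made up, and media-type parameters such as "; charset=..." are stripped before comparing.

```java
import java.util.Locale;

// Hypothetical sketch of the upload content-type check; names are illustrative.
public class UploadContentTypeCheck {
    static boolean isUploadOp(String op) {
        return "CREATE".equalsIgnoreCase(op) || "APPEND".equalsIgnoreCase(op);
    }

    static boolean contentTypeOk(String op, String contentType) {
        if (!isUploadOp(op)) {
            return true;  // only upload operations are restricted
        }
        if (contentType == null) {
            return false;
        }
        // Ignore parameters such as "; charset=..." when comparing media types.
        String mediaType = contentType.split(";")[0].trim().toLowerCase(Locale.ROOT);
        return "application/octet-stream".equals(mediaType);
    }

    public static void main(String[] args) {
        System.out.println(contentTypeOk("CREATE", "application/octet-stream"));
        System.out.println(contentTypeOk("CREATE", "application/x-www-form-urlencoded"));
        System.out.println(contentTypeOk("GETFILESTATUS", null));
    }
}
```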





[jira] [Commented] (HDFS-1972) HA: Datanode fencing mechanism

2011-12-21 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174069#comment-13174069
 ] 

Hudson commented on HDFS-1972:
--

Integrated in Hadoop-Hdfs-HAbranch-build #23 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-HAbranch-build/23/])
HDFS-1972. Fencing mechanism for block invalidations and replications. 
Contributed by Todd Lipcon.

todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1221608
Files : 
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/CHANGES.HDFS-1623.txt
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/InvalidateBlocks.java
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/NumberReplicas.java
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReplicationBlocks.java
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDatasetAsyncDiskService.java
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ActiveState.java
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/HAState.java
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/StandbyState.java
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeAdapter.java
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencingWithReplication.java
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
* 
/hadoop/common/branches/HDFS-1623/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestStandbyIsHot.java


 HA: Datanode fencing mechanism
 --

 Key: HDFS-1972
 URL: https://issues.apache.org/jira/browse/HDFS-1972
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node, ha, name-node
Reporter: Suresh Srinivas
Assignee: Todd Lipcon
 Fix For: HA branch (HDFS-1623)

 Attachments: hdfs-1972-v1.txt, hdfs-1972.txt, hdfs-1972.txt, 
 hdfs-1972.txt, hdfs-1972.txt, hdfs-1972.txt


 In a high availability setup, with an active and a standby namenode, there is 
 a possibility of two namenodes sending commands to the datanode. The datanode 
 must honor commands only from the active namenode and reject commands from 
 the standby, to prevent corruption. This invariant must hold during failover 
 and in abnormal states such as split brain. This jira covers the related 
 issues, the design of the solution, and its implementation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Commented] (HDFS-2657) TestHttpFSServer and TestServerWebApp are failing on trunk

2011-12-21 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174082#comment-13174082
 ] 

Hudson commented on HDFS-2657:
--

Integrated in Hadoop-Mapreduce-0.23-Build #135 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/135/])
Merge -r 1221579:1221580 from trunk to branch. FIXES: HDFS-2657

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1221584
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/service/security/DummyGroupMapping.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/TestServerWebApp1.properties
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/TestServerWebApp2.properties
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/testserverwebapp1.properties
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/testserverwebapp2.properties
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 TestHttpFSServer and TestServerWebApp are failing on trunk
 --

 Key: HDFS-2657
 URL: https://issues.apache.org/jira/browse/HDFS-2657
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Eli Collins
Assignee: Alejandro Abdelnur
 Attachments: HDFS-2657.patch


  org.apache.hadoop.fs.http.server.TestHttpFSServer.instrumentation
  org.apache.hadoop.lib.servlet.TestServerWebApp.lifecycle





[jira] [Commented] (HDFS-2705) HttpFS server should check that upload requests have correct content-type

2011-12-21 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174083#comment-13174083
 ] 

Hudson commented on HDFS-2705:
--

Integrated in Hadoop-Mapreduce-0.23-Build #135 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/135/])
Merge -r 1221615:1221616 from trunk to branch. FIXES: HDFS-2705

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1221619
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/CheckUploadContentTypeFilter.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/webapp/WEB-INF/web.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestCheckUploadContentTypeFilter.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 HttpFS server should check that upload requests have correct content-type
 -

 Key: HDFS-2705
 URL: https://issues.apache.org/jira/browse/HDFS-2705
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.24.0, 0.23.1
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 0.24.0, 0.23.1

 Attachments: HDFS-2705.patch, HDFS-2705.patch, HDFS-2705.patch


 The append/create requests should require 'application/octet-stream' as the 
 content-type when uploading data. This prevents the form-encoded content-type 
 (the default in some HTTP libraries) or text-based content-types from being 
 used.
 If the form-encoded content-type is used, Jersey tries to process the upload 
 stream as parameters.
 If a text-based content-type is used, HTTP proxies/gateways could attempt 
 transcoding on the stream, thus corrupting the data.
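The content-type rule above can be sketched as a small check. The class and method names below are hypothetical stand-ins; the actual patch implements the rule as a servlet filter (CheckUploadContentTypeFilter, listed in the changed files above), whose code is not shown in this thread.

```java
// Hypothetical sketch: reject upload requests whose Content-Type is not
// application/octet-stream, as described for HDFS-2705.
class ContentTypeCheck {

    /** Returns true only for the binary content-type HttpFS should accept on create/append. */
    static boolean isValidUploadContentType(String contentType) {
        if (contentType == null) {
            return false;
        }
        // Content-Type may carry parameters, e.g. "application/octet-stream; charset=..."
        String mediaType = contentType.split(";", 2)[0].trim();
        return mediaType.equalsIgnoreCase("application/octet-stream");
    }

    public static void main(String[] args) {
        System.out.println(isValidUploadContentType("application/octet-stream"));          // true
        System.out.println(isValidUploadContentType("application/x-www-form-urlencoded")); // false
        System.out.println(isValidUploadContentType("text/plain"));                        // false
    }
}
```

In the real filter this check would run before Jersey sees the request body, so a form-encoded upload is rejected instead of being parsed as parameters.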





[jira] [Commented] (HDFS-2646) Hadoop HttpFS introduced 4 findbug warnings.

2011-12-21 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174081#comment-13174081
 ] 

Hudson commented on HDFS-2646:
--

Integrated in Hadoop-Mapreduce-0.23-Build #135 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/135/])
Merge -r 1221571:1221572 from trunk to branch. FIXES: HDFS-2646

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1221576
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/dev-support
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/dev-support/findbugsExcludeFile.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/lang/XException.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/InputStreamEntity.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Hadoop HttpFS introduced 4 findbug warnings.
 

 Key: HDFS-2646
 URL: https://issues.apache.org/jira/browse/HDFS-2646
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.24.0, 0.23.1
Reporter: Uma Maheswara Rao G
Assignee: Alejandro Abdelnur
 Fix For: 0.24.0, 0.23.1

 Attachments: HDFS-2646.patch, HDFS-2646.patch


 https://builds.apache.org/job/PreCommit-HDFS-Build/1665//artifact/trunk/hadoop-hdfs-project/patchprocess/newPatchFindbugsWarningshadoop-hdfs-httpfs.html





[jira] [Commented] (HDFS-2705) HttpFS server should check that upload requests have correct content-type

2011-12-21 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174090#comment-13174090
 ] 

Uma Maheswara Rao G commented on HDFS-2705:
---

Looks like the JIRA id is missing in trunk CHANGES.txt :-)
{quote}
 HDFS-2657. TestHttpFSServer and TestServerWebApp are failing on trunk. 
(tucu)

HttpFS server should check that upload requests have correct 
content-type. (tucu)

Release 0.23.1 - UNRELEASED{quote}

 HttpFS server should check that upload requests have correct content-type
 -

 Key: HDFS-2705
 URL: https://issues.apache.org/jira/browse/HDFS-2705
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.24.0, 0.23.1
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 0.24.0, 0.23.1

 Attachments: HDFS-2705.patch, HDFS-2705.patch, HDFS-2705.patch


 The append/create requests should require 'application/octet-stream' as the 
 content-type when uploading data. This prevents the form-encoded content-type 
 (the default in some HTTP libraries) or text-based content-types from being 
 used.
 If the form-encoded content-type is used, Jersey tries to process the upload 
 stream as parameters.
 If a text-based content-type is used, HTTP proxies/gateways could attempt 
 transcoding on the stream, thus corrupting the data.





[jira] [Commented] (HDFS-2646) Hadoop HttpFS introduced 4 findbug warnings.

2011-12-21 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174093#comment-13174093
 ] 

Hudson commented on HDFS-2646:
--

Integrated in Hadoop-Mapreduce-trunk #934 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/934/])
HDFS-2646. Hadoop HttpFS introduced 4 findbug warnings. (tucu)

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1221572
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/dev-support
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/dev-support/findbugsExcludeFile.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/lang/XException.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/InputStreamEntity.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Hadoop HttpFS introduced 4 findbug warnings.
 

 Key: HDFS-2646
 URL: https://issues.apache.org/jira/browse/HDFS-2646
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.24.0, 0.23.1
Reporter: Uma Maheswara Rao G
Assignee: Alejandro Abdelnur
 Fix For: 0.24.0, 0.23.1

 Attachments: HDFS-2646.patch, HDFS-2646.patch


 https://builds.apache.org/job/PreCommit-HDFS-Build/1665//artifact/trunk/hadoop-hdfs-project/patchprocess/newPatchFindbugsWarningshadoop-hdfs-httpfs.html





[jira] [Commented] (HDFS-2657) TestHttpFSServer and TestServerWebApp are failing on trunk

2011-12-21 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174095#comment-13174095
 ] 

Hudson commented on HDFS-2657:
--

Integrated in Hadoop-Mapreduce-trunk #934 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/934/])
HDFS-2657. TestHttpFSServer and TestServerWebApp are failing on trunk. 
(tucu)

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1221580
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/service/security/DummyGroupMapping.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/TestServerWebApp1.properties
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/TestServerWebApp2.properties
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/testserverwebapp1.properties
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/testserverwebapp2.properties
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 TestHttpFSServer and TestServerWebApp are failing on trunk
 --

 Key: HDFS-2657
 URL: https://issues.apache.org/jira/browse/HDFS-2657
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Eli Collins
Assignee: Alejandro Abdelnur
 Attachments: HDFS-2657.patch


  org.apache.hadoop.fs.http.server.TestHttpFSServer.instrumentation
  org.apache.hadoop.lib.servlet.TestServerWebApp.lifecycle





[jira] [Commented] (HDFS-2707) HttpFS should read the hadoop-auth secret from a file instead inline from the configuration

2011-12-21 Thread Alejandro Abdelnur (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174124#comment-13174124
 ] 

Alejandro Abdelnur commented on HDFS-2707:
--

Run test-patch locally, javadoc/findbugs warnings are unrelated to the patch.


 HttpFS should read the hadoop-auth secret from a file instead inline from the 
 configuration
 ---

 Key: HDFS-2707
 URL: https://issues.apache.org/jira/browse/HDFS-2707
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 0.24.0, 0.23.1
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 0.24.0, 0.23.1

 Attachments: HDFS-2707.patch, HDFS-2707.patch


 Similar to HADOOP-7621, the secret should be in a file other than the 
 configuration file.





[jira] [Commented] (HDFS-2713) HA : An alternative approach to clients handling Namenode failover.

2011-12-21 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174148#comment-13174148
 ] 

Uma Maheswara Rao G commented on HDFS-2713:
---

While writing a file, if the Active Namenode goes down, the thread cannot wait 
forever until the Standby Namenode becomes fully functional as Active. We place 
a maximum limit on how long the user thread can wait for a Namenode to become 
available. For further discussion, we term this limit namenodeAwaitTimeout. If 
no Namenode is available even after namenodeAwaitTimeout, the DFS Client will 
throw an Exception back to the caller. 

Corresponding to each Namenode pair, there is a HASwitchAgent which is 
responsible for managing connections to the multiple Namenode addresses 
configured. The HASwitchAgent exposes a performFailover() API to trigger the 
switching logic. During any DFS operation, if the RetryInvocationHandler finds 
that the Active Namenode is down, the performFailover() API will be invoked. 
The HASwitchAgent then executes the switching procedure in a separate thread, 
trying to connect to the two Namenode addresses configured. This search for 
the Active Namenode continues until it successfully connects to one.

The thread which invoked the performFailover() API goes into a TIMED WAIT and 
is notified when the Active Namenode becomes available. The wait lasts for the 
time configured as 'namenode.await.timeout'. Once this time has elapsed, an 
IOException is thrown from the performFailover() API of the HASwitchAgent. 
Subsequent invocations of the same API may find that the switching logic is 
already in progress and simply make the calling thread wait for 
'namenode.await.timeout'.
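The wait/notify contract described above can be sketched with java.util.concurrent primitives. The names HASwitchAgent, performFailover(), and namenodeAwaitTimeout follow the proposal's wording, but the body is a hypothetical simulation (the background "search" immediately succeeds), not patch code.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of the HDFS-2713 proposal: user threads wait up to
// namenodeAwaitTimeout while a single background thread hunts for the active NN.
class HASwitchAgentSketch {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition activeFound = lock.newCondition();
    private final AtomicBoolean searching = new AtomicBoolean(false);
    private volatile boolean activeAvailable = false;
    private final long namenodeAwaitTimeoutMs;

    HASwitchAgentSketch(long namenodeAwaitTimeoutMs) {
        this.namenodeAwaitTimeoutMs = namenodeAwaitTimeoutMs;
    }

    /** Called by user threads; waits up to namenodeAwaitTimeout for an active NN. */
    void performFailover() throws java.io.IOException, InterruptedException {
        // Only the first caller starts the background search; later callers just wait.
        if (searching.compareAndSet(false, true)) {
            new Thread(this::searchForActiveNamenode).start();
        }
        lock.lock();
        try {
            long nanos = TimeUnit.MILLISECONDS.toNanos(namenodeAwaitTimeoutMs);
            while (!activeAvailable) {
                if (nanos <= 0) {
                    throw new java.io.IOException("No active Namenode within timeout");
                }
                nanos = activeFound.awaitNanos(nanos);   // TIMED WAIT
            }
        } finally {
            lock.unlock();
        }
    }

    /** Background thread: would loop over the configured NN addresses until one is active. */
    private void searchForActiveNamenode() {
        // ... real code would try each configured Namenode address here ...
        lock.lock();
        try {
            activeAvailable = true;    // simulate finding the active NN
            activeFound.signalAll();   // wake all waiting user threads
        } finally {
            lock.unlock();
        }
        searching.set(false);
    }
}
```

Because the waiting loop re-checks activeAvailable under the lock, a signal that arrives before a caller starts waiting is not lost.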

 HA : An alternative approach to clients handling  Namenode failover.
 

 Key: HDFS-2713
 URL: https://issues.apache.org/jira/browse/HDFS-2713
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, hdfs client
Affects Versions: HA branch (HDFS-1623)
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G

 This is the approach for client failover which we adopted when we developed 
 HA for Hadoop. I would like to propose this approach for others to review and 
 include in the HA implementation, if found useful.
 This is similar to the ConfiguredProxyProvider in the sense that it takes the 
 addresses of both Namenodes as input. The major differences I can see from 
 the current implementation are:
 1) During failover, user threads can be controlled very precisely regarding 
 *the time they wait for the active namenode* to become available while 
 awaiting the retry. Beyond this, the threads will not be made to wait; the 
 DFS Client will throw an Exception indicating that the operation has failed.
 2) Failover happens in a separate thread, not in the client application 
 threads. The thread will keep trying to find the Active Namenode until it 
 succeeds. 
 3) This also means that irrespective of whether the operation's RetryAction 
 is RETRY_FAILOVER or FAIL, the user thread can trigger the client's failover. 





[jira] [Commented] (HDFS-2185) HA: ZK-based FailoverController

2011-12-21 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174246#comment-13174246
 ] 

Todd Lipcon commented on HDFS-2185:
---

Yea, this is very similar to the leader election recipe - I planned to base the 
code somewhat on that code for best practices. But the major difference is that 
we need to do fencing as well, which requires that we leave a non-ephemeral 
node behind when our ephemeral node expires, so the new NN can fence the old.

 HA: ZK-based FailoverController
 ---

 Key: HDFS-2185
 URL: https://issues.apache.org/jira/browse/HDFS-2185
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Affects Versions: HA branch (HDFS-1623)
Reporter: Eli Collins
Assignee: Todd Lipcon

 This jira is for a ZK-based FailoverController daemon. The FailoverController 
 is a separate daemon from the NN that does the following:
 * Initiates leader election (via ZK) when necessary
 * Performs health monitoring (aka failure detection)
 * Performs fail-over (standby to active and active to standby transitions)
 * Heartbeats to ensure the liveness
 It should have the same/similar interface as the Linux HA RM to aid 
 pluggability.





[jira] [Created] (HDFS-2714) HA: Fix test cases which use standalone FSNamesystems

2011-12-21 Thread Todd Lipcon (Created) (JIRA)
HA: Fix test cases which use standalone FSNamesystems
-

 Key: HDFS-2714
 URL: https://issues.apache.org/jira/browse/HDFS-2714
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, test
Affects Versions: HA branch (HDFS-1623)
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Trivial


Several tests (e.g. TestEditLog, TestSaveNamespace) failed in the most recent 
build with an NPE inside FSNamesystem.checkOperation. These tests set up a 
standalone FSN that isn't fully initialized. We just need to add a null check 
to deal with this case in checkOperation.
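The proposed null check might look like the following. FSNamesystem and the HA-state class are replaced with minimal stand-ins here, so this is a shape sketch of the guard rather than the actual patch.

```java
// Hypothetical sketch of the null guard for HDFS-2714: a standalone FSN built
// by tests may never initialize its HA state, so checkOperation must tolerate null.
class CheckOperationSketch {
    /** Minimal stand-in for the real HAState class. */
    interface HAState {
        void checkOperation(String op) throws IllegalStateException;
    }

    private final HAState haState;   // null when the FSN was created standalone

    CheckOperationSketch(HAState haState) {
        this.haState = haState;
    }

    /** Skip the HA-state check when no HA state was set up (standalone FSN). */
    void checkOperation(String op) {
        if (haState != null) {
            haState.checkOperation(op);
        }
        // standalone FSN: no HA state to consult, allow the operation
    }
}
```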





[jira] [Commented] (HDFS-2185) HA: ZK-based FailoverController

2011-12-21 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174253#comment-13174253
 ] 

Aaron T. Myers commented on HDFS-2185:
--

Note also that the recipes included in ZK aren't actually built/packaged, so 
we'll need to copy/paste the code somewhere into Hadoop and build it ourselves 
anyway, even if we used the recipe as-is.

 HA: ZK-based FailoverController
 ---

 Key: HDFS-2185
 URL: https://issues.apache.org/jira/browse/HDFS-2185
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Affects Versions: HA branch (HDFS-1623)
Reporter: Eli Collins
Assignee: Todd Lipcon

 This jira is for a ZK-based FailoverController daemon. The FailoverController 
 is a separate daemon from the NN that does the following:
 * Initiates leader election (via ZK) when necessary
 * Performs health monitoring (aka failure detection)
 * Performs fail-over (standby to active and active to standby transitions)
 * Heartbeats to ensure the liveness
 It should have the same/similar interface as the Linux HA RM to aid 
 pluggability.





[jira] [Commented] (HDFS-2185) HA: ZK-based FailoverController

2011-12-21 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174257#comment-13174257
 ] 

Aaron T. Myers commented on HDFS-2185:
--

Per a recommendation from Patrick Hunt, we might also consider taking a look at 
the [Netflix Curator|https://github.com/Netflix/curator], which includes a 
leader election recipe as well. It's Apache-licensed.

 HA: ZK-based FailoverController
 ---

 Key: HDFS-2185
 URL: https://issues.apache.org/jira/browse/HDFS-2185
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Affects Versions: HA branch (HDFS-1623)
Reporter: Eli Collins
Assignee: Todd Lipcon

 This jira is for a ZK-based FailoverController daemon. The FailoverController 
 is a separate daemon from the NN that does the following:
 * Initiates leader election (via ZK) when necessary
 * Performs health monitoring (aka failure detection)
 * Performs fail-over (standby to active and active to standby transitions)
 * Heartbeats to ensure the liveness
 It should have the same/similar interface as the Linux HA RM to aid 
 pluggability.





[jira] [Commented] (HDFS-2185) HA: ZK-based FailoverController

2011-12-21 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174263#comment-13174263
 ] 

Todd Lipcon commented on HDFS-2185:
---

Twitter's also got a nice library of ZK stuff. But I think copy-paste is 
probably easier so we can customize it to our needs and not have to pull in 
lots of transitive dependencies.

 HA: ZK-based FailoverController
 ---

 Key: HDFS-2185
 URL: https://issues.apache.org/jira/browse/HDFS-2185
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Affects Versions: HA branch (HDFS-1623)
Reporter: Eli Collins
Assignee: Todd Lipcon

 This jira is for a ZK-based FailoverController daemon. The FailoverController 
 is a separate daemon from the NN that does the following:
 * Initiates leader election (via ZK) when necessary
 * Performs health monitoring (aka failure detection)
 * Performs fail-over (standby to active and active to standby transitions)
 * Heartbeats to ensure the liveness
 It should have the same/similar interface as the Linux HA RM to aid 
 pluggability.





[jira] [Commented] (HDFS-2185) HA: ZK-based FailoverController

2011-12-21 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174310#comment-13174310
 ] 

Uma Maheswara Rao G commented on HDFS-2185:
---

Ok, Todd, thanks for the clarification.
ZOOKEEPER-1080 is the one we used for our internal HA implementation. Many 
cases have been handled based on our experience, testing, and also running in 
production for the last 6 months.
It also has a state machine implementation, as you proposed.
If you have some free time, please go through it once; if you find it 
reasonable, we can take some code from there as well.
Also, I can help in preparing some part of the patches.

Thanks
Uma

 HA: ZK-based FailoverController
 ---

 Key: HDFS-2185
 URL: https://issues.apache.org/jira/browse/HDFS-2185
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Affects Versions: HA branch (HDFS-1623)
Reporter: Eli Collins
Assignee: Todd Lipcon

 This jira is for a ZK-based FailoverController daemon. The FailoverController 
 is a separate daemon from the NN that does the following:
 * Initiates leader election (via ZK) when necessary
 * Performs health monitoring (aka failure detection)
 * Performs fail-over (standby to active and active to standby transitions)
 * Heartbeats to ensure the liveness
 It should have the same/similar interface as the Linux HA RM to aid 
 pluggability.





[jira] [Commented] (HDFS-2713) HA : An alternative approach to clients handling Namenode failover.

2011-12-21 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174317#comment-13174317
 ] 

Uma Maheswara Rao G commented on HDFS-2713:
---

Hi Todd / ATM, before moving forward, I wanted to know your opinions on this 
proposal.

 HA : An alternative approach to clients handling  Namenode failover.
 

 Key: HDFS-2713
 URL: https://issues.apache.org/jira/browse/HDFS-2713
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, hdfs client
Affects Versions: HA branch (HDFS-1623)
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G

 This is the approach for client failover which we adopted when we developed 
 HA for Hadoop. I would like to propose this approach for others to review and 
 include in the HA implementation, if found useful.
 This is similar to the ConfiguredProxyProvider in the sense that it takes the 
 addresses of both Namenodes as input. The major differences I can see from 
 the current implementation are:
 1) During failover, user threads can be controlled very precisely regarding 
 *the time they wait for the active namenode* to become available while 
 awaiting the retry. Beyond this, the threads will not be made to wait; the 
 DFS Client will throw an Exception indicating that the operation has failed.
 2) Failover happens in a separate thread, not in the client application 
 threads. The thread will keep trying to find the Active Namenode until it 
 succeeds. 
 3) This also means that irrespective of whether the operation's RetryAction 
 is RETRY_FAILOVER or FAIL, the user thread can trigger the client's failover. 





[jira] [Commented] (HDFS-2291) HA: Checkpointing in an HA setup

2011-12-21 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174328#comment-13174328
 ] 

Eli Collins commented on HDFS-2291:
---

Ditto, option (b) seems preferable. I think we should minimize the difference 
between the 2NN and the SBN checkpointing since we'll have to support both.

 HA: Checkpointing in an HA setup
 

 Key: HDFS-2291
 URL: https://issues.apache.org/jira/browse/HDFS-2291
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Affects Versions: HA branch (HDFS-1623)
Reporter: Aaron T. Myers
Assignee: Todd Lipcon
 Fix For: HA branch (HDFS-1623)


 We obviously need to create checkpoints when HA is enabled. One thought is to 
 use a third, dedicated checkpointing node in addition to the active and 
 standby nodes. Another option would be to make the standby capable of also 
 performing the function of checkpointing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2185) HA: ZK-based FailoverController

2011-12-21 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174338#comment-13174338
 ] 

Todd Lipcon commented on HDFS-2185:
---

Great, thanks for the link, Uma. I will be sure to take a look.

My plan is to finish off the checkpointing work next (HDFS-2291) and then go 
into a testing cycle for manual failover to make sure everything's robust. 
Unless we have a robust functional manual failover, automatic failover is just 
going to add some complication. After we're reasonably confident in the manual 
operation, we can start in earnest on the ZK-based automatic work. Do you agree?

(of course it's good to start discussing design for the automatic one in 
parallel)

 HA: ZK-based FailoverController
 ---

 Key: HDFS-2185
 URL: https://issues.apache.org/jira/browse/HDFS-2185
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha
Affects Versions: HA branch (HDFS-1623)
Reporter: Eli Collins
Assignee: Todd Lipcon

 This jira is for a ZK-based FailoverController daemon. The FailoverController 
 is a separate daemon from the NN that does the following:
 * Initiates leader election (via ZK) when necessary
 * Performs health monitoring (aka failure detection)
 * Performs fail-over (standby to active and active to standby transitions)
 * Heartbeats to ensure the liveness
 It should have the same/similar interface as the Linux HA RM to aid 
 pluggability.





[jira] [Commented] (HDFS-2712) setTimes should support only for files and move the atime field down to iNodeFile.

2011-12-21 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13174340#comment-13174340
 ] 

Uma Maheswara Rao G commented on HDFS-2712:
---

{quote}
setTimes() for directories should throw an exception (FileNotFoundException or 
UnsupportedActionException?)
{quote}
I think there are two cases here: one for a non-existent file and the other for 
directories. In both cases getINodeFile will return null. So, for the 
exception, FileNotFoundException seems correct for non-existent files, and if 
src is a dir then UnsupportedActionException would be more meaningful?
Or, since the passed argument value is not the expected one, 
IllegalArgumentException?
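A minimal sketch of how the two null/non-file cases could be told apart, assuming a simplified INode hierarchy. The class names below are stand-ins, and a plain IOException stands in for HDFS's UnsupportedActionException (which is likewise a checked IOException):

```java
import java.io.FileNotFoundException;
import java.io.IOException;

// Illustrative sketch, not the actual FSNamesystem code.
public class SetTimesSketch {
    // Stand-ins for the real INode hierarchy.
    static class INode { }
    static class INodeFile extends INode { long mtime, atime; }
    static class INodeDirectory extends INode { }

    static void setTimes(INode inode, long mtime, long atime)
            throws IOException {
        if (inode == null) {
            // The path does not exist at all.
            throw new FileNotFoundException("File does not exist");
        }
        if (!(inode instanceof INodeFile)) {
            // The path exists but is a directory: throw a checked
            // IOException rather than an unchecked IllegalArgumentException.
            throw new IOException("setTimes is supported only for files");
        }
        INodeFile file = (INodeFile) inode;
        file.mtime = mtime;
        file.atime = atime;
    }

    public static void main(String[] args) {
        String result;
        try {
            setTimes(new INodeDirectory(), 1L, 1L);
            result = "ok";
        } catch (FileNotFoundException e) {
            result = "fnfe";
        } catch (IOException e) {
            result = "ioe";
        }
        System.out.println(result); // ioe
    }
}
```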

 setTimes should support only for files and move the atime field down to 
 iNodeFile.
 --

 Key: HDFS-2712
 URL: https://issues.apache.org/jira/browse/HDFS-2712
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0, 0.24.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G

 After the discussion in HDFS-2436, the unsupported behaviour of setTimes was 
 intentional (HADOOP-1869).
 But the current INode structure hierarchy appears to support atime for 
 directories as well, while as per HADOOP-1869 we support it only for files. 
 To avoid confusion, we can move the atime field to INodeFile since we 
 planned to support setTimes only for files, and also restrict setTimes on 
 directories (which is implemented in HDFS-2436).





[jira] [Commented] (HDFS-2713) HA : An alternative approach to clients handling Namenode failover.

2011-12-21 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13174347#comment-13174347
 ] 

Aaron T. Myers commented on HDFS-2713:
--

Hey Uma, there are several things I don't understand about this proposal.

bq. 1) During failover, user threads can be controlled very accurately about 
the time they wait for active namenode to be available, awaiting the retry. 
Beyond this, the threads will not be made to wait; DFS Client will throw an 
Exception indicating that the operation has failed.

The current system already supports this. Clients will failover and retry with 
some random, exponential backoff for a finite period of time, after which the 
operation will fail, throwing an exception.

bq. 2) Failover happens in a seperate thread, not in the client application 
threads. The thread will keep trying to find the Active Namenode until it 
succeeds. 

What's the point of doing this in a separate thread? Given that client 
operations still block while the failover is attempted, it doesn't seem like 
this difference will be tangible to the user.

bq. 3) This also means that irrespective of whether the operation's 
RetryAction is RETRY_FAILOVER or FAIL, the user thread can trigger the 
client's failover.

This confuses me. How does this work?

In short, this proposal just seems _different_ and not necessarily _better_ 
than the current implementation. This implementation also seems like a more 
complex design to me, so without tangible user benefits I don't see much point 
in doing it.

The other thing that's not clear to me is how you'd propose to incorporate it 
into HDFS. Would it be an alternative to the current implementation? Or done as 
an enhancement to the current implementation?

 HA : An alternative approach to clients handling  Namenode failover.
 

 Key: HDFS-2713
 URL: https://issues.apache.org/jira/browse/HDFS-2713
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, hdfs client
Affects Versions: HA branch (HDFS-1623)
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G

 This is the approach for client failover which we adopted when we developed 
 HA for Hadoop. I would like to propose this approach for others to review and 
 include in the HA implementation, if found useful.
 This is similar to the ConfiguredProxyProvider in the sense that it takes 
 the addresses of both Namenodes as input. The major differences I can 
 see from the current implementation are
 1) During failover, user threads can be controlled very accurately about *the 
 time they wait for the active namenode* to be available, awaiting the retry. 
 Beyond this, the threads will not be made to wait; the DFSClient will throw 
 an exception indicating that the operation has failed.
 2) Failover happens in a separate thread, not in the client application 
 threads. The thread will keep trying to find the active Namenode until it 
 succeeds. 
 3) This also means that irrespective of whether the operation's RetryAction 
 is RETRY_FAILOVER or FAIL, the user thread can trigger the client's failover. 





[jira] [Commented] (HDFS-2712) setTimes should support only for files and move the atime field down to iNodeFile.

2011-12-21 Thread Konstantin Shvachko (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13174353#comment-13174353
 ] 

Konstantin Shvachko commented on HDFS-2712:
---

IllegalArgumentException is a RuntimeException rather than IOException. We 
don't want to throw an unchecked exception here.
I agree file-not-found and not-a-file are two different cases.





[jira] [Commented] (HDFS-2713) HA : An alternative approach to clients handling Namenode failover.

2011-12-21 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13174366#comment-13174366
 ] 

Uma Maheswara Rao G commented on HDFS-2713:
---

Thanks a lot, Aaron for taking a look.
{quote}
1) During failover, user threads can be controlled very accurately about 
the time they wait for active namenode to be available, awaiting the retry. 
Beyond this, the threads will not be made to wait; DFS Client will throw an 
Exception indicating that the operation has failed.

The current system already supports this. Clients will failover and retry with 
some random, exponential backoff for a finite period of time, after which the 
operation will fail, throwing an exception.
{quote}
Yes, I agree the existing implementation also supports this. But the main 
difference is: with the existing implementation, if after a finite period of 
time it is not able to get an active node proxy instance, it will throw an 
exception and fail. When another call comes later, it will do the failover 
again. But with my proposal, the background thread will continue failover 
indefinitely until it finds the active proxy instance. So, by the time the 
next call comes, this background thread may already have the active node 
proxy ready.

This is the main difference I wanted to explain.
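The core of the proposed background-thread approach could be sketched roughly as follows. All names here are illustrative, not the actual DFSClient or FailoverProxyProvider classes:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Minimal sketch: a background thread keeps trying to locate the active
// namenode so that user calls may find a proxy already prepared, while
// user threads themselves only wait a bounded time.
public class BackgroundFailoverSketch {
    interface NamenodeProxy { boolean isActive(); }

    private final AtomicReference<NamenodeProxy> activeProxy =
            new AtomicReference<>();
    private final NamenodeProxy[] candidates;

    BackgroundFailoverSketch(NamenodeProxy... candidates) {
        this.candidates = candidates;
    }

    // Retries indefinitely until an active proxy is found, then publishes it.
    void startFailoverThread() {
        Thread t = new Thread(() -> {
            while (activeProxy.get() == null) {
                for (NamenodeProxy p : candidates) {
                    if (p.isActive()) {
                        activeProxy.set(p);   // ready before the next user call
                        return;
                    }
                }
                try {
                    TimeUnit.MILLISECONDS.sleep(10); // back off, then retry
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        t.setDaemon(true);
        t.start();
    }

    // User threads wait only a bounded time, then the operation fails.
    NamenodeProxy getActive(long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            NamenodeProxy p = activeProxy.get();
            if (p != null) return p;
            Thread.sleep(5);
        }
        throw new IllegalStateException("no active namenode within timeout");
    }

    public static void main(String[] args) throws Exception {
        BackgroundFailoverSketch c = new BackgroundFailoverSketch(
                () -> false,   // standby namenode
                () -> true);   // active namenode
        c.startFailoverThread();
        System.out.println(c.getActive(1000).isActive()); // true
    }
}
```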

{quote}
The other thing that's not clear to me is how you'd propose to incorporate it 
into HDFS. Would it be an alternative to the current implementation? Or done as 
an enhancement to the current implementation?
{quote}
There are two ways here: one is to enhance the existing 
ConfiguredFailOverProxyProvider implementation to incorporate this proposal 
(separating the failover logic into a separate background thread); the other 
is to keep the new proposal as a separate FailOverProxyProvider 
implementation.

Which one is preferable for you?






[jira] [Created] (HDFS-2715) start-dfs.sh falsely warns about processes already running

2011-12-21 Thread Eli Collins (Created) (JIRA)
start-dfs.sh falsely warns about processes already running
--

 Key: HDFS-2715
 URL: https://issues.apache.org/jira/browse/HDFS-2715
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.24.0
Reporter: Eli Collins


The sbin script pid detection is broken. Running start-dfs.sh reports the 
following even if no processes are running and the pid dir is empty before 
starting.

{noformat}
hadoop-0.24.0-SNAPSHOT $ ./sbin/start-dfs.sh 
Starting namenodes on [localhost localhost]
localhost: starting namenode, logging to 
/home/eli/hadoop/dirs1/logs/eli/hadoop-eli-namenode-eli-thinkpad.out
localhost: namenode running as process 25256. Stop it first.
{noformat}

This may be in 23 as well.





[jira] [Commented] (HDFS-2713) HA : An alternative approach to clients handling Namenode failover.

2011-12-21 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13174373#comment-13174373
 ] 

Todd Lipcon commented on HDFS-2713:
---

IMO it seems preferable to enhance (or replace) the existing code rather than 
introduce a new option. There's no sense in supporting both if one has clear 
advantages.

If it's easier to write as new code, though, we could implement it as a new 
provider, then remove the old one when it gets committed.





[jira] [Commented] (HDFS-2185) HA: ZK-based FailoverController

2011-12-21 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13174376#comment-13174376
 ] 

Uma Maheswara Rao G commented on HDFS-2185:
---

That's great!
I completely agree with you on completing manual failover first. :-)
OK, let's continue the design discussions in parallel whenever we find the time.





[jira] [Commented] (HDFS-2713) HA : An alternative approach to clients handling Namenode failover.

2011-12-21 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13174390#comment-13174390
 ] 

Uma Maheswara Rao G commented on HDFS-2713:
---

{quote}
IMO it seems preferable to enhance (or replace) the existing code rather than 
introduce a new option.
{quote}
Make sense to me.





[jira] [Commented] (HDFS-2712) setTimes should support only for files and move the atime field down to iNodeFile.

2011-12-21 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13174393#comment-13174393
 ] 

Uma Maheswara Rao G commented on HDFS-2712:
---

Makes sense. If there are no comments on adding UnsupportedActionException for 
dirs and FNFE for non-existent files, I will provide a patch tomorrow.
Thanks a lot, Konstantin, for your time.





[jira] [Commented] (HDFS-2713) HA : An alternative approach to clients handling Namenode failover.

2011-12-21 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13174419#comment-13174419
 ] 

Aaron T. Myers commented on HDFS-2713:
--

bq. Yes, I agree the existing implementation also supports this. But the main 
difference is: with the existing implementation, if after a finite period of 
time it is not able to get an active node proxy instance, it will throw an 
exception and fail. When another call comes later, it will do the failover 
again. But with my proposal, the background thread will continue failover 
indefinitely until it finds the active proxy instance, so by the time the 
next call comes the background thread may already have the active node proxy 
ready.

I still don't see what benefit the background thread has. In the case you 
describe, with the current implementation, the second client request (after 
the failed one which had timed out retrying/failing over) would simply 
succeed, or fail over immediately and then succeed. So the background thread 
won't have saved much, if any, work, and instead may be doing (potentially 
unnecessary) work in the background indefinitely.

What am I missing?





[jira] [Updated] (HDFS-2714) HA: Fix test cases which use standalone FSNamesystems

2011-12-21 Thread Todd Lipcon (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-2714:
--

Attachment: hdfs-2714.txt

Attached a trivial patch that fixes the failing tests.

 HA: Fix test cases which use standalone FSNamesystems
 -

 Key: HDFS-2714
 URL: https://issues.apache.org/jira/browse/HDFS-2714
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, test
Affects Versions: HA branch (HDFS-1623)
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Trivial
 Attachments: hdfs-2714.txt


 Several tests (e.g. TestEditLog, TestSaveNamespace) failed in the most recent 
 build with an NPE inside of FSNamesystem.checkOperation. These tests set up a 
 standalone FSN that isn't fully initialized. We just need to add a null check 
 to deal with this case in checkOperation.
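The null guard described could look roughly like this. This is a simplified sketch; HaContext below is a stand-in, not the real FSNamesystem internals:

```java
// Sketch of guarding an HA check so that standalone, partially
// initialized FSNamesystems used in tests do not hit an NPE.
public class CheckOperationSketch {
    enum OperationCategory { READ, WRITE }

    // Stand-in for the HA state machine; null in standalone test setups.
    static class HaContext {
        void checkOperation(OperationCategory op) { /* HA-state checks */ }
    }

    private final HaContext haContext;

    CheckOperationSketch(HaContext haContext) {
        this.haContext = haContext;
    }

    void checkOperation(OperationCategory op) {
        // Null check: a standalone namesystem has no HA context,
        // so skip the HA checks instead of throwing an NPE.
        if (haContext != null) {
            haContext.checkOperation(op);
        }
    }

    public static void main(String[] args) {
        new CheckOperationSketch(null).checkOperation(OperationCategory.READ);
        System.out.println("no NPE");
    }
}
```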





[jira] [Commented] (HDFS-2698) BackupNode is downloading image from NameNode for every checkpoint

2011-12-21 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13174483#comment-13174483
 ] 

Zhihong Yu commented on HDFS-2698:
--

Patch looks good to me.

 BackupNode is downloading image from NameNode for every checkpoint
 --

 Key: HDFS-2698
 URL: https://issues.apache.org/jira/browse/HDFS-2698
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.22.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Attachments: rollFSImage.patch, rollFSImage.patch


 BackupNode can make periodic checkpoints without downloading image and edits 
 files from the NameNode, but with just saving the namespace to local disks. 
 This is not happening because NN renews checkpoint time after every 
 checkpoint, thus making its image ahead of the BN's even though they are in 
 sync.
