[jira] [Commented] (HDFS-827) Additional unit tests for FSDataset

2012-04-20 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258785#comment-13258785
 ] 

Uma Maheswara Rao G commented on HDFS-827:
--

I think we need to rebase this patch against the latest trunk, since we have 
refactored the FsDataSet implementations into a separate package.

> Additional unit tests for FSDataset
> ---
>
> Key: HDFS-827
> URL: https://issues.apache.org/jira/browse/HDFS-827
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: data-node, test
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-827.patch, hdfs-827.txt
>
>
> FSDataset doesn't currently have a unit-test that tests it in isolation of 
> the DN or a cluster. A test specifically for this class will be helpful for 
> developing HDFS-788

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3258) Test for HADOOP-8144 (pseudoSortByDistance in NetworkTopology for first rack local node)

2012-04-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258768#comment-13258768
 ] 

Hadoop QA commented on HDFS-3258:
-

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12523613/HDFS-3258.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2311//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2311//console

This message is automatically generated.

> Test for HADOOP-8144 (pseudoSortByDistance in NetworkTopology for first rack 
> local node)
> 
>
> Key: HDFS-3258
> URL: https://issues.apache.org/jira/browse/HDFS-3258
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.23.0, 1.0.0
>Reporter: Eli Collins
>Assignee: Junping Du
>  Labels: patch, test
> Attachments: HDFS-3258.patch, hdfs-3258.txt
>
>
> For updating TestNetworkTopology to cover HADOOP-8144.





[jira] [Commented] (HDFS-3222) DFSInputStream#openInfo should not silently get the length as 0 when locations length is zero for last partial block.

2012-04-20 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258760#comment-13258760
 ] 

Uma Maheswara Rao G commented on HDFS-3222:
---

@Todd, could you please take a look at this for review?

> DFSInputStream#openInfo should not silently get the length as 0 when 
> locations length is zero for last partial block.
> -
>
> Key: HDFS-3222
> URL: https://issues.apache.org/jira/browse/HDFS-3222
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 1.0.3, 2.0.0, 3.0.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-3222-Test.patch, HDFS-3222.patch
>
>
> I have seen one such situation with an HBase cluster.
> The scenario is as follows:
> 1) 1.5 blocks had been written and synced.
> 2) Suddenly the cluster was restarted.
> A reader then opened the file and tried to get its length. By this time the 
> DNs holding the partial block had not yet reported to the NN, so the number 
> of locations for that partial block was 0. In this case, DFSInputStream 
> assumes one full block size as the final size.
> The reader likewise takes one block size as the final length and sets its 
> end marker there, so it ends up reading only partial data. Due to this, the 
> HMaster could not replay the complete edits.
> This actually happened on a 0.20-based version; looking at the code, the 
> same issue should be present in trunk as well.
> {code}
> int replicaNotFoundCount = locatedblock.getLocations().length;
> 
> for (DatanodeInfo datanode : locatedblock.getLocations()) {
>   ..
>   ..
> }
> 
> // Namenode told us about these locations, but none know about the replica.
> // This means that we hit the race between pipeline creation start and end.
> // We require all 3 because some other exception could have happened
> // on a DN that has it; we want to report that error.
> if (replicaNotFoundCount == 0) {
>   return 0;
> }
> {code}
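The guard quoted above is what makes the failure silent: when no location
reports the replica, the length quietly becomes 0. A minimal, hypothetical
sketch of an alternative behaviour (names and structure are illustrative,
not the committed fix) is to poll the location count a few times, since
datanodes may still be re-registering after a restart, and fail loudly if
none ever shows up:

```java
import java.io.IOException;
import java.util.function.IntSupplier;

public class LastBlockLengthSketch {
    // Hypothetical sketch, not the committed fix: rather than silently
    // returning 0 when no datanode knows the replica, re-check the
    // location count a few times and then surface an error instead of
    // treating the partial block as empty.
    static long readLastBlockLength(IntSupplier locationCount,
                                    long reportedLength,
                                    int maxRetries) throws IOException {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            if (locationCount.getAsInt() > 0) {
                return reportedLength;  // some DN can serve the block
            }
            // a real implementation would back off here and re-fetch the
            // block locations from the namenode before retrying
        }
        throw new IOException(
            "Cannot obtain last block length: no replica locations reported");
    }
}
```

With this shape the reader either learns the true length or gets an
exception it can act on, instead of setting its end marker one block short.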





[jira] [Updated] (HDFS-3258) Test for HADOOP-8144 (pseudoSortByDistance in NetworkTopology for first rack local node)

2012-04-20 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-3258:
-

   Labels: patch, test  (was: )
Affects Version/s: 0.23.0, 1.0.0
   Status: Patch Available  (was: Open)

> Test for HADOOP-8144 (pseudoSortByDistance in NetworkTopology for first rack 
> local node)
> 
>
> Key: HDFS-3258
> URL: https://issues.apache.org/jira/browse/HDFS-3258
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 1.0.0, 0.23.0
>Reporter: Eli Collins
>Assignee: Junping Du
>  Labels: patch, test
> Attachments: HDFS-3258.patch, hdfs-3258.txt
>
>
> For updating TestNetworkTopology to cover HADOOP-8144.





[jira] [Updated] (HDFS-3258) Test for HADOOP-8144 (pseudoSortByDistance in NetworkTopology for first rack local node)

2012-04-20 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-3258:
-

Attachment: HDFS-3258.patch

Updated the test per Eli's suggestion.

> Test for HADOOP-8144 (pseudoSortByDistance in NetworkTopology for first rack 
> local node)
> 
>
> Key: HDFS-3258
> URL: https://issues.apache.org/jira/browse/HDFS-3258
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Eli Collins
>Assignee: Junping Du
> Attachments: HDFS-3258.patch, hdfs-3258.txt
>
>
> For updating TestNetworkTopology to cover HADOOP-8144.
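The scenario under test can be sketched with a simplified model (plain rack
strings instead of the real NetworkTopology and DatanodeDescriptor classes,
so this is illustrative rather than the actual TestNetworkTopology code): a
replica on the reader's rack should be moved to the front of the list even
when the reader itself holds no replica.

```java
public class RackLocalSortSketch {
    // Simplified model of the HADOOP-8144 scenario: when the reader is
    // not itself among the replicas, a node on the reader's rack should
    // still be promoted to the front. Racks are plain strings here; the
    // real test exercises NetworkTopology.pseudoSortByDistance.
    static void moveRackLocalFirst(String readerRack, String[] nodeRacks) {
        for (int i = 0; i < nodeRacks.length; i++) {
            if (nodeRacks[i].equals(readerRack)) {
                // swap the first rack-local node to position 0
                String tmp = nodeRacks[0];
                nodeRacks[0] = nodeRacks[i];
                nodeRacks[i] = tmp;
                return;
            }
        }
        // no rack-local node: order is left unchanged
    }
}
```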





[jira] [Created] (HDFS-3311) Httpfs: Cannot build because of wrong pom file

2012-04-20 Thread Show You (JIRA)
Show You created HDFS-3311:
--

 Summary: Httpfs: Cannot build because of wrong pom file
 Key: HDFS-3311
 URL: https://issues.apache.org/jira/browse/HDFS-3311
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Show You


On httpfs master 43f595d77d9d42ade5220bfba55ec13e558afb7a
{code}
hoop]$ mvn clean package site assembly:single
...
[WARNING] Unable to create Maven project from repository.
org.apache.maven.project.InvalidProjectModelException: 1 problem was 
encountered while building the effective model for 
com.sun.jersey:jersey-server:1.4
[FATAL] Non-parseable POM 
/home/yuki/.m2/repository/com/sun/jersey/jersey-project/1.4/jersey-project-1.4.pom:
 end tag name  must match start tag name  from line 5 (position: 
TEXT seen ...\r\n... @6:8)  @ 
/home/yuki/.m2/repository/com/sun/jersey/jersey-project/1.4/jersey-project-1.4.pom,
 line 6, column 8
 for project com.sun.jersey:jersey-server:1.4 for project 
com.sun.jersey:jersey-server:1.4
at 
org.apache.maven.project.DefaultMavenProjectBuilder.transformError(DefaultMavenProjectBuilder.java:193)
at 
...
{code}
and
{code}
hoop]$ cat 
/home/yuki/.m2/repository/com/sun/jersey/jersey-project/1.4/jersey-project-1.4.pom
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/0.6.39</center>
</body>
</html>
{code}
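As a workaround sketch (a hypothetical helper, not part of Hadoop or Maven):
a cached POM that is actually an HTTP redirect page can be detected by
checking whether the file contains a <project> element at all; if not, it
can be deleted so Maven re-downloads a clean copy.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CorruptPomCheckSketch {
    // Hypothetical helper: a POM fetched through a misbehaving mirror can
    // end up holding an HTML "301 Moved Permanently" page instead of XML.
    // A cheap sanity check is whether the file mentions a <project>
    // element at all; if not, deleting it lets Maven re-download it.
    static boolean looksLikeValidPom(Path pom) throws IOException {
        String content = new String(Files.readAllBytes(pom)); // POMs are small
        return content.contains("<project");
    }
}
```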






[jira] [Updated] (HDFS-3310) Make sure that we abort when no edit log directories are left

2012-04-20 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3310:
--

Target Version/s: 1.1.0, 1.0.3  (was: 0.20.1)

> Make sure that we abort when no edit log directories are left
> -
>
> Key: HDFS-3310
> URL: https://issues.apache.org/jira/browse/HDFS-3310
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.20.1
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-3310-b1.001.patch
>
>
> We should make sure to abort when there are no edit log directories left to 
> write to.  It seems that there is at least one case that is slipping through 
> the cracks right now in branch-1.





[jira] [Updated] (HDFS-3310) Make sure that we abort when no edit log directories are left

2012-04-20 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3310:
---

Attachment: HDFS-3310-b1.001.patch

> Make sure that we abort when no edit log directories are left
> -
>
> Key: HDFS-3310
> URL: https://issues.apache.org/jira/browse/HDFS-3310
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.20.1
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-3310-b1.001.patch
>
>
> We should make sure to abort when there are no edit log directories left to 
> write to.  It seems that there is at least one case that is slipping through 
> the cracks right now in branch-1.





[jira] [Created] (HDFS-3310) Make sure that we abort when no edit log directories are left

2012-04-20 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-3310:
--

 Summary: Make sure that we abort when no edit log directories are 
left
 Key: HDFS-3310
 URL: https://issues.apache.org/jira/browse/HDFS-3310
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.1
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


We should make sure to abort when there are no edit log directories left to 
write to.  It seems that there is at least one case that is slipping through 
the cracks right now in branch-1.
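A minimal sketch of the intended check, assuming a list of remaining edit
streams (illustrative only, not the attached branch-1 patch):

```java
import java.util.List;

public class EditLogAbortSketch {
    // Illustrative check: after a failed storage directory is dropped,
    // the NameNode should abort rather than keep running with nowhere
    // left to persist edits.
    static void checkEditStreams(List<?> remainingStreams) {
        if (remainingStreams.isEmpty()) {
            // In the real NameNode this would be a fatal exit, e.g. via
            // Runtime.getRuntime().halt(1); here we just fail loudly.
            throw new IllegalStateException(
                "Fatal: no edit log directories remain; aborting");
        }
    }
}
```

The key property is that the check runs every time a directory is removed,
so no code path can quietly continue with zero edit log directories.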





[jira] [Commented] (HDFS-3308) hftp/webhdfs can't get tokens if authority has no port

2012-04-20 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258697#comment-13258697
 ] 

Aaron T. Myers commented on HDFS-3308:
--

bq. Checked with Daryn. His patch does take care of the non-default port, since 
NetUtils.createSocketAddr(String target, int defaultPort) handles it.

Ah, so it does. Looks good, then. Thanks for looking into it.

> hftp/webhdfs can't get tokens if authority has no port
> --
>
> Key: HDFS-3308
> URL: https://issues.apache.org/jira/browse/HDFS-3308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 0.23.3
>
> Attachments: HDFS-3308.patch
>
>
> Token acquisition fails if a hftp or webhdfs filesystem is obtained with no 
> port in the authority.  Building a token service requires a port, and the 
> renewer needs the port.  The default port is not being used when there is no 
> port in the uri.





[jira] [Commented] (HDFS-3308) hftp/webhdfs can't get tokens if authority has no port

2012-04-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258683#comment-13258683
 ] 

Hudson commented on HDFS-3308:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #2131 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2131/])
HDFS-3308. Uses canonical URI to select delegation tokens in HftpFileSystem 
and WebHdfsFileSystem.  Contributed by Daryn Sharp (Revision 1328541)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1328541
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHftpDelegationToken.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java


> hftp/webhdfs can't get tokens if authority has no port
> --
>
> Key: HDFS-3308
> URL: https://issues.apache.org/jira/browse/HDFS-3308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 0.23.3
>
> Attachments: HDFS-3308.patch
>
>
> Token acquisition fails if a hftp or webhdfs filesystem is obtained with no 
> port in the authority.  Building a token service requires a port, and the 
> renewer needs the port.  The default port is not being used when there is no 
> port in the uri.





[jira] [Commented] (HDFS-3308) hftp/webhdfs can't get tokens if authority has no port

2012-04-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258677#comment-13258677
 ] 

Hudson commented on HDFS-3308:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2189 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2189/])
HDFS-3308. Uses canonical URI to select delegation tokens in HftpFileSystem 
and WebHdfsFileSystem.  Contributed by Daryn Sharp (Revision 1328541)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1328541
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHftpDelegationToken.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java


> hftp/webhdfs can't get tokens if authority has no port
> --
>
> Key: HDFS-3308
> URL: https://issues.apache.org/jira/browse/HDFS-3308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 0.23.3
>
> Attachments: HDFS-3308.patch
>
>
> Token acquisition fails if a hftp or webhdfs filesystem is obtained with no 
> port in the authority.  Building a token service requires a port, and the 
> renewer needs the port.  The default port is not being used when there is no 
> port in the uri.





[jira] [Commented] (HDFS-3308) hftp/webhdfs can't get tokens if authority has no port

2012-04-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258675#comment-13258675
 ] 

Hudson commented on HDFS-3308:
--

Integrated in Hadoop-Common-trunk-Commit #2115 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2115/])
HDFS-3308. Uses canonical URI to select delegation tokens in HftpFileSystem 
and WebHdfsFileSystem.  Contributed by Daryn Sharp (Revision 1328541)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1328541
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHftpDelegationToken.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java


> hftp/webhdfs can't get tokens if authority has no port
> --
>
> Key: HDFS-3308
> URL: https://issues.apache.org/jira/browse/HDFS-3308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 0.23.3
>
> Attachments: HDFS-3308.patch
>
>
> Token acquisition fails if a hftp or webhdfs filesystem is obtained with no 
> port in the authority.  Building a token service requires a port, and the 
> renewer needs the port.  The default port is not being used when there is no 
> port in the uri.





[jira] [Updated] (HDFS-3308) hftp/webhdfs can't get tokens if authority has no port

2012-04-20 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3308:
-

   Resolution: Fixed
Fix Version/s: 0.23.3
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Daryn!

> hftp/webhdfs can't get tokens if authority has no port
> --
>
> Key: HDFS-3308
> URL: https://issues.apache.org/jira/browse/HDFS-3308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 0.23.3
>
> Attachments: HDFS-3308.patch
>
>
> Token acquisition fails if a hftp or webhdfs filesystem is obtained with no 
> port in the authority.  Building a token service requires a port, and the 
> renewer needs the port.  The default port is not being used when there is no 
> port in the uri.





[jira] [Commented] (HDFS-3308) hftp/webhdfs can't get tokens if authority has no port

2012-04-20 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258673#comment-13258673
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3308:
--

Checked with Daryn.  His patch does take care of the non-default port, since 
NetUtils.createSocketAddr(String target, int defaultPort) handles it.
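The default-port behaviour being discussed can be illustrated with a
simplified stand-in (not the actual Hadoop NetUtils code; the method name
is borrowed only for the analogy): if the authority string carries no port,
fall back to the supplied default.

```java
import java.net.InetSocketAddress;

public class DefaultPortSketch {
    // Simplified stand-in for NetUtils.createSocketAddr(String, int):
    // an authority with no port falls back to the default port, so
    // "nn.example.com" and "nn.example.com:50070" name the same service.
    // (IPv6 literals are deliberately ignored in this sketch.)
    static InetSocketAddress createSocketAddr(String target, int defaultPort) {
        int colon = target.lastIndexOf(':');
        if (colon < 0) {
            return InetSocketAddress.createUnresolved(target, defaultPort);
        }
        String host = target.substring(0, colon);
        int port = Integer.parseInt(target.substring(colon + 1));
        return InetSocketAddress.createUnresolved(host, port);
    }
}
```

With that fallback, a token service string built from either form of the
URI resolves to the same host:port, which is what the renewer needs.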

> hftp/webhdfs can't get tokens if authority has no port
> --
>
> Key: HDFS-3308
> URL: https://issues.apache.org/jira/browse/HDFS-3308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-3308.patch
>
>
> Token acquisition fails if a hftp or webhdfs filesystem is obtained with no 
> port in the authority.  Building a token service requires a port, and the 
> renewer needs the port.  The default port is not being used when there is no 
> port in the uri.





[jira] [Commented] (HDFS-3092) Enable journal protocol based editlog streaming for standby namenode

2012-04-20 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258633#comment-13258633
 ] 

Bikas Saha commented on HDFS-3092:
--

@Todd
The definition of ParallelWritesWithBarrier in the doc is deliberately 
shallow; the point was just to differentiate between waiting and not waiting. 
The doc does not go into the specifics of the algorithms, so feedback on 
particular issues should be directed at the proposal being commented on. On 
future improvements: again, the doc is meant to be a comparison of the 
proposals as we saw them in the design docs submitted to the jiras and the 
bookkeeper online references.
Basically, going by the existing documentation of the proposals, the doc 
tries to outline the high-level salient points to consider.

@Flavio
Thanks for the roadmap pointer.

> Enable journal protocol based editlog streaming for standby namenode
> 
>
> Key: HDFS-3092
> URL: https://issues.apache.org/jira/browse/HDFS-3092
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, name-node
>Affects Versions: 0.24.0, 0.23.3
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: ComparisonofApproachesforHAJournals.pdf, 
> MultipleSharedJournals.pdf, MultipleSharedJournals.pdf, 
> MultipleSharedJournals.pdf
>
>
> Currently the standby namenode relies on reading the shared editlogs to 
> stay current with the active namenode for namespace changes. BackupNode 
> used streaming of edits from the active namenode for the same purpose. This 
> jira is to explore using journal protocol based editlog streams for the 
> standby namenode. A daemon in the standby will get the editlogs from the 
> active and write them to the local edits. To begin with, the existing 
> standby mechanism of reading from a file will continue to be used, but 
> reading from the local edits instead of from the shared edits.





[jira] [Created] (HDFS-3309) HttpFS (Hoop) chmod not supporting octal and sticky bit permissions

2012-04-20 Thread Romain Rigaux (JIRA)
Romain Rigaux created HDFS-3309:
---

 Summary: HttpFS (Hoop) chmod not supporting octal and sticky bit 
permissions
 Key: HDFS-3309
 URL: https://issues.apache.org/jira/browse/HDFS-3309
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.1
Reporter: Romain Rigaux


HttpFs supports only the permissions: [0-7][0-7][0-7]

In order to be compatible with webhdfs, it needs to understand octal and sticky 
bit permissions (e.g. 0777, 01777...).

Example of error:
curl -L -X PUT 
"http://localhost:14000/webhdfs/v1/user/romain/test?permission=01777&op=SETPERMISSION&user.name=romain";
 
{"RemoteException":{"message":"java.lang.IllegalArgumentException: Parameter 
[permission], invalid value [01777], value must be 
[default|(-[-r][-w][-x][-r][-w][-x][-r][-w][-x])|[0-7][0-7][0-7]]","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}

Works with WebHdfs:
curl -L -X PUT 
"http://localhost:50070/webhdfs/v1/user/romain/test?permission=01777&op=SETPERMISSION&user.name=romain";
 
echo $?
0



curl -L -X PUT 
"http://localhost:14000/webhdfs/v1/user/romain/test?permission=99&op=SETPERMISSION&user.name=romain";
 
{"RemoteException":{"message":"java.lang.IllegalArgumentException: Parameter 
[permission], invalid value [99], value must be 
[default|(-[-r][-w][-x][-r][-w][-x][-r][-w][-x])|[0-7][0-7][0-7]]","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}
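A hedged sketch of the widened validation (illustrative names, not the
HttpFS source): instead of matching exactly three octal digits as the error
message above shows, parse the parameter as an octal number, which is
effectively what webhdfs does, so sticky-bit forms are accepted.

```java
public class PermissionParamSketch {
    // Illustrative only, not the HttpFS source: accept a permission
    // string by parsing it as octal, so "755", "0777" and sticky-bit
    // forms like "01777" are all valid, while "99" is rejected.
    static boolean isValidOctalPermission(String s) {
        if (s.isEmpty() || s.length() > 5) {
            return false;
        }
        try {
            int v = Integer.parseInt(s, 8);
            return v >= 0 && v <= 07777;  // rwx bits + sticky/setuid/setgid
        } catch (NumberFormatException e) {
            return false;  // non-octal digit somewhere in the string
        }
    }
}
```

Parsing rather than pattern-matching also keeps the symbolic
(`-rwxr-xr-x`) branch of the existing validator untouched.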





[jira] [Commented] (HDFS-3092) Enable journal protocol based editlog streaming for standby namenode

2012-04-20 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258594#comment-13258594
 ] 

Flavio Junqueira commented on HDFS-3092:


Thanks for posting this comparison, Bikas. Let me try to address the last two 
points on bookkeeper:

bq. Tools for recovery - have a bookie recover tool. others?? 

That's correct, we have a bookie recovery tool that reconstructs the ledger 
fragments of a dead bookie. This has been part of bookkeeper for a while. We 
have some other tools proposed in BOOKKEEPER-183 to read and check bookie 
files, but they are not checked in yet. We also have some other tools we want 
to develop for some more extreme failure scenarios. We are targeting release 4.2.0 
for them (a draft of our feature roadmap is here 
https://cwiki.apache.org/confluence/display/BOOKKEEPER/Roadmap). 

bq. Release frequency, committers, projects that use it??

We started planning for releases every 6 months, but we have been thinking 
about releasing more frequently, every 3 months. 

There are currently six committers, but only three have been really active. Four of us 
are from Yahoo!, one from Twitter, and one from Facebook. Given that it is 
still a young project, I don't see why other hdfs folks cannot become 
committers of bookkeeper if they contribute and there is interest. It would be 
actually quite natural in the case bookkeeper ends up being used with the 
namenode. For us, having committers from the hdfs community would be useful to 
make sure we don't miss important requirements of yours.

As for projects using it, we have applications that incorporated bookkeeper 
(and hedwig) inside Yahoo! recently, and we have people from other companies on 
the mailing list discussing their setups and asking questions. If you're on the 
list, you have possibly seen those.


> Enable journal protocol based editlog streaming for standby namenode
> 
>
> Key: HDFS-3092
> URL: https://issues.apache.org/jira/browse/HDFS-3092
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, name-node
>Affects Versions: 0.24.0, 0.23.3
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: ComparisonofApproachesforHAJournals.pdf, 
> MultipleSharedJournals.pdf, MultipleSharedJournals.pdf, 
> MultipleSharedJournals.pdf
>
>
> Currently the standby namenode relies on reading the shared editlogs to 
> stay current with the active namenode for namespace changes. BackupNode 
> used streaming of edits from the active namenode for the same purpose. This 
> jira is to explore using journal protocol based editlog streams for the 
> standby namenode. A daemon in the standby will get the editlogs from the 
> active and write them to the local edits. To begin with, the existing 
> standby mechanism of reading from a file will continue to be used, but 
> reading from the local edits instead of from the shared edits.





[jira] [Commented] (HDFS-3308) hftp/webhdfs can't get tokens if authority has no port

2012-04-20 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258526#comment-13258526
 ] 

Hudson commented on HDFS-3308:
--

Integrated in Hadoop-Common-trunk-Commit #2114 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2114/])
Revert r1328482 for HDFS-3308. (Revision 1328487)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1328487
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHftpDelegationToken.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java


> hftp/webhdfs can't get tokens if authority has no port
> --
>
> Key: HDFS-3308
> URL: https://issues.apache.org/jira/browse/HDFS-3308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-3308.patch
>
>
> Token acquisition fails if a hftp or webhdfs filesystem is obtained with no 
> port in the authority.  Building a token service requires a port, and the 
> renewer needs the port.  The default port is not being used when there is no 
> port in the uri.





[jira] [Commented] (HDFS-3308) hftp/webhdfs can't get tokens if authority has no port

2012-04-20 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258525#comment-13258525
 ] 

Hudson commented on HDFS-3308:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2188 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2188/])
Revert r1328482 for HDFS-3308. (Revision 1328487)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1328487
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHftpDelegationToken.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java


> hftp/webhdfs can't get tokens if authority has no port
> --
>
> Key: HDFS-3308
> URL: https://issues.apache.org/jira/browse/HDFS-3308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-3308.patch
>
>
> Token acquisition fails if a hftp or webhdfs filesystem is obtained with no 
> port in the authority.  Building a token service requires a port, and the 
> renewer needs the port.  The default port is not being used when there is no 
> port in the uri.





[jira] [Commented] (HDFS-3308) hftp/webhdfs can't get tokens if authority has no port

2012-04-20 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258517#comment-13258517
 ] 

Hudson commented on HDFS-3308:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #2129 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2129/])
HDFS-3308. Uses canonical URI to select delegation tokens in HftpFileSystem 
and WebHdfsFileSystem.  Contributed by Daryn Sharp (Revision 1328482)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1328482
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHftpDelegationToken.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java


> hftp/webhdfs can't get tokens if authority has no port
> --
>
> Key: HDFS-3308
> URL: https://issues.apache.org/jira/browse/HDFS-3308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-3308.patch
>
>
> Token acquisition fails if a hftp or webhdfs filesystem is obtained with no 
> port in the authority.  Building a token service requires a port, and the 
> renewer needs the port.  The default port is not being used when there is no 
> port in the uri.





[jira] [Commented] (HDFS-3092) Enable journal protocol based editlog streaming for standby namenode

2012-04-20 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258512#comment-13258512
 ] 

Todd Lipcon commented on HDFS-3092:
---

Can you clarify a few things in this document?

- In ParallelWritesWithBarrier, what happens to the journals which time out or 
fail? It seems you need to mark them as failed in ZK or something similar in 
order to be correct. But if you do that, why do you need Q to be a "quorum"? 
Q=1 should suffice for correctness, and Q=2 should suffice in order to always 
be available to recover.

It seems the protocol should be closer to:
1) send out write request to all active JNs
2) wait until all respond, or a configurable timeout
3) any that do not respond are marked as failed in ZK
4) If the remaining number of JNs is sufficient (I'd guess 2) then succeed the 
write. Otherwise fail the write and abort.
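The four steps above can be sketched in a few lines of Java. This is a hypothetical illustration only - JournalClient, MIN_JOURNALS, and the timeout value are assumed stand-ins, and the ZK bookkeeping of step 3 is reduced to a comment; none of this is actual Hadoop code:

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of the barrier-write protocol outlined above: fan the write out to
// all active journal nodes, wait up to a timeout, mark non-responders as
// failed, and succeed only if enough journals remain.
public class BarrierWriteSketch {
    static final int MIN_JOURNALS = 2;    // assumed minimum for a safe write
    static final long TIMEOUT_MS = 1000;  // assumed per-write timeout

    interface JournalClient {
        boolean write(byte[] edits) throws Exception;  // true on ack
    }

    // Returns the journals that failed; throws if too few succeeded.
    static Set<String> barrierWrite(Map<String, JournalClient> journals,
                                    byte[] edits) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(journals.size());
        Map<String, Future<Boolean>> acks = new HashMap<>();
        // Step 1: send the write request to all active JNs in parallel.
        for (Map.Entry<String, JournalClient> e : journals.entrySet()) {
            acks.put(e.getKey(), pool.submit(() -> e.getValue().write(edits)));
        }
        Set<String> failed = new HashSet<>();
        for (Map.Entry<String, Future<Boolean>> e : acks.entrySet()) {
            try {
                // Step 2: wait for each response up to the timeout.
                if (!e.getValue().get(TIMEOUT_MS, TimeUnit.MILLISECONDS)) {
                    failed.add(e.getKey());
                }
            } catch (ExecutionException | TimeoutException ex) {
                failed.add(e.getKey());  // Step 3: would also be recorded in ZK
            }
        }
        pool.shutdownNow();
        // Step 4: abort the write if too few journals remain.
        if (journals.size() - failed.size() < MIN_JOURNALS) {
            throw new IllegalStateException("write aborted: only "
                + (journals.size() - failed.size()) + " journals succeeded");
        }
        return failed;
    }

    public static void main(String[] args) throws Exception {
        Map<String, JournalClient> journals = new HashMap<>();
        journals.put("jn1", edits -> true);
        journals.put("jn2", edits -> true);
        journals.put("jn3", edits -> { throw new Exception("down"); });
        Set<String> failed = barrierWrite(journals, new byte[0]);
        System.out.println("failed=" + failed);  // jn3 marked failed, write succeeds
    }
}
```

With 2 of 3 journals acknowledging, the write succeeds and only the dead journal is reported as failed; with 2 of 3 failing, the write aborts.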

The recovery protocol here is also a little tricky. I haven't seen a 
description of the specifics - there are a number of cases to handle - e.g. even 
if a write appears to fail from the perspective of the writer, it may have 
actually succeeded. Another situation: what happens if the writer crashes 
between step 2 and step 3 (so the JNs have differing numbers of txns, but ZK 
indicates they're all up to date)?


Regarding quorum commits:
bq. b. The journal set is fixed in the config. Hard to add/replace hardware.
There are protocols that could be used to change the quorum size/membership at 
runtime. They do add complexity, though, so I think they should be seen as a 
future improvement - but not be discounted as impossible.
Another point is that hardware replacement can easily be treated the same as a 
full crash and loss of disk. If one node completely crashes, a new node could 
be brought in with the same hostname with no complicated protocols.
Adding or removing nodes shouldn't be hard to support during a downtime window, 
which I think satisfies most use cases pretty well.


Regarding bookkeeper:
- other operational concerns aren't mentioned: e.g. it doesn't use Hadoop 
metrics, doesn't use the same style of configuration files, daemon scripts, 
etc. 

> Enable journal protocol based editlog streaming for standby namenode
> 
>
> Key: HDFS-3092
> URL: https://issues.apache.org/jira/browse/HDFS-3092
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, name-node
>Affects Versions: 0.24.0, 0.23.3
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: ComparisonofApproachesforHAJournals.pdf, 
> MultipleSharedJournals.pdf, MultipleSharedJournals.pdf, 
> MultipleSharedJournals.pdf
>
>
> Currently the standby namenode relies on reading shared editlogs to stay current 
> with the active namenode for namespace changes. BackupNode used streaming of 
> edits from the active namenode for the same purpose. This jira is to explore using 
> journal protocol based editlog streams for the standby namenode. A daemon in the 
> standby will get the editlogs from the active and write them to local edits. To 
> begin with, the existing standby mechanism of reading from a file will continue 
> to be used, reading from the local edits instead of from the shared edits.





[jira] [Commented] (HDFS-3308) hftp/webhdfs can't get tokens if authority has no port

2012-04-20 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258510#comment-13258510
 ] 

Aaron T. Myers commented on HDFS-3308:
--

bq. Hi Aaron, good catch. I think you are right that the patch won't work when 
default port is not used.

Thanks, thought so. Since it looks like you've already committed this, feel 
free to fix it in a follow-up JIRA, if that's easier.

> hftp/webhdfs can't get tokens if authority has no port
> --
>
> Key: HDFS-3308
> URL: https://issues.apache.org/jira/browse/HDFS-3308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-3308.patch
>
>
> Token acquisition fails if a hftp or webhdfs filesystem is obtained with no 
> port in the authority.  Building a token service requires a port, and the 
> renewer needs the port.  The default port is not being used when there is no 
> port in the uri.





[jira] [Commented] (HDFS-3308) hftp/webhdfs can't get tokens if authority has no port

2012-04-20 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258503#comment-13258503
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3308:
--

Hi Aaron, good catch.  I think you are right that the patch won't work when 
default port is not used.

> hftp/webhdfs can't get tokens if authority has no port
> --
>
> Key: HDFS-3308
> URL: https://issues.apache.org/jira/browse/HDFS-3308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-3308.patch
>
>
> Token acquisition fails if a hftp or webhdfs filesystem is obtained with no 
> port in the authority.  Building a token service requires a port, and the 
> renewer needs the port.  The default port is not being used when there is no 
> port in the uri.





[jira] [Commented] (HDFS-3308) hftp/webhdfs can't get tokens if authority has no port

2012-04-20 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258500#comment-13258500
 ] 

Hudson commented on HDFS-3308:
--

Integrated in Hadoop-Common-trunk-Commit #2113 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2113/])
HDFS-3308. Uses canonical URI to select delegation tokens in HftpFileSystem 
and WebHdfsFileSystem.  Contributed by Daryn Sharp (Revision 1328482)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1328482
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHftpDelegationToken.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java


> hftp/webhdfs can't get tokens if authority has no port
> --
>
> Key: HDFS-3308
> URL: https://issues.apache.org/jira/browse/HDFS-3308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-3308.patch
>
>
> Token acquisition fails if a hftp or webhdfs filesystem is obtained with no 
> port in the authority.  Building a token service requires a port, and the 
> renewer needs the port.  The default port is not being used when there is no 
> port in the uri.





[jira] [Commented] (HDFS-3308) hftp/webhdfs can't get tokens if authority has no port

2012-04-20 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258498#comment-13258498
 ] 

Hudson commented on HDFS-3308:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2187 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2187/])
HDFS-3308. Uses canonical URI to select delegation tokens in HftpFileSystem 
and WebHdfsFileSystem.  Contributed by Daryn Sharp (Revision 1328482)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1328482
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHftpDelegationToken.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java


> hftp/webhdfs can't get tokens if authority has no port
> --
>
> Key: HDFS-3308
> URL: https://issues.apache.org/jira/browse/HDFS-3308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-3308.patch
>
>
> Token acquisition fails if a hftp or webhdfs filesystem is obtained with no 
> port in the authority.  Building a token service requires a port, and the 
> renewer needs the port.  The default port is not being used when there is no 
> port in the uri.





[jira] [Commented] (HDFS-3222) DFSInputStream#openInfo should not silently get the length as 0 when locations length is zero for last partial block.

2012-04-20 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258493#comment-13258493
 ] 

Hadoop QA commented on HDFS-3222:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12523528/HDFS-3222.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 1 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2310//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2310//artifact/trunk/hadoop-hdfs-project/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2310//console

This message is automatically generated.

> DFSInputStream#openInfo should not silently get the length as 0 when 
> locations length is zero for last partial block.
> -
>
> Key: HDFS-3222
> URL: https://issues.apache.org/jira/browse/HDFS-3222
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 1.0.3, 2.0.0, 3.0.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-3222-Test.patch, HDFS-3222.patch
>
>
> I have seen one situation with an HBase cluster.
> The scenario is as follows:
> 1) 1.5 blocks have been written and synced.
> 2) Suddenly the cluster was restarted.
> A reader opened the file and tried to get the length. By this time the DNs 
> containing the partial block had not yet reported to the NN, so the number of 
> locations for this partial block would be 0. In this case, DFSInputStream 
> assumes one block size as the final size.
> The reader likewise assumes that one block size is the final length and sets 
> its end marker accordingly. Finally the reader ends up reading only partial 
> data. Due to this, the HMaster could not replay the complete edits. 
> Actually this happened with the 20 version. Looking at the code, the same 
> issue should be present in trunk as well.
> {code}
> int replicaNotFoundCount = locatedblock.getLocations().length;
> 
> for(DatanodeInfo datanode : locatedblock.getLocations()) {
> ..
> ..
>  // Namenode told us about these locations, but none know about the replica
> // means that we hit the race between pipeline creation start and end.
> // we require all 3 because some other exception could have happened
> // on a DN that has it.  we want to report that error
> if (replicaNotFoundCount == 0) {
>   return 0;
> }
> {code}





[jira] [Commented] (HDFS-3308) hftp/webhdfs can't get tokens if authority has no port

2012-04-20 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258484#comment-13258484
 ] 

Aaron T. Myers commented on HDFS-3308:
--

{code}
-this.nnAddr = NetUtils.createSocketAddrForHost(uri.getHost(), 
uri.getPort());
+this.nnAddr = NetUtils.createSocketAddr(uri.getAuthority(), 
getDefaultPort());
{code}

This change concerns me a little bit. It seems like the right thing to do would 
be to use the URI's port if it's present, and otherwise use the default port. Or 
am I missing something?
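The logic being described could look something like the helper below. This is a hypothetical illustration of "use the URI's port if present, else the default" - selectPort and its call sites are made up for the example and are not the actual HftpFileSystem/WebHdfsFileSystem code:

```java
import java.net.URI;

// Illustration of port selection: prefer the port from the URI when it is
// present, otherwise fall back to the filesystem's default port.
public class PortSelection {
    static int selectPort(URI uri, int defaultPort) {
        // URI.getPort() returns -1 when the authority carries no port.
        return uri.getPort() != -1 ? uri.getPort() : defaultPort;
    }

    public static void main(String[] args) {
        // Explicit port in the authority wins.
        System.out.println(selectPort(URI.create("hftp://nn.example.com:50470/"), 50070));
        // No port in the authority: the default is used.
        System.out.println(selectPort(URI.create("hftp://nn.example.com/"), 50070));
    }
}
```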

> hftp/webhdfs can't get tokens if authority has no port
> --
>
> Key: HDFS-3308
> URL: https://issues.apache.org/jira/browse/HDFS-3308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-3308.patch
>
>
> Token acquisition fails if a hftp or webhdfs filesystem is obtained with no 
> port in the authority.  Building a token service requires a port, and the 
> renewer needs the port.  The default port is not being used when there is no 
> port in the uri.





[jira] [Commented] (HDFS-3308) hftp/webhdfs can't get tokens if authority has no port

2012-04-20 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258477#comment-13258477
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3308:
--

+1 patch looks good.

> hftp/webhdfs can't get tokens if authority has no port
> --
>
> Key: HDFS-3308
> URL: https://issues.apache.org/jira/browse/HDFS-3308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-3308.patch
>
>
> Token acquisition fails if a hftp or webhdfs filesystem is obtained with no 
> port in the authority.  Building a token service requires a port, and the 
> renewer needs the port.  The default port is not being used when there is no 
> port in the uri.





[jira] [Commented] (HDFS-2617) Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution

2012-04-20 Thread Jakob Homan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258445#comment-13258445
 ] 

Jakob Homan commented on HDFS-2617:
---

Sweet, Owen.  Thanks for doing this!


> Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution
> --
>
> Key: HDFS-2617
> URL: https://issues.apache.org/jira/browse/HDFS-2617
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Attachments: HDFS-2617-a.patch, HDFS-2617-b.patch
>
>
> The current approach to secure and authenticate nn web services is based on 
> Kerberized SSL and was developed when a SPNEGO solution wasn't available. Now 
> that we have one, we can get rid of the non-standard KSSL and use SPNEGO 
> throughout.  This will simplify setup and configuration.  Also, Kerberized 
> SSL is a non-standard approach with its own quirks and dark corners 
> (HDFS-2386).





[jira] [Updated] (HDFS-2617) Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution

2012-04-20 Thread Owen O'Malley (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HDFS-2617:


Attachment: HDFS-2617-b.patch

I've ported Jakob's patch over to the branch-1.0 line.

> Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution
> --
>
> Key: HDFS-2617
> URL: https://issues.apache.org/jira/browse/HDFS-2617
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Attachments: HDFS-2617-a.patch, HDFS-2617-b.patch
>
>
> The current approach to secure and authenticate nn web services is based on 
> Kerberized SSL and was developed when a SPNEGO solution wasn't available. Now 
> that we have one, we can get rid of the non-standard KSSL and use SPNEGO 
> throughout.  This will simplify setup and configuration.  Also, Kerberized 
> SSL is a non-standard approach with its own quirks and dark corners 
> (HDFS-2386).





[jira] [Commented] (HDFS-3308) hftp/webhdfs can't get tokens if authority has no port

2012-04-20 Thread Robert Joseph Evans (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258438#comment-13258438
 ] 

Robert Joseph Evans commented on HDFS-3308:
---

The changes look fairly simple: tests + getCanonicalUri instead of getUri, and 
adding in the ugi parameter for testing, I assume.  +1 (non-binding).

> hftp/webhdfs can't get tokens if authority has no port
> --
>
> Key: HDFS-3308
> URL: https://issues.apache.org/jira/browse/HDFS-3308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-3308.patch
>
>
> Token acquisition fails if a hftp or webhdfs filesystem is obtained with no 
> port in the authority.  Building a token service requires a port, and the 
> renewer needs the port.  The default port is not being used when there is no 
> port in the uri.





[jira] [Updated] (HDFS-3092) Enable journal protocol based editlog streaming for standby namenode

2012-04-20 Thread Bikas Saha (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HDFS-3092:
-

Attachment: ComparisonofApproachesforHAJournals.pdf

Sanjay and I were trying to objectively compare different approaches 
(HDFS-3077, HDFS-3092, BookKeeper). I have attached a document outlining the 
observations.
Hopefully, this will help in structuring the discussions going forward.

> Enable journal protocol based editlog streaming for standby namenode
> 
>
> Key: HDFS-3092
> URL: https://issues.apache.org/jira/browse/HDFS-3092
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, name-node
>Affects Versions: 0.24.0, 0.23.3
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: ComparisonofApproachesforHAJournals.pdf, 
> MultipleSharedJournals.pdf, MultipleSharedJournals.pdf, 
> MultipleSharedJournals.pdf
>
>
> Currently the standby namenode relies on reading shared editlogs to stay current 
> with the active namenode for namespace changes. BackupNode used streaming of 
> edits from the active namenode for the same purpose. This jira is to explore using 
> journal protocol based editlog streams for the standby namenode. A daemon in the 
> standby will get the editlogs from the active and write them to local edits. To 
> begin with, the existing standby mechanism of reading from a file will continue 
> to be used, reading from the local edits instead of from the shared edits.





[jira] [Updated] (HDFS-3222) DFSInputStream#openInfo should not silently get the length as 0 when locations length is zero for last partial block.

2012-04-20 Thread Uma Maheswara Rao G (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-3222:
--

Target Version/s: 2.0.0, 3.0.0  (was: 3.0.0, 2.0.0)
  Status: Patch Available  (was: Open)

> DFSInputStream#openInfo should not silently get the length as 0 when 
> locations length is zero for last partial block.
> -
>
> Key: HDFS-3222
> URL: https://issues.apache.org/jira/browse/HDFS-3222
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 1.0.3, 2.0.0, 3.0.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-3222-Test.patch, HDFS-3222.patch
>
>
> I have seen one situation with an HBase cluster.
> The scenario is as follows:
> 1) 1.5 blocks have been written and synced.
> 2) Suddenly the cluster was restarted.
> A reader opened the file and tried to get the length. By this time the DNs 
> containing the partial block had not yet reported to the NN, so the number of 
> locations for this partial block would be 0. In this case, DFSInputStream 
> assumes one block size as the final size.
> The reader likewise assumes that one block size is the final length and sets 
> its end marker accordingly. Finally the reader ends up reading only partial 
> data. Due to this, the HMaster could not replay the complete edits. 
> Actually this happened with the 20 version. Looking at the code, the same 
> issue should be present in trunk as well.
> {code}
> int replicaNotFoundCount = locatedblock.getLocations().length;
> 
> for(DatanodeInfo datanode : locatedblock.getLocations()) {
> ..
> ..
>  // Namenode told us about these locations, but none know about the replica
> // means that we hit the race between pipeline creation start and end.
> // we require all 3 because some other exception could have happened
> // on a DN that has it.  we want to report that error
> if (replicaNotFoundCount == 0) {
>   return 0;
> }
> {code}





[jira] [Commented] (HDFS-3051) A zero-copy ScatterGatherRead api from FSDataInputStream

2012-04-20 Thread Tim Broberg (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258432#comment-13258432
 ] 

Tim Broberg commented on HDFS-3051:
---

This interface adds some complexity to the ZeroCopyCompressor interface, 
HADOOP-8148. Debugging traversal of a list of objects across JNI is likely to 
take some work.

Are we approaching any kind of consensus on whether to incorporate this or not?

Also, how large are the individual buffers in these lists, typically?

> A zero-copy ScatterGatherRead api from FSDataInputStream
> 
>
> Key: HDFS-3051
> URL: https://issues.apache.org/jira/browse/HDFS-3051
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Reporter: dhruba borthakur
>Assignee: dhruba borthakur
>
> It will be nice if we can get a new API from FSDtaInputStream that allows for 
> zero-copy read for hdfs readers.





[jira] [Commented] (HDFS-3308) hftp/webhdfs can't get tokens if authority has no port

2012-04-20 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258427#comment-13258427
 ] 

Hadoop QA commented on HDFS-3308:
-

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12523519/HDFS-3308.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified test 
files.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2309//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2309//console

This message is automatically generated.

> hftp/webhdfs can't get tokens if authority has no port
> --
>
> Key: HDFS-3308
> URL: https://issues.apache.org/jira/browse/HDFS-3308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-3308.patch
>
>
> Token acquisition fails if a hftp or webhdfs filesystem is obtained with no 
> port in the authority.  Building a token service requires a port, and the 
> renewer needs the port.  The default port is not being used when there is no 
> port in the uri.

--




[jira] [Updated] (HDFS-3222) DFSInputStream#openInfo should not silently get the length as 0 when locations length is zero for last partial block.

2012-04-20 Thread Uma Maheswara Rao G (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-3222:
--

Attachment: HDFS-3222.patch

Attached the initial patch with the proposed idea.

> DFSInputStream#openInfo should not silently get the length as 0 when 
> locations length is zero for last partial block.
> -
>
> Key: HDFS-3222
> URL: https://issues.apache.org/jira/browse/HDFS-3222
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 1.0.3, 2.0.0, 3.0.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-3222-Test.patch, HDFS-3222.patch
>
>
> I have seen one situation with an HBase cluster.
> The scenario is as follows:
> 1) 1.5 blocks had been written and synced.
> 2) Suddenly the cluster was restarted.
> A reader opened the file and tried to get its length. By this time the DNs 
> holding the partial block had not yet reported to the NN, so the locations 
> for this partial block were empty. In this case, DFSInputStream silently 
> assumes the size of the completed blocks (one block) as the final size.
> The reader then sets its end marker at one block and ends up reading only 
> partial data. Because of this, the HMaster could not replay the complete 
> edits.
> This actually happened on the 0.20 version; looking at the code, the same 
> issue should be present in trunk as well.
> {code}
> int replicaNotFoundCount = locatedblock.getLocations().length;
> 
> for(DatanodeInfo datanode : locatedblock.getLocations()) {
> ..
> ..
>  // Namenode told us about these locations, but none know about the replica
> // means that we hit the race between pipeline creation start and end.
> // we require all 3 because some other exception could have happened
> // on a DN that has it.  we want to report that error
> if (replicaNotFoundCount == 0) {
>   return 0;
> }
> {code}
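The direction of the proposed fix can be sketched as follows: when the last partial block reports zero locations, retry fetching locations a few times instead of silently treating the completed blocks as the whole file. Names, the retry count, and the control flow are illustrative, not the actual DFSInputStream code:

```java
import java.util.Collections;
import java.util.List;

public class LastBlockLengthSketch {
  static final int MAX_RETRIES = 3; // illustrative retry budget

  // Returns the length of the last (partial) block, or throws if no DN
  // holding the replica has registered with the NN after all retries.
  static long lastBlockLength(List<String> locations, long lengthFromDn)
      throws java.io.IOException {
    for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
      if (!locations.isEmpty()) {
        return lengthFromDn; // ask a DN that actually has the replica
      }
      // The real client would sleep here and re-fetch block locations
      // from the NameNode before the next attempt.
    }
    throw new java.io.IOException(
        "Cannot obtain last block length: no locations reported");
  }

  public static void main(String[] args) throws Exception {
    // Locations available: the DN-reported length is trusted.
    System.out.println(lastBlockLength(List.of("dn1:50010"), 12345L));
    // No locations even after retries: fail loudly instead of returning 0.
    try {
      lastBlockLength(Collections.emptyList(), 0L);
    } catch (java.io.IOException e) {
      System.out.println("failed as expected: " + e.getMessage());
    }
  }
}
```

Failing loudly (or retrying) means a reader such as the HMaster sees an error it can handle rather than a silently truncated file.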





[jira] [Updated] (HDFS-3206) Miscellaneous xml cleanups for OEV

2012-04-20 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3206:
--

  Resolution: Fixed
   Fix Version/s: 2.0.0
Target Version/s:   (was: 2.0.0)
  Status: Resolved  (was: Patch Available)

> Miscellaneous xml cleanups for OEV
> --
>
> Key: HDFS-3206
> URL: https://issues.apache.org/jira/browse/HDFS-3206
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 2.0.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HDFS-3206.001.patch, HDFS-3206.002.patch, 
> HDFS-3206.003.patch, HDFS-3206.004.patch
>
>
> * SetOwner operations can change both the user and group which a file or 
> directory belongs to, or just one of those.  Currently, in the XML 
> serialization/deserialization code, we don't handle the case where just the 
> group is set, not the user.  We should handle this case.
> * consistently serialize generation stamp as GENSTAMP.





[jira] [Commented] (HDFS-3306) fuse_dfs: don't lock release operations

2012-04-20 Thread Colin Patrick McCabe (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258420#comment-13258420
 ] 

Colin Patrick McCabe commented on HDFS-3306:


To test this, I mounted a fuse_dfs filesystem, created a bunch of files, and 
then deleted them.

> fuse_dfs: don't lock release operations
> ---
>
> Key: HDFS-3306
> URL: https://issues.apache.org/jira/browse/HDFS-3306
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-3306.001.patch
>
>
> There's no need to lock release operations in FUSE, because release can only 
> be called once on a fuse_file_info structure.





[jira] [Commented] (HDFS-3304) fix fuse_dfs build

2012-04-20 Thread Colin Patrick McCabe (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258419#comment-13258419
 ] 

Colin Patrick McCabe commented on HDFS-3304:


No, that command doesn't work for me.  The problem seems to be that automake 
defaults to creating ${TARGET}/usr/local/lib64 for me, and 
${TARGET}/usr/local/lib for you.  This should be fixable by passing a --libdir 
switch with the desired directory to automake.

> might be worth going straight to HDFS-3251 rather than fixing this.

Makes sense to me!

> fix fuse_dfs build
> --
>
> Key: HDFS-3304
> URL: https://issues.apache.org/jira/browse/HDFS-3304
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Colin Patrick McCabe
>Priority: Minor
>
> The fuse_dfs build is broken in several ways.  If you run:
> {code}
> mvn compile -DskipTests -Pnative
> mvn compile -DskipTests -Pfuse
> {code}
> You get the following error message:
> {code}
> [exec] 
> /usr/lib64/gcc/x86_64-suse-linux/4.6/../../../../x86_64-suse-linux/bin/ld: 
> cannot find -lhdfs
> [exec] collect2: ld returned 1 exit status
> [exec] make[1]: *** [fuse_dfs] Error 1
> [exec] make: *** [all-recursive] Error 1
> {code}
> libhdfs.so was created, but the -Pfuse build doesn't know where it is and 
> can't link against it.
> Also, should ''mvn install -Pfuse'' be copying fuse_dfs somewhere?





[jira] [Updated] (HDFS-3308) hftp/webhdfs can't get tokens if authority has no port

2012-04-20 Thread Daryn Sharp (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-3308:
--

Status: Patch Available  (was: Open)

> hftp/webhdfs can't get tokens if authority has no port
> --
>
> Key: HDFS-3308
> URL: https://issues.apache.org/jira/browse/HDFS-3308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-3308.patch
>
>
> Token acquisition fails if a hftp or webhdfs filesystem is obtained with no 
> port in the authority.  Building a token service requires a port, and the 
> renewer needs the port.  The default port is not being used when there is no 
> port in the uri.





[jira] [Updated] (HDFS-3308) hftp/webhdfs can't get tokens if authority has no port

2012-04-20 Thread Daryn Sharp (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-3308:
--

Attachment: HDFS-3308.patch

Uses the canonical URI instead of the raw URI. Expanded the test cases considerably.
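The port-defaulting behavior at issue can be illustrated with a small sketch. The helper name and the default port value (50070) are assumptions for illustration only, not the actual patch code:

```java
import java.net.URI;

public class TokenServiceSketch {
  // Build a token service string, falling back to the scheme's default
  // port when the URI authority carries none.
  static String buildService(URI uri, int defaultPort) {
    int port = uri.getPort() == -1 ? defaultPort : uri.getPort();
    return uri.getHost() + ":" + port;
  }

  public static void main(String[] args) {
    // Without the fallback, uri.getPort() is -1 and the service string
    // would come out as "namenode.example.com:-1".
    System.out.println(buildService(URI.create("hftp://namenode.example.com"), 50070));
    System.out.println(buildService(URI.create("hftp://namenode.example.com:8020"), 50070));
  }
}
```

Canonicalizing the URI up front (as the patch comment describes) achieves the same effect: the default port is filled in once, so both the service string and the renewer see a concrete port.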

> hftp/webhdfs can't get tokens if authority has no port
> --
>
> Key: HDFS-3308
> URL: https://issues.apache.org/jira/browse/HDFS-3308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-3308.patch
>
>
> Token acquisition fails if a hftp or webhdfs filesystem is obtained with no 
> port in the authority.  Building a token service requires a port, and the 
> renewer needs the port.  The default port is not being used when there is no 
> port in the uri.





[jira] [Created] (HDFS-3308) hftp/webhdfs can't get tokens if authority has no port

2012-04-20 Thread Daryn Sharp (Created) (JIRA)
hftp/webhdfs can't get tokens if authority has no port
--

 Key: HDFS-3308
 URL: https://issues.apache.org/jira/browse/HDFS-3308
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.23.0, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical


Token acquisition fails if a hftp or webhdfs filesystem is obtained with no 
port in the authority.  Building a token service requires a port, and the 
renewer needs the port.  The default port is not being used when there is no 
port in the uri.





[jira] [Updated] (HDFS-119) logSync() may block NameNode forever.

2012-04-20 Thread Brandon Li (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-119:


Attachment: HDFS119.branch1.0.patch

Backported Konstantin's patch to branch-1.0.

> logSync() may block NameNode forever.
> -
>
> Key: HDFS-119
> URL: https://issues.apache.org/jira/browse/HDFS-119
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Reporter: Konstantin Shvachko
>Assignee: Suresh Srinivas
> Fix For: 0.21.0, 1.1.0
>
> Attachments: HDFS-119-branch-1.0.patch, HDFS-119-branch-1.0.patch, 
> HDFS-119.patch, HDFS-119.patch, HDFS119.branch1.0.patch
>
>
> # {{FSEditLog.logSync()}} first waits until {{isSyncRunning}} is false and 
> then performs syncing to file streams by calling 
> {{EditLogOutputStream.flush()}}.
> If an exception is thrown after {{isSyncRunning}} is set to {{true}} all 
> threads will always wait on this condition.
> An {{IOException}} may be thrown by {{EditLogOutputStream.setReadyToFlush()}} 
> or a {{RuntimeException}} may be thrown by {{EditLogOutputStream.flush()}} or 
> by {{processIOError()}}.
> # The loop that calls {{eStream.flush()}} for multiple 
> {{EditLogOutputStream}}-s is not synchronized, which means that another 
> thread may encounter an error and modify {{editStreams}} by say calling 
> {{processIOError()}}. Then the iterating process in {{logSync()}} will break 
> with {{IndexOutOfBoundsException}}.
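The usual fix for the first problem is to clear the in-progress flag in a finally block, so a failed flush cannot leave every other thread waiting forever. The sketch below uses illustrative names, not the actual FSEditLog code:

```java
public class LogSyncSketch {
  private boolean isSyncRunning = false;

  void logSync(Runnable flush) throws InterruptedException {
    synchronized (this) {
      while (isSyncRunning) {
        wait(); // wait for the in-flight sync to finish
      }
      isSyncRunning = true;
    }
    try {
      flush.run(); // may throw; must not leave the flag set
    } finally {
      synchronized (this) {
        isSyncRunning = false;
        notifyAll(); // wake waiters even if the flush failed
      }
    }
  }

  public static void main(String[] args) throws Exception {
    LogSyncSketch s = new LogSyncSketch();
    try {
      s.logSync(() -> { throw new RuntimeException("disk error"); });
    } catch (RuntimeException expected) {
      // ignored: the point is that the flag was still reset
    }
    // Without the finally block this second call would hang forever.
    s.logSync(() -> System.out.println("second sync still runs"));
  }
}
```

The second problem (unsynchronized iteration over the stream list) is separate: the flush loop must either hold the lock or iterate over a stable snapshot of the streams.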





[jira] [Commented] (HDFS-3307) when save FSImage ,HDFS( or SecondaryNameNode or FSImage)can't handle some file whose file name has some special messy code(乱码)

2012-04-20 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258331#comment-13258331
 ] 

Todd Lipcon commented on HDFS-3307:
---

Rather than change the code to not use UTF8, I think we should figure out why 
the UTF8 writeString function is writing the wrong data. Is "乱码" the string 
that causes the problem? I tried to reproduce using this string, but it works 
fine here.

(I did "hadoop fs -put /etc/issue '乱码'", then successfully restarted and catted 
the file)
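A quick way to sanity-check the characters themselves is a modified-UTF-8 roundtrip of the reported string. Note this uses the standard DataOutputStream/DataInputStream pair, not Hadoop's legacy UTF8 class, so it only demonstrates that the string is encodable, not where the legacy class goes wrong:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class Utf8RoundTrip {
  public static void main(String[] args) throws IOException {
    String name = "乱码"; // the string reported in this issue
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    new DataOutputStream(bos).writeUTF(name);
    String back = new DataInputStream(
        new ByteArrayInputStream(bos.toByteArray())).readUTF();
    System.out.println(name.equals(back) ? "roundtrip ok" : "roundtrip FAILED");
  }
}
```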

> when save FSImage  ,HDFS( or  SecondaryNameNode or FSImage)can't handle some 
> file whose file name has some special messy code(乱码)
> -
>
> Key: HDFS-3307
> URL: https://issues.apache.org/jira/browse/HDFS-3307
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.20.1
> Environment: SUSE LINUX
>Reporter: yixiaohua
> Attachments: FSImage.java
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> this is the log information of the exception from the SecondaryNameNode: 
> 2012-03-28 00:48:42,553 ERROR 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: 
> java.io.IOException: Found lease for
>  non-existent file 
> /user/boss/pgv/fission/task16/split/_temporary/_attempt_201203271849_0016_r_000174_0/@???
> ??tor.qzone.qq.com/keypart-00174
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFilesUnderConstruction(FSImage.java:1211)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:959)
> at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:589)
> at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$000(SecondaryNameNode.java:473)
> at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:350)
> at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:314)
> at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:225)
> at java.lang.Thread.run(Thread.java:619)
> this is the log information  about the file from namenode:
> 2012-03-28 00:32:26,528 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=boss,boss 
> ip=/10.131.16.34cmd=create  
> src=/user/boss/pgv/fission/task16/split/_temporary/_attempt_201203271849_0016_r_000174_0/
>   @?tor.qzone.qq.com/keypart-00174 dst=null
> perm=boss:boss:rw-r--r--
> 2012-03-28 00:37:42,387 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.allocateBlock: 
> /user/boss/pgv/fission/task16/split/_temporary/_attempt_201203271849_0016_r_000174_0/
>   @?tor.qzone.qq.com/keypart-00174. 
> blk_2751836614265659170_184668759
> 2012-03-28 00:37:42,696 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.completeFile: file 
> /user/boss/pgv/fission/task16/split/_temporary/_attempt_201203271849_0016_r_000174_0/
>   @?tor.qzone.qq.com/keypart-00174 is closed by 
> DFSClient_attempt_201203271849_0016_r_000174_0
> 2012-03-28 00:37:50,315 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=boss,boss 
> ip=/10.131.16.34cmd=rename  
> src=/user/boss/pgv/fission/task16/split/_temporary/_attempt_201203271849_0016_r_000174_0/
>   @?tor.qzone.qq.com/keypart-00174 
> dst=/user/boss/pgv/fission/task16/split/  @?
> tor.qzone.qq.com/keypart-00174  perm=boss:boss:rw-r--r--
> After checking the code that saves the FSImage, I found a problem that may be 
> a bug in the HDFS code; I paste it below:
> -this is the saveFSImage method in FSImage.java; I have 
> marked the problem code
> /**
>* Save the contents of the FS image to the file.
>*/
>   void saveFSImage(File newFile) throws IOException {
> FSNamesystem fsNamesys = FSNamesystem.getFSNamesystem();
> FSDirectory fsDir = fsNamesys.dir;
> long startTime = FSNamesystem.now();
> //
> // Write out data
> //
> DataOutputStream out = new DataOutputStream(
> new BufferedOutputStream(
>  new 
> FileOutputStream(newFile)));
> try {
>   .
> 
>   // save the rest of the nodes
>   saveImage(strbuf, 0, fsDir.rootDir, out);--problem
>   fsNamesys.saveFilesUnderConstruction(out);--problem  
> detail is below
>   strbuf = null;
> } finally {
>   out.close();
> }
> LOG.info("Image fil

[jira] [Commented] (HDFS-3305) GetImageServlet should consider SBN a valid requestor in a secure HA setup

2012-04-20 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258230#comment-13258230
 ] 

Hudson commented on HDFS-3305:
--

Integrated in Hadoop-Mapreduce-trunk #1055 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1055/])
HDFS-3305. GetImageServlet should consider SBN a valid requestor in a 
secure HA setup. Contributed by Aaron T. Myers. (Revision 1328115)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1328115
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetImageServlet.java


> GetImageServlet should consider SBN a valid requestor in a secure HA setup
> --
>
> Key: HDFS-3305
> URL: https://issues.apache.org/jira/browse/HDFS-3305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: 2.0.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Fix For: 2.0.0
>
> Attachments: HDFS-3305.patch
>
>
> Right now only the NN and 2NN are considered valid requestors. This won't 
> work if the ANN and SBN use distinct principal names.
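The fix amounts to including the SBN's principal in the set of valid image requestors alongside the ANN's and 2NN's. The sketch below is illustrative, with placeholder principal names, not the actual GetImageServlet logic:

```java
import java.util.Set;

public class ValidRequestorSketch {
  // A requestor is valid only if its Kerberos principal is in the known set.
  static boolean isValidRequestor(String remotePrincipal, Set<String> valid) {
    return remotePrincipal != null && valid.contains(remotePrincipal);
  }

  public static void main(String[] args) {
    Set<String> valid = Set.of(
        "nn/active.example.com@EXAMPLE.COM",
        "nn/standby.example.com@EXAMPLE.COM", // SBN with a distinct principal
        "secondarynn/2nn.example.com@EXAMPLE.COM");
    System.out.println(isValidRequestor("nn/standby.example.com@EXAMPLE.COM", valid));
    System.out.println(isValidRequestor("user/evil.example.com@EXAMPLE.COM", valid));
  }
}
```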





[jira] [Commented] (HDFS-3305) GetImageServlet should consider SBN a valid requestor in a secure HA setup

2012-04-20 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258209#comment-13258209
 ] 

Hudson commented on HDFS-3305:
--

Integrated in Hadoop-Hdfs-trunk #1020 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1020/])
HDFS-3305. GetImageServlet should consider SBN a valid requestor in a 
secure HA setup. Contributed by Aaron T. Myers. (Revision 1328115)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1328115
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/GetImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetImageServlet.java


> GetImageServlet should consider SBN a valid requestor in a secure HA setup
> --
>
> Key: HDFS-3305
> URL: https://issues.apache.org/jira/browse/HDFS-3305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, name-node
>Affects Versions: 2.0.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Fix For: 2.0.0
>
> Attachments: HDFS-3305.patch
>
>
> Right now only the NN and 2NN are considered valid requestors. This won't 
> work if the ANN and SBN use distinct principal names.





[jira] [Reopened] (HDFS-3307) when save FSImage ,HDFS( or SecondaryNameNode or FSImage)can't handle some file whose file name has some special messy code(乱码)

2012-04-20 Thread yixiaohua (Reopened) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yixiaohua reopened HDFS-3307:
-


> when save FSImage  ,HDFS( or  SecondaryNameNode or FSImage)can't handle some 
> file whose file name has some special messy code(乱码)
> -
>
> Key: HDFS-3307
> URL: https://issues.apache.org/jira/browse/HDFS-3307
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.20.1
> Environment: SUSE LINUX
>Reporter: yixiaohua
> Attachments: FSImage.java
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> this is the log information of the exception from the SecondaryNameNode: 
> 2012-03-28 00:48:42,553 ERROR 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: 
> java.io.IOException: Found lease for
>  non-existent file 
> /user/boss/pgv/fission/task16/split/_temporary/_attempt_201203271849_0016_r_000174_0/@???
> ??tor.qzone.qq.com/keypart-00174
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFilesUnderConstruction(FSImage.java:1211)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:959)
> at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:589)
> at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$000(SecondaryNameNode.java:473)
> at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:350)
> at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:314)
> at 
> org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:225)
> at java.lang.Thread.run(Thread.java:619)
> this is the log information  about the file from namenode:
> 2012-03-28 00:32:26,528 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=boss,boss 
> ip=/10.131.16.34cmd=create  
> src=/user/boss/pgv/fission/task16/split/_temporary/_attempt_201203271849_0016_r_000174_0/
>   @?tor.qzone.qq.com/keypart-00174 dst=null
> perm=boss:boss:rw-r--r--
> 2012-03-28 00:37:42,387 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.allocateBlock: 
> /user/boss/pgv/fission/task16/split/_temporary/_attempt_201203271849_0016_r_000174_0/
>   @?tor.qzone.qq.com/keypart-00174. 
> blk_2751836614265659170_184668759
> 2012-03-28 00:37:42,696 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.completeFile: file 
> /user/boss/pgv/fission/task16/split/_temporary/_attempt_201203271849_0016_r_000174_0/
>   @?tor.qzone.qq.com/keypart-00174 is closed by 
> DFSClient_attempt_201203271849_0016_r_000174_0
> 2012-03-28 00:37:50,315 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=boss,boss 
> ip=/10.131.16.34cmd=rename  
> src=/user/boss/pgv/fission/task16/split/_temporary/_attempt_201203271849_0016_r_000174_0/
>   @?tor.qzone.qq.com/keypart-00174 
> dst=/user/boss/pgv/fission/task16/split/  @?
> tor.qzone.qq.com/keypart-00174  perm=boss:boss:rw-r--r--
> After checking the code that saves the FSImage, I found a problem that may be 
> a bug in the HDFS code; I paste it below:
> -this is the saveFSImage method in FSImage.java; I have 
> marked the problem code
> /**
>* Save the contents of the FS image to the file.
>*/
>   void saveFSImage(File newFile) throws IOException {
> FSNamesystem fsNamesys = FSNamesystem.getFSNamesystem();
> FSDirectory fsDir = fsNamesys.dir;
> long startTime = FSNamesystem.now();
> //
> // Write out data
> //
> DataOutputStream out = new DataOutputStream(
> new BufferedOutputStream(
>  new 
> FileOutputStream(newFile)));
> try {
>   .
> 
>   // save the rest of the nodes
>   saveImage(strbuf, 0, fsDir.rootDir, out);--problem
>   fsNamesys.saveFilesUnderConstruction(out);--problem  
> detail is below
>   strbuf = null;
> } finally {
>   out.close();
> }
> LOG.info("Image file of size " + newFile.length() + " saved in " 
> + (FSNamesystem.now() - startTime)/1000 + " seconds.");
>   }
>  /**
>* Save file tree image starting from the given root.
>* This is a recursive procedure, which first saves all children of
>* a current directory and then moves inside the sub-directories.
>*/
>   private static void saveImage(ByteBuffer parentPrefix,
>