[jira] [Updated] (HDFS-4306) PBHelper.convertLocatedBlock misses converting BlockToken

2013-01-08 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-4306:


Attachment: HDFS-4306.v4.patch

I manually ran the failed test locally and it passed; resubmitting to confirm. 

> PBHelper.convertLocatedBlock misses converting BlockToken
> 
>
> Key: HDFS-4306
> URL: https://issues.apache.org/jira/browse/HDFS-4306
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HDFS-4306.patch, HDFS-4306.v2.patch, HDFS-4306.v3.patch, 
> HDFS-4306.v4.patch, HDFS-4306.v4.patch
>
>
> PBHelper.convertLocatedBlock (which converts from a protobuf array to a 
> primitive array) misses converting the BlockToken.
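
For illustration, a minimal sketch of the shape of such a fix (hypothetical 
signatures; the real method lives in PBHelper): delegate each array element to 
the single-block convert(), which also copies the block token.

{code}
public static LocatedBlock[] convertLocatedBlock(LocatedBlockProto[] lb) {
  if (lb == null) {
    return null;
  }
  LocatedBlock[] blocks = new LocatedBlock[lb.length];
  for (int i = 0; i < lb.length; i++) {
    // convert(LocatedBlockProto) is assumed to set the block token;
    // element-wise loops that rebuild fields by hand were dropping it
    blocks[i] = convert(lb[i]);
  }
  return blocks;
}
{code}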

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4340) Update addBlock() to include inode id as an additional argument

2013-01-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546729#comment-13546729
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4340:
--

- The DFSOutputStream constructor for append does not set fileId.  It would be 
great if we could change fileId to final; however, that may need some 
refactoring of the constructors.

- The DFSOutputStream constructor for create should not have an additional RPC 
call to namenode.getFileInfo(src).  Change create(..) to return fileId (or 
HdfsFileStatus).

- With fileId, the source path is no longer needed for 
ClientProtocol.addBlock(..).  How about using the src String parameter to pass 
fileId?  The server side can determine whether it is a path or a file id by the 
leading "/" (see the sketch at the end of this comment).  This may be a wild 
suggestion.  Please see if you think it is a good idea.

- Need to change WebHDFS to support file ID.  This may be done in a separate 
JIRA.

- Before changing WebHDFS, add file_id to the other HdfsFileStatus constructor 
and then pass GRANDFATHER_INODE_ID in the caller.

- Similarly, add file_id to HdfsLocatedFileStatus constructor.  Then you don't 
have to add a new constructor.

- It should not pass GRANDFATHER_INODE_ID in checkLease(..).  The file id 
should be available in all cases.  We need to add fileId to other RPC methods 
such as abandonBlock(..), complete(..), etc.  Do you plan to do it separately?

- Why change SequentialNumber to public?


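For illustration, a minimal sketch of the path-vs-id discrimination suggested 
above (hypothetical helpers, not actual NameNode code; HDFS paths are 
absolute, so a src value not starting with "/" can carry a numeric file id):

{code}
static boolean isFileId(String src) {
  // absolute paths start with "/"; anything else is treated as a file id
  return src != null && !src.startsWith("/");
}

static long parseFileId(String src) {
  // caller has already checked isFileId(src)
  return Long.parseLong(src);
}
{code}
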

> Update addBlock() to include inode id as an additional argument
> 
>
> Key: HDFS-4340
> URL: https://issues.apache.org/jira/browse/HDFS-4340
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-4340.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4306) PBHelper.convertLocatedBlock misses converting BlockToken

2013-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546749#comment-13546749
 ] 

Hadoop QA commented on HDFS-4306:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563718/HDFS-4306.v4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3792//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3792//console

This message is automatically generated.

> PBHelper.convertLocatedBlock misses converting BlockToken
> 
>
> Key: HDFS-4306
> URL: https://issues.apache.org/jira/browse/HDFS-4306
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HDFS-4306.patch, HDFS-4306.v2.patch, HDFS-4306.v3.patch, 
> HDFS-4306.v4.patch, HDFS-4306.v4.patch
>
>
> PBHelper.convertLocatedBlock (which converts from a protobuf array to a 
> primitive array) misses converting the BlockToken.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4244) Support deleting snapshots

2013-01-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546752#comment-13546752
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4244:
--

- Deleting a snapshot file may lead to block deletion, so 
removeSelfFromCircle() is not enough.  It should use 
collectSubtreeBlocksAndClear(..).

- There is no longer a need to make INodesInPath.isSnapshot() public.

- Both createSnapshot and deleteSnapshot have parameters snapshotName and then 
path, but renameSnapshot has path and then the old and the new names.  How 
about changing createSnapshot and deleteSnapshot to take path first and then 
snapshotName, as sketched below?
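
For illustration, a sketch of the suggested consistent ordering (hypothetical 
signatures, not the actual ClientProtocol):

{code}
import java.io.IOException;

// path first, then the snapshot name(s), for all three operations
interface SnapshotOps {
  void createSnapshot(String path, String snapshotName) throws IOException;
  void deleteSnapshot(String path, String snapshotName) throws IOException;
  void renameSnapshot(String path, String oldName, String newName)
      throws IOException;
}
{code}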


> Support deleting snapshots
> --
>
> Key: HDFS-4244
> URL: https://issues.apache.org/jira/browse/HDFS-4244
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-4244.001.patch, HDFS-4244.002.patch, 
> HDFS-4244.003.patch, HDFS-4244.004.patch, HDFS-4244.005.patch, 
> HDFS-4244.006.patch
>
>
> Provide functionality to delete a snapshot, given the name of the snapshot 
> and the path to the directory where the snapshot was taken.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3970) BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead of DataStorage to read prev version file.

2013-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546772#comment-13546772
 ] 

Hudson commented on HDFS-3970:
--

Integrated in Hadoop-Yarn-trunk #90 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/90/])
HDFS-3970. Fix bug causing rollback of HDFS upgrade to result in bad 
VERSION file. Contributed by Vinay and Andrew Wang. (Revision 1430037)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430037
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSRollback.java


> BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead 
> of DataStorage to read prev version file.
> ---
>
> Key: HDFS-3970
> URL: https://issues.apache.org/jira/browse/HDFS-3970
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Vinay
>Assignee: Andrew Wang
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-3970-1.patch, HDFS-3970.patch
>
>
> {code}// read attributes out of the VERSION file of previous directory
> DataStorage prevInfo = new DataStorage();
> prevInfo.readPreviousVersionProperties(bpSd);{code}
> In the above code snippet, a BlockPoolSliceStorage instance should be used; 
> otherwise rollback results in the 'storageType' property missing, since it is 
> not present in the initial VERSION file.
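
For illustration, a minimal sketch of the suggested change (assuming a 
suitable BlockPoolSliceStorage constructor and that it inherits 
readPreviousVersionProperties from the common Storage base class):

{code}
// read attributes out of the VERSION file of the previous directory,
// using the block-pool-level storage class instead of DataStorage
BlockPoolSliceStorage prevInfo = new BlockPoolSliceStorage();
prevInfo.readPreviousVersionProperties(bpSd);
{code}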

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4362) GetDelegationTokenResponseProto does not handle null token

2013-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546778#comment-13546778
 ] 

Hudson commented on HDFS-4362:
--

Integrated in Hadoop-Yarn-trunk #90 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/90/])
HDFS-4362. GetDelegationTokenResponseProto does not handle null token. 
Contributed by Suresh Srinivas. (Revision 1430137)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430137
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto


> GetDelegationTokenResponseProto does not handle null token
> --
>
> Key: HDFS-4362
> URL: https://issues.apache.org/jira/browse/HDFS-4362
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HDFS-4362.patch
>
>
> While working on HADOOP-9173, I noticed that 
> GetDelegationTokenResponseProto declares the token field as required. However, 
> a null token return is expected, both as defined in 
> FileSystem#getDelegationToken() and based on the HDFS implementation. This 
> jira intends to make the field optional.
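
For illustration, a minimal sketch of the proposed change (hypothetical 
rendering of the message; the actual definition lives in 
ClientNamenodeProtocol.proto):

{code}
message GetDelegationTokenResponseProto {
  optional hadoop.common.TokenProto token = 1;  // was: required
}
{code}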

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3970) BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead of DataStorage to read prev version file.

2013-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546864#comment-13546864
 ] 

Hudson commented on HDFS-3970:
--

Integrated in Hadoop-Hdfs-trunk #1279 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1279/])
HDFS-3970. Fix bug causing rollback of HDFS upgrade to result in bad 
VERSION file. Contributed by Vinay and Andrew Wang. (Revision 1430037)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430037
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSRollback.java


> BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead 
> of DataStorage to read prev version file.
> ---
>
> Key: HDFS-3970
> URL: https://issues.apache.org/jira/browse/HDFS-3970
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Vinay
>Assignee: Andrew Wang
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-3970-1.patch, HDFS-3970.patch
>
>
> {code}// read attributes out of the VERSION file of previous directory
> DataStorage prevInfo = new DataStorage();
> prevInfo.readPreviousVersionProperties(bpSd);{code}
> In the above code snippet, a BlockPoolSliceStorage instance should be used; 
> otherwise rollback results in the 'storageType' property missing, since it is 
> not present in the initial VERSION file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4362) GetDelegationTokenResponseProto does not handle null token

2013-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546867#comment-13546867
 ] 

Hudson commented on HDFS-4362:
--

Integrated in Hadoop-Hdfs-trunk #1279 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1279/])
HDFS-4362. GetDelegationTokenResponseProto does not handle null token. 
Contributed by Suresh Srinivas. (Revision 1430137)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430137
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto


> GetDelegationTokenResponseProto does not handle null token
> --
>
> Key: HDFS-4362
> URL: https://issues.apache.org/jira/browse/HDFS-4362
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HDFS-4362.patch
>
>
> While working on HADOOP-9173, I noticed that 
> GetDelegationTokenResponseProto declares the token field as required. However, 
> a null token return is expected, both as defined in 
> FileSystem#getDelegationToken() and based on the HDFS implementation. This 
> jira intends to make the field optional.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3970) BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead of DataStorage to read prev version file.

2013-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546899#comment-13546899
 ] 

Hudson commented on HDFS-3970:
--

Integrated in Hadoop-Mapreduce-trunk #1307 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1307/])
HDFS-3970. Fix bug causing rollback of HDFS upgrade to result in bad 
VERSION file. Contributed by Vinay and Andrew Wang. (Revision 1430037)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430037
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSRollback.java


> BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead 
> of DataStorage to read prev version file.
> ---
>
> Key: HDFS-3970
> URL: https://issues.apache.org/jira/browse/HDFS-3970
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Vinay
>Assignee: Andrew Wang
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-3970-1.patch, HDFS-3970.patch
>
>
> {code}// read attributes out of the VERSION file of previous directory
> DataStorage prevInfo = new DataStorage();
> prevInfo.readPreviousVersionProperties(bpSd);{code}
> In the above code snippet, a BlockPoolSliceStorage instance should be used; 
> otherwise rollback results in the 'storageType' property missing, since it is 
> not present in the initial VERSION file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4351) Fix BlockPlacementPolicyDefault#chooseTarget when avoiding stale nodes

2013-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546905#comment-13546905
 ] 

Hudson commented on HDFS-4351:
--

Integrated in Hadoop-Mapreduce-trunk #1307 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1307/])
HDFS-4351.  In BlockPlacementPolicyDefault.chooseTarget(..), numOfReplicas 
needs to be updated when avoiding stale nodes.  Contributed by Andrew Wang 
(Revision 1429653)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1429653
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java


> Fix BlockPlacementPolicyDefault#chooseTarget when avoiding stale nodes
> --
>
> Key: HDFS-4351
> URL: https://issues.apache.org/jira/browse/HDFS-4351
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 1.2.0, 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 1.2.0, 2.0.3-alpha
>
> Attachments: hdfs-4351-2.patch, hdfs-4351-3.patch, hdfs-4351-4.patch, 
> hdfs-4351-branch-1-1.patch, hdfs-4351.patch
>
>
> There's a bug in {{BlockPlacementPolicyDefault#chooseTarget}} with stale node 
> avoidance enabled (HDFS-3912). If a NotEnoughReplicasException is thrown in 
> the call to {{chooseRandom()}}, {{numOfReplicas}} is not updated together 
> with the partial result in {{result}}, since it is passed by value (see the 
> sketch below). The retry call to {{chooseTarget}} then uses this incorrect 
> value.
> This can be seen if you enable stale node detection for 
> {{TestReplicationPolicy#testChooseTargetWithMoreThanAvaiableNodes()}}.
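
For illustration, a self-contained sketch of the pass-by-value pitfall 
described above (hypothetical names, not the actual 
BlockPlacementPolicyDefault code):

{code}
import java.util.ArrayList;
import java.util.List;

public class PassByValueDemo {
  static void chooseRandom(int numOfReplicas, List<String> results) {
    results.add("node1");  // the partial result IS visible to the caller
    numOfReplicas--;       // but this update is lost on return
  }

  public static void main(String[] args) {
    int numOfReplicas = 3;
    List<String> results = new ArrayList<String>();
    chooseRandom(numOfReplicas, results);
    // numOfReplicas is still 3 here, so a retry would request too many
    System.out.println(numOfReplicas + " " + results);
  }
}
{code}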

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4362) GetDelegationTokenResponseProto does not handle null token

2013-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546906#comment-13546906
 ] 

Hudson commented on HDFS-4362:
--

Integrated in Hadoop-Mapreduce-trunk #1307 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1307/])
HDFS-4362. GetDelegationTokenResponseProto does not handle null token. 
Contributed by Suresh Srinivas. (Revision 1430137)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430137
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto


> GetDelegationTokenResponseProto does not handle null token
> --
>
> Key: HDFS-4362
> URL: https://issues.apache.org/jira/browse/HDFS-4362
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HDFS-4362.patch
>
>
> While working on HADOOP-9173, I noticed that 
> GetDelegationTokenResponseProto declares the token field as required. However, 
> a null token return is expected, both as defined in 
> FileSystem#getDelegationToken() and based on the HDFS implementation. This 
> jira intends to make the field optional.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3272) Make it possible to state MIME type for a webhdfs OPEN operation's result

2013-01-08 Thread Jeff Markham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Markham updated HDFS-3272:
---

Attachment: HDFS-3272.patch

Java, javadoc, and documentation patch.

> Make it possible to state MIME type for a webhdfs OPEN operation's result
> -
>
> Key: HDFS-3272
> URL: https://issues.apache.org/jira/browse/HDFS-3272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 1.0.1
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HDFS-3272.patch
>
>
> When you do a GET from the browser with webhdfs, you get the file, but it 
> comes over as binary because the browser doesn't know what type it is. A 
> MIME mapping table would be one solution, but another is simply to add a 
> {{mime}} query parameter that provides a string to be reflected back to the 
> caller as the Content-Type header in the HTTP response.
> e.g.
> {code}
> http://ranier:50070/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> would generate a 307 redirect to the datanode, with the 
> {code}
> http://dn1:50075/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> which would then generate the result
> {code}
> 200 OK
> Content-Type:text/csv
> GATE4,eb8bd736445f415e18886ba037f84829,55000,2007-01-14,14:01:54,
> GATE4,ec58edcce1049fa665446dc1fa690638,8030803000,2007-01-14,13:52:31,
> ...
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3272) Make it possible to state MIME type for a webhdfs OPEN operation's result

2013-01-08 Thread Jeff Markham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Markham updated HDFS-3272:
---

Status: Patch Available  (was: Open)

Submitted a patch to add MIME types to HttpFSServer, with the default MIME type 
being the existing application/octet-stream.

> Make it possible to state MIME type for a webhdfs OPEN operation's result
> -
>
> Key: HDFS-3272
> URL: https://issues.apache.org/jira/browse/HDFS-3272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 1.0.1
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HDFS-3272.patch
>
>
> When you do a GET from the browser with webhdfs, you get the file, but it 
> comes over as binary because the browser doesn't know what type it is. A 
> MIME mapping table would be one solution, but another is simply to add a 
> {{mime}} query parameter that provides a string to be reflected back to the 
> caller as the Content-Type header in the HTTP response.
> e.g.
> {code}
> http://ranier:50070/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> would generate a 307 redirect to the datanode, with the 
> {code}
> http://dn1:50075/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> which would then generate the result
> {code}
> 200 OK
> Content-Type:text/csv
> GATE4,eb8bd736445f415e18886ba037f84829,55000,2007-01-14,14:01:54,
> GATE4,ec58edcce1049fa665446dc1fa690638,8030803000,2007-01-14,13:52:31,
> ...
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3272) Make it possible to state MIME type for a webhdfs OPEN operation's result

2013-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546967#comment-13546967
 ] 

Hadoop QA commented on HDFS-3272:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563768/HDFS-3272.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3793//console

This message is automatically generated.

> Make it possible to state MIME type for a webhdfs OPEN operation's result
> -
>
> Key: HDFS-3272
> URL: https://issues.apache.org/jira/browse/HDFS-3272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 1.0.1
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HDFS-3272.patch
>
>
> When you do a GET from the browser with webhdfs, you get the file, but it 
> comes over as binary because the browser doesn't know what type it is. A 
> MIME mapping table would be one solution, but another is simply to add a 
> {{mime}} query parameter that provides a string to be reflected back to the 
> caller as the Content-Type header in the HTTP response.
> e.g.
> {code}
> http://ranier:50070/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> would generate a 307 redirect to the datanode, with the 
> {code}
> http://dn1:50075/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> which would then generate the result
> {code}
> 200 OK
> Content-Type:text/csv
> GATE4,eb8bd736445f415e18886ba037f84829,55000,2007-01-14,14:01:54,
> GATE4,ec58edcce1049fa665446dc1fa690638,8030803000,2007-01-14,13:52:31,
> ...
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3374) hdfs' TestDelegationToken fails intermittently with a race condition

2013-01-08 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547024#comment-13547024
 ] 

Todd Lipcon commented on HDFS-3374:
---

This is still only in branch-1 and not in trunk. Any plans to forward-port?

Also, jcarder noticed that this added a lock order inversion:
- FSNamesystem.saveNamespace (holding the FSN lock) calls 
DTSM.saveSecretManagerState (which takes the DTSM lock)
- ExpiredTokenRemover.run (holding the DTSM lock) calls rollMasterKey, which 
calls updateCurrentKey, which calls logUpdateMasterKey, which takes the FSN 
lock

So if there is a concurrent saveNamespace at the same time as the expired token 
remover runs, it might deadlock the NN (a minimal illustration follows).
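
A self-contained sketch of the inverted acquisition order (hypothetical lock 
objects, not the actual FSNamesystem/DelegationTokenSecretManager code):

{code}
public class LockOrderInversionDemo {
  static final Object fsnLock = new Object();   // stands in for the FSN lock
  static final Object dtsmLock = new Object();  // stands in for the DTSM lock

  public static void main(String[] args) {
    // saveNamespace path: FSN lock, then DTSM lock
    new Thread(() -> {
      synchronized (fsnLock) {
        pause();
        synchronized (dtsmLock) { }
      }
    }).start();
    // ExpiredTokenRemover path: DTSM lock, then FSN lock
    new Thread(() -> {
      synchronized (dtsmLock) {
        pause();
        synchronized (fsnLock) { }
      }
    }).start();
    // With this timing, each thread holds one lock and waits forever
    // for the other: a deadlock.
  }

  static void pause() {
    try { Thread.sleep(100); } catch (InterruptedException ignored) { }
  }
}
{code}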


> hdfs' TestDelegationToken fails intermittently with a race condition
> 
>
> Key: HDFS-3374
> URL: https://issues.apache.org/jira/browse/HDFS-3374
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 1.0.3
>
> Attachments: HDFS-3374-branch-1.0.patch, hdfs-3374.patch, 
> HDFS-3374.patch
>
>
> The test case fails because the MiniDFSCluster is shut down before the 
> secret manager can change the key, which calls System.exit with no edit 
> streams available.
> {code}
> [junit] 2012-05-04 15:03:51,521 WARN  common.Storage 
> (FSImage.java:updateRemovedDirs(224)) - Removing storage dir 
> /home/horton/src/hadoop/build/test/data/dfs/name1
> [junit] 2012-05-04 15:03:51,522 FATAL namenode.FSNamesystem 
> (FSEditLog.java:fatalExit(388)) - No edit streams are accessible
> [junit] java.lang.Exception: No edit streams are accessible
> [junit] at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.fatalExit(FSEditLog.java:388)
> [junit] at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.exitIfNoStreams(FSEditLog.java:407)
> [junit] at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.removeEditsAndStorageDir(FSEditLog.java:432)
> [junit] at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.removeEditsStreamsAndStorageDirs(FSEditLog.java:468)
> [junit] at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:1028)
> [junit] at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.logUpdateMasterKey(FSNamesystem.java:5641)
> [junit] at 
> org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSecretManager.logUpdateMasterKey(DelegationTokenSecretManager.java:286)
> [junit] at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.updateCurrentKey(AbstractDelegationTokenSecretManager.java:150)
> [junit] at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.rollMasterKey(AbstractDelegationTokenSecretManager.java:174)
> [junit] at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover.run(AbstractDelegationTokenSecretManager.java:385)
> [junit] at java.lang.Thread.run(Thread.java:662)
> [junit] Running org.apache.hadoop.hdfs.security.TestDelegationToken
> [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
> [junit] Test org.apache.hadoop.hdfs.security.TestDelegationToken FAILED 
> (crashed)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4363) Combine PBHelper and HdfsProtoUtil and remove redundant methods

2013-01-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547030#comment-13547030
 ] 

Suresh Srinivas commented on HDFS-4363:
---

Nicholas, it looks like I might have accidentally formatted PBHelper.java. I 
will revert those changes and address the other comments as well.

> Combine PBHelper and HdfsProtoUtil and remove redundant methods
> ---
>
> Key: HDFS-4363
> URL: https://issues.apache.org/jira/browse/HDFS-4363
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: HDFS-4363.patch, HDFS-4363.patch
>
>
> There are many methods overlapping between PBHelper and HdfsProtoUtil. This 
> jira combines these two helper classes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4261) TestBalancerWithNodeGroup times out

2013-01-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547034#comment-13547034
 ] 

Junping Du commented on HDFS-4261:
--

Hi Eli, with the v7 patch, TestBalancerWithNodeGroup always passes in my local 
environment, and I cannot reproduce ATM and Chris' issue (I have tried 30+ 
times already). I think at least four balancer issues have been identified and 
fixed here:
1. NoChangeIterations (which counts iterations with no block movement) was not 
working before. Compared with branch-1, the regression appears to have been 
introduced by NameNode Federation.
2. The balancer's balancing policy is static, so it needs to be cleaned up 
(reset) in every balancing iteration even though we create a new Balancer 
instance.
3. The checkReplicaPlacementPolicy() issue identified by ATM.
4. The loop in dispatchBlocks() could become infinite in some occasional cases.
+1 on adding the timeout annotation; I will add it in the v8 patch.

> TestBalancerWithNodeGroup times out
> ---
>
> Key: HDFS-4261
> URL: https://issues.apache.org/jira/browse/HDFS-4261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, 
> HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, 
> HDFS-4261-v7.patch, jstack-mac-18567, jstack-win-5488, 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac,
>  
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win
>
>
> When I manually ran TestBalancerWithNodeGroup, it always timed out on my 
> machine.  Looking at the Jenkins report [build 
> #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/],
>  TestBalancerWithNodeGroup somehow was skipped so that the problem was not 
> detected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4261) TestBalancerWithNodeGroup times out

2013-01-08 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-4261:
-

Attachment: HDFS-4261-v8.patch

> TestBalancerWithNodeGroup times out
> ---
>
> Key: HDFS-4261
> URL: https://issues.apache.org/jira/browse/HDFS-4261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, 
> HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, 
> HDFS-4261-v7.patch, HDFS-4261-v8.patch, jstack-mac-18567, jstack-win-5488, 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac,
>  
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win
>
>
> When I manually ran TestBalancerWithNodeGroup, it always timed out on my 
> machine.  Looking at the Jenkins report [build 
> #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/],
>  TestBalancerWithNodeGroup somehow was skipped so that the problem was not 
> detected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4261) TestBalancerWithNodeGroup times out

2013-01-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547044#comment-13547044
 ] 

Junping Du commented on HDFS-4261:
--

Chris, can you help verify it again in your environment? If the issue only 
happens on a specific platform, I think we can file a separate jira to track 
it, since issue #3 above is the blocking one.

> TestBalancerWithNodeGroup times out
> ---
>
> Key: HDFS-4261
> URL: https://issues.apache.org/jira/browse/HDFS-4261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, 
> HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, 
> HDFS-4261-v7.patch, HDFS-4261-v8.patch, jstack-mac-18567, jstack-win-5488, 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac,
>  
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win
>
>
> When I manually ran TestBalancerWithNodeGroup, it always timed out on my 
> machine.  Looking at the Jenkins report [build 
> #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/],
>  TestBalancerWithNodeGroup somehow was skipped so that the problem was not 
> detected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4363) Combine PBHelper and HdfsProtoUtil and remove redundant methods

2013-01-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4363:
--

Attachment: HDFS-4363.patch

I reverted the formatting changes. Except for replacing the switch statements, 
which I plan to do in a separate jira, the other changes are incorporated.

Nicholas, sorry for making you review an unnecessarily large patch due to 
Eclipse auto-formatting :-)

> Combine PBHelper and HdfsProtoUtil and remove redundant methods
> ---
>
> Key: HDFS-4363
> URL: https://issues.apache.org/jira/browse/HDFS-4363
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: HDFS-4363.patch, HDFS-4363.patch, HDFS-4363.patch
>
>
> There are many methods overlapping between PBHelper and HdfsProtoUtil. This 
> jira combines these two helper classes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4365) Add junit timeout to TestBalancerWithNodeGroup

2013-01-08 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-4365:
--

 Summary: Add junit timeout to TestBalancerWithNodeGroup
 Key: HDFS-4365
 URL: https://issues.apache.org/jira/browse/HDFS-4365
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Trivial
 Attachments: HDFS-4365.001.patch

TestBalancerWithNodeGroup should have a junit timeout so that when it fails, we 
can easily identify it.
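
For illustration, a minimal sketch of what such a JUnit 4 timeout looks like 
(hypothetical class name and timeout value):

{code}
import org.junit.Test;

public class TestWithTimeoutSketch {
  @Test(timeout = 60000)  // fail after 60s instead of hanging the build
  public void testBalancer() throws Exception {
    // ... test body ...
  }
}
{code}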

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4365) Add junit timeout to TestBalancerWithNodeGroup

2013-01-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4365:
---

Attachment: HDFS-4365.001.patch

> Add junit timeout to TestBalancerWithNodeGroup
> --
>
> Key: HDFS-4365
> URL: https://issues.apache.org/jira/browse/HDFS-4365
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Trivial
> Attachments: HDFS-4365.001.patch
>
>
> TestBalancerWithNodeGroup should have a junit timeout so that when it fails, 
> we can easily identify it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4365) Add junit timeout to TestBalancerWithNodeGroup

2013-01-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4365:
---

Status: Patch Available  (was: Open)

> Add junit timeout to TestBalancerWithNodeGroup
> --
>
> Key: HDFS-4365
> URL: https://issues.apache.org/jira/browse/HDFS-4365
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Trivial
> Attachments: HDFS-4365.001.patch
>
>
> TestBalancerWithNodeGroup should have a junit timeout so that when it fails, 
> we can easily identify it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4350) Make enabling of stale marking on read and write paths independent

2013-01-08 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4350:
--

Attachment: hdfs-4350-3.patch

Thanks for the review, Todd; I think I hit all your feedback. Config strings 
are still unfortunately hardcoded in the javadoc comments, but I don't know how 
to avoid that.

> Make enabling of stale marking on read and write paths independent
> --
>
> Key: HDFS-4350
> URL: https://issues.apache.org/jira/browse/HDFS-4350
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-4350-1.patch, hdfs-4350-2.patch, hdfs-4350-3.patch
>
>
> Marking of datanodes as stale for the read and write path was introduced in 
> HDFS-3703 and HDFS-3912 respectively. This is enabled using two new keys, 
> {{DFS_NAMENODE_CHECK_STALE_DATANODE_KEY}} and 
> {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY}}. However, there currently 
> exists a dependency: you cannot enable write marking without also enabling 
> read marking, because the first key enables both staleness checking and read 
> marking.
> I propose renaming the first key to 
> {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_READ_KEY}}, and making checking 
> enabled if either of the keys is set. This will allow read and write marking 
> to be enabled independently.
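
For illustration, a minimal sketch of the proposed either-key logic 
(hypothetical variable names; assumes the renamed READ key from this proposal):

{code}
boolean avoidStaleForRead = conf.getBoolean(
    DFSConfigKeys.DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_READ_KEY, false);
boolean avoidStaleForWrite = conf.getBoolean(
    DFSConfigKeys.DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY, false);
// staleness checking is enabled if either the read or write path needs it
boolean checkStaleness = avoidStaleForRead || avoidStaleForWrite;
{code}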

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4350) Make enabling of stale marking on read and write paths independent

2013-01-08 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547111#comment-13547111
 ] 

Jing Zhao commented on HDFS-4350:
-

bq. Config strings are still unfortunately hardcoded in the javadoc comments

How about using {@link DFSConfigKeys#_KEY}?
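
For example, a hypothetical key reference rendered that way (assumed constant 
name, for illustration only):

{code}
/**
 * Enables write-path stale-node avoidance.  Linking the constant keeps the
 * javadoc in sync if the key is ever renamed:
 * {@link DFSConfigKeys#DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY}
 */
{code}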

> Make enabling of stale marking on read and write paths independent
> --
>
> Key: HDFS-4350
> URL: https://issues.apache.org/jira/browse/HDFS-4350
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-4350-1.patch, hdfs-4350-2.patch, hdfs-4350-3.patch
>
>
> Marking of datanodes as stale for the read and write path was introduced in 
> HDFS-3703 and HDFS-3912 respectively. This is enabled using two new keys, 
> {{DFS_NAMENODE_CHECK_STALE_DATANODE_KEY}} and 
> {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY}}. However, there currently 
> exists a dependency: you cannot enable write marking without also enabling 
> read marking, because the first key enables both staleness checking and read 
> marking.
> I propose renaming the first key to 
> {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_READ_KEY}}, and making checking 
> enabled if either of the keys is set. This will allow read and write marking 
> to be enabled independently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3272) Make it possible to state MIME type for a webhdfs OPEN operation's result

2013-01-08 Thread Jeff Markham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Markham updated HDFS-3272:
---

Affects Version/s: 2.0.2-alpha

> Make it possible to state MIME type for a webhdfs OPEN operation's result
> -
>
> Key: HDFS-3272
> URL: https://issues.apache.org/jira/browse/HDFS-3272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 1.0.1, 2.0.2-alpha
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HDFS-3272.patch
>
>
> When you do a GET from the browser with webhdfs, you get the file, but it 
> comes over as binary because the browser doesn't know what type it is. A 
> MIME mapping table would be one solution, but another is simply to add a 
> {{mime}} query parameter that provides a string to be reflected back to the 
> caller as the Content-Type header in the HTTP response.
> e.g.
> {code}
> http://ranier:50070/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> would generate a 307 redirect to the datanode, with the 
> {code}
> http://dn1:50075/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> which would then generate the result
> {code}
> 200 OK
> Content-Type:text/csv
> GATE4,eb8bd736445f415e18886ba037f84829,55000,2007-01-14,14:01:54,
> GATE4,ec58edcce1049fa665446dc1fa690638,8030803000,2007-01-14,13:52:31,
> ...
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3272) Make it possible to state MIME type for a webhdfs OPEN operation's result

2013-01-08 Thread Jeff Markham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Markham updated HDFS-3272:
---

Attachment: (was: HDFS-3272.patch)

> Make it possible to state MIME type for a webhdfs OPEN operation's result
> -
>
> Key: HDFS-3272
> URL: https://issues.apache.org/jira/browse/HDFS-3272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 1.0.1, 2.0.2-alpha
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HDFS-3272.patch
>
>
> When you do a GET from the browser with webhdfs, you get the file, but it 
> comes over as binary because the browser doesn't know what type it is. A 
> MIME mapping table would be one solution, but another is simply to add a 
> {{mime}} query parameter that provides a string to be reflected back to the 
> caller as the Content-Type header in the HTTP response.
> e.g.
> {code}
> http://ranier:50070/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> would generate a 307 redirect to the datanode, with the 
> {code}
> http://dn1:50075/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> which would then generate the result
> {code}
> 200 OK
> Content-Type:text/csv
> GATE4,eb8bd736445f415e18886ba037f84829,55000,2007-01-14,14:01:54,
> GATE4,ec58edcce1049fa665446dc1fa690638,8030803000,2007-01-14,13:52:31,
> ...
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3272) Make it possible to state MIME type for a webhdfs OPEN operation's result

2013-01-08 Thread Jeff Markham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Markham updated HDFS-3272:
---

Attachment: HDFS-3272.patch

> Make it possible to state MIME type for a webhdfs OPEN operation's result
> -
>
> Key: HDFS-3272
> URL: https://issues.apache.org/jira/browse/HDFS-3272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 1.0.1, 2.0.2-alpha
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HDFS-3272.patch
>
>
> When you do a GET from the browser with webhdfs, you get the file, but it 
> comes over as binary because the browser doesn't know what type it is. A 
> MIME mapping table would be one solution, but another is simply to add a 
> {{mime}} query parameter that provides a string to be reflected back to the 
> caller as the Content-Type header in the HTTP response.
> e.g.
> {code}
> http://ranier:50070/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> would generate a 307 redirect to the datanode, with the 
> {code}
> http://dn1:50075/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> which would then generate the result
> {code}
> 200 OK
> Content-Type:text/csv
> GATE4,eb8bd736445f415e18886ba037f84829,55000,2007-01-14,14:01:54,
> GATE4,ec58edcce1049fa665446dc1fa690638,8030803000,2007-01-14,13:52:31,
> ...
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4261) TestBalancerWithNodeGroup times out

2013-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547143#comment-13547143
 ] 

Hadoop QA commented on HDFS-4261:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563782/HDFS-4261-v8.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3794//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3794//console

This message is automatically generated.

> TestBalancerWithNodeGroup times out
> ---
>
> Key: HDFS-4261
> URL: https://issues.apache.org/jira/browse/HDFS-4261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, 
> HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, 
> HDFS-4261-v7.patch, HDFS-4261-v8.patch, jstack-mac-18567, jstack-win-5488, 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac,
>  
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win
>
>
> When I manually ran TestBalancerWithNodeGroup, it always timed out on my 
> machine.  Looking at the Jenkins report [build 
> #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/],
>  TestBalancerWithNodeGroup somehow was skipped so that the problem was not 
> detected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4032) Specify the charset explicitly rather than rely on the default

2013-01-08 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547169#comment-13547169
 ] 

Eli Collins commented on HDFS-4032:
---

I don't think this is an issue because we don't run on JVMs that don't support 
UTF-8 (hence the assertion). Does that make sense?

> Specify the charset explicitly rather than rely on the default
> --
>
> Key: HDFS-4032
> URL: https://issues.apache.org/jira/browse/HDFS-4032
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-4032.txt
>
>
> Findbugs 2 warns about relying on the default Java charset instead of 
> specifying it explicitly. Given that we're porting Hadoop to different 
> platforms it's better to be explicit.
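
For illustration, the explicit-charset idiom the warning asks for, using the 
standard Java 7 API (hypothetical example string):

{code}
import java.nio.charset.StandardCharsets;

public class ExplicitCharset {
  public static void main(String[] args) {
    // explicit charset instead of the platform default
    byte[] bytes = "hdfs".getBytes(StandardCharsets.UTF_8);
    System.out.println(new String(bytes, StandardCharsets.UTF_8));
  }
}
{code}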

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4261) TestBalancerWithNodeGroup times out

2013-01-08 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547176#comment-13547176
 ] 

Chris Nauroth commented on HDFS-4261:
-

+1 for the v8 patch.

I tested it on Windows, and I couldn't repro the infinite loop this time.  I 
don't know that it's completely resolved, but it's certainly passing more 
consistently than current trunk.


> TestBalancerWithNodeGroup times out
> ---
>
> Key: HDFS-4261
> URL: https://issues.apache.org/jira/browse/HDFS-4261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, 
> HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, 
> HDFS-4261-v7.patch, HDFS-4261-v8.patch, jstack-mac-18567, jstack-win-5488, 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac,
>  
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win
>
>
> When I manually ran TestBalancerWithNodeGroup, it always timed out on my 
> machine.  Looking at the Jenkins report [build 
> #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/],
>  TestBalancerWithNodeGroup somehow was skipped so that the problem was not 
> detected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4030) BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount should be AtomicLongs

2013-01-08 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547177#comment-13547177
 ] 

Eli Collins commented on HDFS-4030:
---

Thanks ATM. I've committed this to trunk. Leaving this open until I can merge 
to branch-2 (which is currently broken).

> BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount should 
> be AtomicLongs
> --
>
> Key: HDFS-4030
> URL: https://issues.apache.org/jira/browse/HDFS-4030
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-4030.txt, hdfs-4030.txt
>
>
> The BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount 
> fields are currently volatile longs which are incremented, which isn't thread 
> safe. It looks like they're always incremented on paths that hold the NN 
> write lock but it would be easier and less error prone for future changes if 
> we made them AtomicLongs. The other volatile long members are just set in one 
> thread and read in another so they're fine as is.
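
As a hedged illustration of the race described above (not BlockManager's 
actual code):

{code}
import java.util.concurrent.atomic.AtomicLong;

class CounterSketch {
    // Not thread safe: ++ on a volatile is a read-modify-write, so two
    // threads incrementing concurrently can lose an update.
    private volatile long excessBlocksCount;
    void unsafeIncrement() { excessBlocksCount++; }

    // Thread safe regardless of which locks the callers happen to hold.
    private final AtomicLong excessBlocks = new AtomicLong();
    void safeIncrement() { excessBlocks.incrementAndGet(); }
}
{code}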

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4253) block replica reads get hot-spots due to NetworkTopology#pseudoSortByDistance

2013-01-08 Thread Andy Isaacson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Isaacson updated HDFS-4253:


Attachment: hdfs4253-6.txt

Attaching new patch -6 with improved comment.

> block replica reads get hot-spots due to NetworkTopology#pseudoSortByDistance
> -
>
> Key: HDFS-4253
> URL: https://issues.apache.org/jira/browse/HDFS-4253
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
> Attachments: hdfs4253-1.txt, hdfs4253-2.txt, hdfs4253-3.txt, 
> hdfs4253-4.txt, hdfs4253-5.txt, hdfs4253-6.txt, hdfs4253.txt
>
>
> When many nodes (10) read from the same block simultaneously, we get 
> asymmetric distribution of read load.  This can result in slow block reads 
> when one replica is serving most of the readers and the other replicas are 
> idle.  The busy DN bottlenecks on its network link.
> This is especially visible with large block sizes and high replica counts (I 
> reproduced the problem with {{-Ddfs.block.size=4294967296}} and replication 
> 5), but the same behavior happens on a small scale with normal-sized blocks 
> and replication=3.
> The root of the problem is in {{NetworkTopology#pseudoSortByDistance}} which 
> explicitly does not try to spread traffic among replicas in a given rack -- 
> it only randomizes usage for off-rack replicas.
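
One hedged sketch of the kind of fix this implies -- shuffle the group of 
equally-near replicas instead of leaving them in a fixed order (names and 
structure here are illustrative, not the actual {{NetworkTopology}} API):

{code}
import java.util.Collections;
import java.util.List;
import java.util.Random;

class ReplicaOrderingSketch {
    // After ordering nodes so the first localRackCount entries are the
    // local-rack replicas, randomize within that group so a single replica
    // does not absorb every reader on the rack.
    static void spreadLocalRackLoad(List<String> nodes, int localRackCount,
            Random rand) {
        if (localRackCount > 1) {
            Collections.shuffle(nodes.subList(0, localRackCount), rand);
        }
    }
}
{code}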

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4031) Update findbugsExcludeFile.xml to include findbugs 2 exclusions

2013-01-08 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547187#comment-13547187
 ] 

Eli Collins commented on HDFS-4031:
---

Thanks ATM. I've committed this to trunk. Leaving this open until I can merge 
to branch-2 (which is currently broken).


> Update findbugsExcludeFile.xml to include findbugs 2 exclusions
> ---
>
> Key: HDFS-4031
> URL: https://issues.apache.org/jira/browse/HDFS-4031
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-4031.txt
>
>
> Findbugs 2 warns about some volatile increments (VO_VOLATILE_INCREMENT) that 
> unlike HDFS-4029 and HDFS-4030 are less problematic:
> - numFailedVolumes is only incremented in one thread and that access is 
> synchronized
> - pendingReceivedRequests in BPServiceActor is clearly synchronized
> It would be reasonable to make these Atomics as well, but I think their uses 
> are clearly correct, so I figured for these the warning was more obviously 
> bogus and could be ignored.
> There's also a SE_BAD_FIELD_INNER_CLASS warning (LocalDatanodeInfo's 
> anonymous class is serializable but it is not) in BPServiceActor that is OK 
> to ignore since we don't serialize LocalDatanodeInfo.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4033) Miscellaneous findbugs 2 fixes

2013-01-08 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547199#comment-13547199
 ] 

Eli Collins commented on HDFS-4033:
---

Test failure is unrelated (confirmed running locally for sanity).

Thanks for the review ATM. I've committed this to trunk. Leaving this open 
until I can merge to branch-2 (which is currently broken).


> Miscellaneous findbugs 2 fixes
> --
>
> Key: HDFS-4033
> URL: https://issues.apache.org/jira/browse/HDFS-4033
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-4033.txt, hdfs-4033.txt
>
>
> Fix some miscellaneous findbugs 2 warnings:
> - Switch statements missing default cases
> - Using \n instead of %n in format methods
> - A socket close that should use IOUtils#closeSocket that we missed
> - A use of SimpleDateFormat that is not threadsafe
> - In ReplicaInputStreams it's not clear that we always close the streams we 
> allocate, moving the stream creation into the class where we close them makes 
> that more obvious
> - A couple missing null checks
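
Two of the items above lend themselves to a short illustration; this is the 
general pattern, not the patch's actual code:

{code}
import java.text.SimpleDateFormat;
import java.util.Date;

class FindbugsFixSketch {
    // %n emits the platform line separator; a literal \n inside a format
    // string is what findbugs 2 flags.
    static String banner(String name) {
        return String.format("Starting %s%n", name);
    }

    // SimpleDateFormat is not thread safe; one common fix is a per-thread
    // instance.
    private static final ThreadLocal<SimpleDateFormat> DATE_FORMAT =
        new ThreadLocal<SimpleDateFormat>() {
            @Override protected SimpleDateFormat initialValue() {
                return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
            }
        };

    static String now() {
        return DATE_FORMAT.get().format(new Date());
    }
}
{code}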

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4365) Add junit timeout to TestBalancerWithNodeGroup

2013-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547209#comment-13547209
 ] 

Hadoop QA commented on HDFS-4365:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563789/HDFS-4365.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3796//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3796//console

This message is automatically generated.

> Add junit timeout to TestBalancerWithNodeGroup
> --
>
> Key: HDFS-4365
> URL: https://issues.apache.org/jira/browse/HDFS-4365
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Trivial
> Attachments: HDFS-4365.001.patch
>
>
> TestBalancerWithNodeGroup should have a junit timeout so that when it fails, 
> we can easily identify it.
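
The mechanism is just JUnit 4's per-test timeout; a hedged sketch (test name 
and value are illustrative):

{code}
import org.junit.Test;

public class TimeoutSketch {
    // With a timeout, a hang surfaces as a named test failure instead of a
    // build-level kill with no indication of which test was stuck.
    @Test(timeout = 60000) // milliseconds
    public void testBalancerScenario() throws Exception {
        // ... test body ...
    }
}
{code}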

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4363) Combine PBHelper and HdfsProtoUtil and remove redundant methods

2013-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547225#comment-13547225
 ] 

Hadoop QA commented on HDFS-4363:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563788/HDFS-4363.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3795//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3795//console

This message is automatically generated.

> Combine PBHelper and HdfsProtoUtil and remove redundant methods
> ---
>
> Key: HDFS-4363
> URL: https://issues.apache.org/jira/browse/HDFS-4363
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: HDFS-4363.patch, HDFS-4363.patch, HDFS-4363.patch
>
>
> There are many methods overlapping between PBHelper and HdfsProtoUtil. This 
> jira combines these two helper classes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4031) Update findbugsExcludeFile.xml to include findbugs 2 exclusions

2013-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547231#comment-13547231
 ] 

Hudson commented on HDFS-4031:
--

Integrated in Hadoop-trunk-Commit #3193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3193/])
HDFS-4031. Update findbugsExcludeFile.xml to include findbugs 2 exclusions. 
Contributed by Eli Collins (Revision 1430468)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430468
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml


> Update findbugsExcludeFile.xml to include findbugs 2 exclusions
> ---
>
> Key: HDFS-4031
> URL: https://issues.apache.org/jira/browse/HDFS-4031
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-4031.txt
>
>
> Findbugs 2 warns about some volatile increments (VO_VOLATILE_INCREMENT) that 
> unlike HDFS-4029 and HDFS-4030 are less problematic:
> - numFailedVolumes is only incremented in one thread and that access is 
> synchronized
> - pendingReceivedRequests in BPServiceActor is clearly synchronized
> It would be reasonable to make these Atomics as well, but I think their uses 
> are clearly correct, so I figured for these the warning was more obviously 
> bogus and could be ignored.
> There's also a SE_BAD_FIELD_INNER_CLASS warning (LocalDatanodeInfo's 
> anonymous class is serializable but it is not) in BPServiceActor that is OK 
> to ignore since we don't serialize LocalDatanodeInfo.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4030) BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount should be AtomicLongs

2013-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547230#comment-13547230
 ] 

Hudson commented on HDFS-4030:
--

Integrated in Hadoop-trunk-Commit #3193 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3193/])
HDFS-4030. BlockManager excessBlocksCount and 
postponedMisreplicatedBlocksCount should be AtomicLongs. Contributed by Eli 
Collins (Revision 1430462)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430462
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


> BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount should 
> be AtomicLongs
> --
>
> Key: HDFS-4030
> URL: https://issues.apache.org/jira/browse/HDFS-4030
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-4030.txt, hdfs-4030.txt
>
>
> The BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount 
> fields are currently volatile longs which are incremented, which isn't thread 
> safe. It looks like they're always incremented on paths that hold the NN 
> write lock but it would be easier and less error prone for future changes if 
> we made them AtomicLongs. The other volatile long members are just set in one 
> thread and read in another so they're fine as is.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4353) Encapsulate connections to peers in Peer and PeerServer classes

2013-01-08 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-4353:
--

   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Colin.

> Encapsulate connections to peers in Peer and PeerServer classes
> ---
>
> Key: HDFS-4353
> URL: https://issues.apache.org/jira/browse/HDFS-4353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, hdfs-client
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0
>
> Attachments: 02b-cumulative.patch, 02c.patch, 02c.patch, 
> 02-cumulative.patch, 02d.patch, 02e.patch, 02f.patch
>
>
> Encapsulate connections to peers into the {{Peer}} and {{PeerServer}} 
> classes.  Since many Java classes may be involved with these connections, it 
> makes sense to create a container for them.  For example, a connection to a 
> peer may have an input stream, output stream, readablebytechannel, encrypted 
> output stream, and encrypted input stream associated with it.
> This makes us less dependent on the {{NetUtils}} methods which use 
> {{instanceof}} to manipulate socket and stream states based on the runtime 
> type.  it also paves the way to introduce UNIX domain sockets which don't 
> inherit from {{java.net.Socket}}.
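
A hedged sketch of the shape of such an abstraction -- not the committed 
interface, just the idea of bundling one connection's resources so callers 
stop probing runtime types:

{code}
import java.io.Closeable;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// One object per connection: the streams travel with it, an encrypted or
// UNIX-domain implementation can slot in, and nothing needs to extend
// java.net.Socket or be unwrapped via instanceof.
interface PeerSketch extends Closeable {
    InputStream getInputStream() throws IOException;
    OutputStream getOutputStream() throws IOException;
    String getRemoteAddressString();
}
{code}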

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4350) Make enabling of stale marking on read and write paths independent

2013-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547242#comment-13547242
 ] 

Hadoop QA commented on HDFS-4350:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563793/hdfs-4350-3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3797//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3797//console

This message is automatically generated.

> Make enabling of stale marking on read and write paths independent
> --
>
> Key: HDFS-4350
> URL: https://issues.apache.org/jira/browse/HDFS-4350
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hdfs-4350-1.patch, hdfs-4350-2.patch, hdfs-4350-3.patch
>
>
> Marking of datanodes as stale for the read and write path was introduced in 
> HDFS-3703 and HDFS-3912 respectively. This is enabled using two new keys, 
> {{DFS_NAMENODE_CHECK_STALE_DATANODE_KEY}} and 
> {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY}}. However, there currently 
> exists a dependency, since you cannot enable write marking without also 
> enabling read marking, since the first key enables both checking of staleness 
> and read marking.
> I propose renaming the first key to 
> {{DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_READ_KEY}}, and make checking enabled 
> if either of the keys are set. This will allow read and write marking to be 
> enabled independently.
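
A hedged sketch of the proposed semantics (constant names here follow the 
proposal above; the final patch may differ):

{code}
import org.apache.hadoop.conf.Configuration;

class StaleCheckSketch {
    // Staleness checking turns on if either marking path is enabled, so
    // write-path avoidance no longer drags in read-path avoidance.
    static boolean staleCheckEnabled(Configuration conf) {
        boolean avoidForRead =
            conf.getBoolean("dfs.namenode.avoid.read.stale.datanode", false);
        boolean avoidForWrite =
            conf.getBoolean("dfs.namenode.avoid.write.stale.datanode", false);
        return avoidForRead || avoidForWrite;
    }
}
{code}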

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4354) Create DomainSocket and DomainPeer and associated unit tests

2013-01-08 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547247#comment-13547247
 ] 

Todd Lipcon commented on HDFS-4354:
---

bq. It doesn't matter, since the STATUS_CLOSED_MASK bit won't make a number 
negative.

I agree, the results are the same, but conceptually it makes more sense...

Can you post a non-consolidated patch here for test-patch?

> Create DomainSocket and DomainPeer and associated unit tests
> 
>
> Key: HDFS-4354
> URL: https://issues.apache.org/jira/browse/HDFS-4354
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, hdfs-client, performance
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: 03-cumulative.patch
>
>
> Create {{DomainSocket}}, a JNI class which provides UNIX domain sockets 
> functionality in Java.  Also create {{DomainPeer}}, {{DomainPeerServer}}.  
> This change also adds a unit test as well as {{TemporarySocketDirectory}}.
> Finally, this change adds a few C utility methods for handling JNI 
> exceptions, such as {{newException}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4353) Encapsulate connections to peers in Peer and PeerServer classes

2013-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547249#comment-13547249
 ] 

Hudson commented on HDFS-4353:
--

Integrated in Hadoop-trunk-Commit #3194 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3194/])
HDFS-4353. Encapsulate connections to peers in Peer and PeerServer classes. 
Contributed by Colin Patrick McCabe. (Revision 1430507)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430507
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketInputStream.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketOutputStream.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/PeerCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader2.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/SocketCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/BasicInetPeer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/EncryptedPeer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/NioInetPeer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/Peer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/PeerServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/TcpPeerServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientBlockVerification.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestConnCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferKeepalive.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDisableConnCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPeerCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSocketCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java


> Encapsulate connections to peers in Peer and PeerServer classes
> ---
>
> Key: HDFS-4353
> URL: https://issues.apache.org/jira/browse/HDFS-4353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, hdfs-client
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0
>
> Attachments: 02b-cumulative.patch, 02c.p

[jira] [Updated] (HDFS-4354) Create DomainSocket and DomainPeer and associated unit tests

2013-01-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4354:
---

Attachment: 03b.patch

non-consolidated version

> Create DomainSocket and DomainPeer and associated unit tests
> 
>
> Key: HDFS-4354
> URL: https://issues.apache.org/jira/browse/HDFS-4354
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, hdfs-client, performance
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: 03b.patch, 03-cumulative.patch
>
>
> Create {{DomainSocket}}, a JNI class which provides UNIX domain sockets 
> functionality in Java.  Also create {{DomainPeer}}, {{DomainPeerServer}}.  
> This change also adds a unit test as well as {{TemporarySocketDirectory}}.
> Finally, this change adds a few C utility methods for handling JNI 
> exceptions, such as {{newException}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4100) Fix all findbugs security warnings

2013-01-08 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-4100:
--

   Resolution: Fixed
Fix Version/s: (was: 3.0.0)
   2.0.3-alpha
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've merged this to branch-2.

> Fix all findbugs security warnings
> 
>
> Key: HDFS-4100
> URL: https://issues.apache.org/jira/browse/HDFS-4100
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, journal-node, security
>Affects Versions: 1.1.0, 0.23.4, 3.0.0, 2.0.2-alpha
>Reporter: liang xie
>Assignee: liang xie
> Fix For: 2.0.3-alpha
>
> Attachments: HDFS-4100-findbugs.xml, HDFS-4100.patch
>
>
> There are potential XSS risks due to lack of HTML escaping
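
The usual fix is escaping on output; a hedged sketch using commons-lang 
(which Hadoop already ships), not the attached patch's exact code:

{code}
import org.apache.commons.lang.StringEscapeUtils;

class EscapeSketch {
    // Reflecting a request parameter into HTML unescaped is the XSS risk;
    // escape at the point of output.
    static String safeHtml(String userSupplied) {
        return userSupplied == null
            ? "" : StringEscapeUtils.escapeHtml(userSupplied);
    }
}
{code}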

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4261) TestBalancerWithNodeGroup times out

2013-01-08 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-4261:
-

Attachment: test-balancer-with-node-group-timeout.txt

I just looped all of the balancer tests on my machine for an hour and a half 
and did end up with one timeout in TestBalancerWithNodeGroup. I'm attaching the 
thread dump to this JIRA.

Despite this, I think we should probably go ahead and commit this patch and 
file a new JIRA for this intermittent failure. This latest patch definitely 
fixes a few issues in the balancer, improves the balancer tests, and makes the 
tests fail much less frequently.

Unless anyone objects, I'll commit this patch later today and file a new JIRA 
for the intermittent failure.

> TestBalancerWithNodeGroup times out
> ---
>
> Key: HDFS-4261
> URL: https://issues.apache.org/jira/browse/HDFS-4261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, 
> HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, 
> HDFS-4261-v7.patch, HDFS-4261-v8.patch, jstack-mac-18567, jstack-win-5488, 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac,
>  
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win,
>  test-balancer-with-node-group-timeout.txt
>
>
> When I manually ran TestBalancerWithNodeGroup, it always timed out in my 
> machine.  Looking at the Jenkins report [build 
> #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/],
>  TestBalancerWithNodeGroup somehow was skipped so that the problem was not 
> detected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4032) Specify the charset explicitly rather than rely on the default

2013-01-08 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547284#comment-13547284
 ] 

Aaron T. Myers commented on HDFS-4032:
--

Makes sense to me. Thanks for the explanation.

+1

> Specify the charset explicitly rather than rely on the default
> --
>
> Key: HDFS-4032
> URL: https://issues.apache.org/jira/browse/HDFS-4032
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-4032.txt
>
>
> Findbugs 2 warns about relying on the default Java charset instead of 
> specifying it explicitly. Given that we're porting Hadoop to different 
> platforms it's better to be explicit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4032) Specify the charset explicitly rather than rely on the default

2013-01-08 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547293#comment-13547293
 ] 

Todd Lipcon commented on HDFS-4032:
---

Agreed -- all JVMs by spec must support UTF-8.

> Specify the charset explicitly rather than rely on the default
> --
>
> Key: HDFS-4032
> URL: https://issues.apache.org/jira/browse/HDFS-4032
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-4032.txt
>
>
> Findbugs 2 warns about relying on the default Java charset instead of 
> specifying it explicitly. Given that we're porting Hadoop to different 
> platforms it's better to be explicit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3272) Make it possible to state MIME type for a webhdfs OPEN operation's result

2013-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547296#comment-13547296
 ] 

Hadoop QA commented on HDFS-3272:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563796/HDFS-3272.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3798//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3798//console

This message is automatically generated.

> Make it possible to state MIME type for a webhdfs OPEN operation's result
> -
>
> Key: HDFS-3272
> URL: https://issues.apache.org/jira/browse/HDFS-3272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 1.0.1, 2.0.2-alpha
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HDFS-3272.patch
>
>
> when you do a GET from the browser with webhdfs, you get the file, but it 
> comes over as a binary as the browser doesn't know what type it is. Having a 
> mime mapping table and such like would be one solution, but another is simply 
> to add a {{mime}} query parameter that would provide a string to be reflected 
> back to the caller as the Content-Type header in the HTTP response.
> e.g.
> {code}
> http://ranier:50070/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> would generate a 307 redirect to the datanode, with the 
> {code}
> http://dn1:50075/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> which would then generate the result
> {code}
> 200 OK
> Content-Type:text/csv
> GATE4,eb8bd736445f415e18886ba037f84829,55000,2007-01-14,14:01:54,
> GATE4,ec58edcce1049fa665446dc1fa690638,8030803000,2007-01-14,13:52:31,
> ...
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4033) Miscellaneous findbugs 2 fixes

2013-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547300#comment-13547300
 ] 

Hudson commented on HDFS-4033:
--

Integrated in Hadoop-trunk-Commit #3195 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3195/])
HDFS-4033. Miscellaneous findbugs 2 fixes. Contributed by Eli Collins 
(Revision 1430534)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430534
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/ReplicaInputStreams.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/ReceivedDeletedBlockInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/RemoteEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/JMXGet.java


> Miscellaneous findbugs 2 fixes
> --
>
> Key: HDFS-4033
> URL: https://issues.apache.org/jira/browse/HDFS-4033
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-4033.txt, hdfs-4033.txt
>
>
> Fix some miscellaneous findbugs 2 warnings:
> - Switch statements missing default cases
> - Using \n instead of %n in format methods
> - A socket close that should use IOUtils#closeSocket that we missed
> - A use of SimpleDateFormat that is not threadsafe
> - In ReplicaInputStreams it's not clear that we always close the streams we 
> allocate, moving the stream creation into the class where we close them makes 
> that more obvious
> - A couple missing null checks

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4261) TestBalancerWithNodeGroup times out

2013-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547303#comment-13547303
 ] 

Hadoop QA commented on HDFS-4261:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12563827/test-balancer-with-node-group-timeout.txt
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3801//console

This message is automatically generated.

> TestBalancerWithNodeGroup times out
> ---
>
> Key: HDFS-4261
> URL: https://issues.apache.org/jira/browse/HDFS-4261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, 
> HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, 
> HDFS-4261-v7.patch, HDFS-4261-v8.patch, jstack-mac-18567, jstack-win-5488, 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac,
>  
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win,
>  test-balancer-with-node-group-timeout.txt
>
>
> When I manually ran TestBalancerWithNodeGroup, it always timed out in my 
> machine.  Looking at the Jenkins report [build 
> #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/],
>  TestBalancerWithNodeGroup somehow was skipped so that the problem was not 
> detected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4356) BlockReaderLocal should use passed file descriptors rather than paths

2013-01-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4356:
---

Attachment: 04f-cumulative.patch

* remember to turn off socket path validation during junit tests where we 
create sockets in /tmp

* add findbugs exclude

* {{FileInputStreamCache#toString}}: use StringBuilder

> BlockReaderLocal should use passed file descriptors rather than paths
> -
>
> Key: HDFS-4356
> URL: https://issues.apache.org/jira/browse/HDFS-4356
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, hdfs-client, performance
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: 04b-cumulative.patch, 04-cumulative.patch, 
> 04d-cumulative.patch, 04f-cumulative.patch
>
>
> {{BlockReaderLocal}} should use file descriptors passed over UNIX domain 
> sockets rather than paths.  We also need some configuration options for these 
> UNIX domain sockets.
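
A hedged sketch of the client-side consequence (plain JDK API; the actual 
descriptor transfer happens in the DomainSocket code):

{code}
import java.io.FileDescriptor;
import java.io.FileInputStream;

class FdReadSketch {
    // Once the datanode hands over an already-open descriptor, the client
    // reads the block through it directly and never needs filesystem
    // permissions on (or knowledge of) the block file's path.
    static FileInputStream openReceived(FileDescriptor fd) {
        return new FileInputStream(fd);
    }
}
{code}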

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4030) BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount should be AtomicLongs

2013-01-08 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-4030:
--

  Resolution: Fixed
   Fix Version/s: 2.0.3-alpha
Target Version/s:   (was: 2.0.3-alpha)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've merged this to branch-2.

> BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount should 
> be AtomicLongs
> --
>
> Key: HDFS-4030
> URL: https://issues.apache.org/jira/browse/HDFS-4030
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-4030.txt, hdfs-4030.txt
>
>
> The BlockManager excessBlocksCount and postponedMisreplicatedBlocksCount 
> fields are currently volatile longs which are incremented, which isn't thread 
> safe. It looks like they're always incremented on paths that hold the NN 
> write lock but it would be easier and less error prone for future changes if 
> we made them AtomicLongs. The other volatile long members are just set in one 
> thread and read in another so they're fine as is.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4031) Update findbugsExcludeFile.xml to include findbugs 2 exclusions

2013-01-08 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-4031:
--

  Resolution: Fixed
   Fix Version/s: 2.0.3-alpha
Target Version/s:   (was: 2.0.3-alpha)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've merged this to branch-2.

> Update findbugsExcludeFile.xml to include findbugs 2 exclusions
> ---
>
> Key: HDFS-4031
> URL: https://issues.apache.org/jira/browse/HDFS-4031
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-4031.txt
>
>
> Findbugs 2 warns about some volatile increments (VO_VOLATILE_INCREMENT) that 
> unlike HDFS-4029 and HDFS-4030 are less problematic:
> - numFailedVolumes is only incremented in one thread and that access is 
> synchronized
> - pendingReceivedRequests in BPServiceActor is clearly synchronized
> It would be reasonable to make these Atomics as well, but I think their uses 
> are clearly correct, so I figured for these the warning was more obviously 
> bogus and could be ignored.
> There's also a SE_BAD_FIELD_INNER_CLASS warning (LocalDatanodeInfo's 
> anonymous class is serializable but it is not) in BPServiceActor that is OK 
> to ignore since we don't serialize LocalDatanodeInfo.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3272) Make it possible to state MIME type for a webhdfs OPEN operation's result

2013-01-08 Thread Jeff Markham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Markham updated HDFS-3272:
---

Attachment: HDFS-3272.patch

> Make it possible to state MIME type for a webhdfs OPEN operation's result
> -
>
> Key: HDFS-3272
> URL: https://issues.apache.org/jira/browse/HDFS-3272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 1.0.1, 2.0.2-alpha
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HDFS-3272.patch
>
>
> when you do a GET from the browser with webhdfs, you get the file, but it 
> comes over as a binary as the browser doesn't know what type it is. Having a 
> mime mapping table and such like would be one solution, but another is simply 
> to add a {{mime}} query parameter that would provide a string to be reflected 
> back to the caller as the Content-Type header in the HTTP response.
> e.g.
> {code}
> http://ranier:50070/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> would generate a 307 redirect to the datanode, with the 
> {code}
> http://dn1:50075/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> which would then generate the result
> {code}
> 200 OK
> Content-Type:text/csv
> GATE4,eb8bd736445f415e18886ba037f84829,55000,2007-01-14,14:01:54,
> GATE4,ec58edcce1049fa665446dc1fa690638,8030803000,2007-01-14,13:52:31,
> ...
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3272) Make it possible to state MIME type for a webhdfs OPEN operation's result

2013-01-08 Thread Jeff Markham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Markham updated HDFS-3272:
---

Attachment: (was: HDFS-3272.patch)

> Make it possible to state MIME type for a webhdfs OPEN operation's result
> -
>
> Key: HDFS-3272
> URL: https://issues.apache.org/jira/browse/HDFS-3272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 1.0.1, 2.0.2-alpha
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HDFS-3272.patch
>
>
> when you do a GET from the browser with webhdfs, you get the file, but it 
> comes over as a binary as the browser doesn't know what type it is. Having a 
> mime mapping table and such like would be one solution, but another is simply 
> to add a {{mime}} query parameter that would provide a string to be reflected 
> back to the caller as the Content-Type header in the HTTP response.
> e.g.
> {code}
> http://ranier:50070/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> would generate a 307 redirect to the datanode, with the 
> {code}
> http://dn1:50075/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> which would then generate the result
> {code}
> 200 OK
> Content-Type:text/csv
> GATE4,eb8bd736445f415e18886ba037f84829,55000,2007-01-14,14:01:54,
> GATE4,ec58edcce1049fa665446dc1fa690638,8030803000,2007-01-14,13:52:31,
> ...
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4354) Create DomainSocket and DomainPeer and associated unit tests

2013-01-08 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547323#comment-13547323
 ] 

Todd Lipcon commented on HDFS-4354:
---

+1 pending Jenkins results. In case anyone thinks this review looks 
suspiciously short, please refer to the multiple rounds of review done under 
the HDFS-347 jira, where Colin was working on a non-broken-out patch.

> Create DomainSocket and DomainPeer and associated unit tests
> 
>
> Key: HDFS-4354
> URL: https://issues.apache.org/jira/browse/HDFS-4354
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, hdfs-client, performance
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: 03b.patch, 03-cumulative.patch
>
>
> Create {{DomainSocket}}, a JNI class which provides UNIX domain sockets 
> functionality in Java.  Also create {{DomainPeer}}, {{DomainPeerServer}}.  
> This change also adds a unit test as well as {{TemporarySocketDirectory}}.
> Finally, this change adds a few C utility methods for handling JNI 
> exceptions, such as {{newException}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4033) Miscellaneous findbugs 2 fixes

2013-01-08 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-4033:
--

  Resolution: Fixed
   Fix Version/s: 2.0.3-alpha
Target Version/s:   (was: 2.0.3-alpha)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've merged this to branch-2.

> Miscellaneous findbugs 2 fixes
> --
>
> Key: HDFS-4033
> URL: https://issues.apache.org/jira/browse/HDFS-4033
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-4033.txt, hdfs-4033.txt
>
>
> Fix some miscellaneous findbugs 2 warnings:
> - Switch statements missing default cases
> - Using \n instead of %n in format methods
> - A socket close that should use IOUtils#closeSocket that we missed
> - A use of SimpleDateFormat that is not threadsafe
> - In ReplicaInputStreams it's not clear that we always close the streams we 
> allocate, moving the stream creation into the class where we close them makes 
> that more obvious
> - A couple missing null checks

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4034) Remove redundant null checks

2013-01-08 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-4034:
--

  Resolution: Fixed
   Fix Version/s: 2.0.3-alpha
Target Version/s:   (was: 2.0.3-alpha)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the review ATM. I've committed this and merged to branch-2.

> Remove redundant null checks
> 
>
> Key: HDFS-4034
> URL: https://issues.apache.org/jira/browse/HDFS-4034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-4034.txt, hdfs-4034.txt
>
>
> Findbugs 2 catches a number of places where we're checking for null in cases 
> where the value will never be null.
> We might need to wait until we switch to findbugs 2 to commit this as the 
> current findbugs may not be so smart.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4032) Specify the charset explicitly rather than rely on the default

2013-01-08 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547359#comment-13547359
 ] 

Eli Collins commented on HDFS-4032:
---

Thanks guys. Turns out the two test failures are related, am investigating.

> Specify the charset explicitly rather than rely on the default
> --
>
> Key: HDFS-4032
> URL: https://issues.apache.org/jira/browse/HDFS-4032
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
> Attachments: hdfs-4032.txt
>
>
> Findbugs 2 warns about relying on the default Java charset instead of 
> specifying it explicitly. Given that we're porting Hadoop to different 
> platforms it's better to be explicit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4362) GetDelegationTokenResponseProto does not handle null token

2013-01-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4362:
--

Fix Version/s: (was: 3.0.0)
   2.0.3-alpha

I committed this change to branch-2.

> GetDelegationTokenResponseProto does not handle null token
> --
>
> Key: HDFS-4362
> URL: https://issues.apache.org/jira/browse/HDFS-4362
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
>Priority: Critical
> Fix For: 2.0.3-alpha
>
> Attachments: HDFS-4362.patch
>
>
> While working on HADOOP-9173, I noticed that 
> GetDelegationTokenResponseProto declares the token field as required. However, 
> a return of a null token is to be expected, both as defined in 
> FileSystem#getDelegationToken() and based on the HDFS implementation. This 
> jira intends to make the field optional.
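
A hedged sketch of the caller-side effect, using stand-in types that mirror 
standard protobuf codegen (an optional message field gets a hasX()/getX() 
pair):

{code}
class OptionalFieldSketch {
    // Stand-ins for the generated types; not the real protobuf classes.
    interface TokenProto {}
    interface GetDelegationTokenResponse {
        boolean hasToken();
        TokenProto getToken();
    }

    // Null is a legal result per FileSystem#getDelegationToken(), so an
    // absent optional field maps to null instead of failing required-field
    // validation on the wire.
    static TokenProto extract(GetDelegationTokenResponse resp) {
        return resp.hasToken() ? resp.getToken() : null;
    }
}
{code}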

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4253) block replica reads get hot-spots due to NetworkTopology#pseudoSortByDistance

2013-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547369#comment-13547369
 ] 

Hadoop QA commented on HDFS-4253:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563809/hdfs4253-6.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestDFSShell

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3799//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3799//console

This message is automatically generated.

> block replica reads get hot-spots due to NetworkTopology#pseudoSortByDistance
> -
>
> Key: HDFS-4253
> URL: https://issues.apache.org/jira/browse/HDFS-4253
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
> Attachments: hdfs4253-1.txt, hdfs4253-2.txt, hdfs4253-3.txt, 
> hdfs4253-4.txt, hdfs4253-5.txt, hdfs4253-6.txt, hdfs4253.txt
>
>
> When many nodes (10) read from the same block simultaneously, we get 
> asymmetric distribution of read load.  This can result in slow block reads 
> when one replica is serving most of the readers and the other replicas are 
> idle.  The busy DN bottlenecks on its network link.
> This is especially visible with large block sizes and high replica counts (I 
> reproduced the problem with {{-Ddfs.block.size=4294967296}} and replication 
> 5), but the same behavior happens on a small scale with normal-sized blocks 
> and replication=3.
> The root of the problem is in {{NetworkTopology#pseudoSortByDistance}} which 
> explicitly does not try to spread traffic among replicas in a given rack -- 
> it only randomizes usage for off-rack replicas.
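
As a rough illustration of the kind of fix implied, a sketch that shuffles 
replicas within each distance tier so equally-close readers spread their load; 
the helper and the parallel {{distances}} array are hypothetical, not the 
actual NetworkTopology code:

{code}
// Hypothetical: after sorting nodes by distance from the reader, shuffle
// each run of equally-distant replicas so no single replica becomes hot.
static void shuffleWithinDistanceTiers(DatanodeInfo[] nodes,
    int[] distances, Random r) {
  int start = 0;
  for (int i = 1; i <= nodes.length; i++) {
    if (i == nodes.length || distances[i] != distances[start]) {
      for (int j = i - 1; j > start; j--) {   // Fisher-Yates on [start, i)
        int k = start + r.nextInt(j - start + 1);
        DatanodeInfo tmp = nodes[j]; nodes[j] = nodes[k]; nodes[k] = tmp;
      }
      start = i;
    }
  }
}
{code}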

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4035) LightWeightGSet and LightWeightHashSet increment a volatile without synchronization

2013-01-08 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-4035:
--

  Resolution: Fixed
   Fix Version/s: 2.0.3-alpha
Target Version/s:   (was: 2.0.3-alpha)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

The release audit failure is unrelated; verified by running mvn apache-rat:check.

Thanks for the review ATM. I've committed this and merged to branch-2.

> LightWeightGSet and LightWeightHashSet increment a volatile without 
> synchronization
> ---
>
> Key: HDFS-4035
> URL: https://issues.apache.org/jira/browse/HDFS-4035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-4035.txt
>
>
> LightWeightGSet and LightWeightHashSet have a volatile modification field 
> that they use to detect updates while iterating, so they can throw a 
> ConcurrentModificationException. Since these "LightWeight" classes are 
> explicitly "not thread safe" (e.g., access to their members is not 
> synchronized), the current use is OK; we just need to update 
> findbugsExcludeFile.xml to exclude them.
>   
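
For readers unfamiliar with the pattern under discussion, a simplified generic 
sketch of modCount-style fail-fast iteration (not the actual LightWeightGSet 
code):

{code}
import java.util.*;

// Writers bump a counter; iterators snapshot it and compare on each step.
// The volatile increment is what findbugs flags (it is not atomic), which
// is acceptable only because the class is documented as not thread-safe.
class FailFastBag<E> implements Iterable<E> {
  private final List<E> elements = new ArrayList<E>();
  private volatile int modification = 0;

  public void add(E e) {
    modification++;          // the flagged non-atomic volatile increment
    elements.add(e);
  }

  public Iterator<E> iterator() {
    final int expected = modification;   // snapshot at iterator creation
    final Iterator<E> it = elements.iterator();
    return new Iterator<E>() {
      public boolean hasNext() { return it.hasNext(); }
      public E next() {
        if (modification != expected) {  // a writer changed the bag
          throw new ConcurrentModificationException();
        }
        return it.next();
      }
      public void remove() { throw new UnsupportedOperationException(); }
    };
  }
}
{code}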

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4035) LightWeightGSet and LightWeightHashSet increment a volatile without synchronization

2013-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547372#comment-13547372
 ] 

Hudson commented on HDFS-4035:
--

Integrated in Hadoop-trunk-Commit #3196 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3196/])
HDFS-4035. LightWeightGSet and LightWeightHashSet increment a volatile 
without synchronization. Contributed by Eli Collins (Revision 1430595)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430595
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml


> LightWeightGSet and LightWeightHashSet increment a volatile without 
> synchronization
> ---
>
> Key: HDFS-4035
> URL: https://issues.apache.org/jira/browse/HDFS-4035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-4035.txt
>
>
> LightWeightGSet and LightWeightHashSet have a volatile modification field 
> that they use to detect updates while iterating, so they can throw a 
> ConcurrentModificationException. Since these "LightWeight" classes are 
> explicitly "not thread safe" (e.g., access to their members is not 
> synchronized), the current use is OK; we just need to update 
> findbugsExcludeFile.xml to exclude them.
>   

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4033) Miscellaneous findbugs 2 fixes

2013-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547374#comment-13547374
 ] 

Hudson commented on HDFS-4033:
--

Integrated in Hadoop-trunk-Commit #3196 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3196/])
Updated CHANGES.txt to add HDFS-4033. (Revision 1430581)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430581
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Miscellaneous findbugs 2 fixes
> --
>
> Key: HDFS-4033
> URL: https://issues.apache.org/jira/browse/HDFS-4033
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-4033.txt, hdfs-4033.txt
>
>
> Fix some miscellaneous findbugs 2 warnings:
> - Switch statements missing default cases
> - Using \n instead of %n in format methods
> - A socket close that should use IOUtils#closeSocket that we missed
> - A use of SimpleDateFormat that is not thread-safe
> - In ReplicaInputStreams it's not clear that we always close the streams we 
> allocate, moving the stream creation into the class where we close them makes 
> that more obvious
> - A couple of missing null checks
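
Illustrative before/after fragments for two of the items above (generic 
examples, not the actual patched lines):

{code}
// "\n" vs "%n" in format methods: %n emits the platform line separator.
out.printf("Processed %d blocks\n", count);   // flagged by findbugs 2
out.printf("Processed %d blocks%n", count);   // fixed

// A switch statement missing its default case:
switch (state) {
case RUNNING: handleRunning(); break;
case STOPPED: handleStopped(); break;
default:
  throw new IllegalStateException("Unexpected state: " + state);
}
{code}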

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4034) Remove redundant null checks

2013-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547373#comment-13547373
 ] 

Hudson commented on HDFS-4034:
--

Integrated in Hadoop-trunk-Commit #3196 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3196/])
HDFS-4034. Remove redundant null checks. Contributed by Eli Collins 
(Revision 1430585)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430585
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtoUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileOutputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeResourceChecker.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java


> Remove redundant null checks
> 
>
> Key: HDFS-4034
> URL: https://issues.apache.org/jira/browse/HDFS-4034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-4034.txt, hdfs-4034.txt
>
>
> Findbugs 2 catches a number of places where we're checking for null in cases 
> where the value will never be null.
> We might need to wait until we switch to findbugs 2 to commit this, as the 
> current findbugs may not be so smart.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4366) Block Replication Policy Implementation May Skip Higher-Priority Blocks for Lower-Priority Blocks

2013-01-08 Thread Derek Dagit (JIRA)
Derek Dagit created HDFS-4366:
-

 Summary: Block Replication Policy Implementation May Skip 
Higher-Priority Blocks for Lower-Priority Blocks
 Key: HDFS-4366
 URL: https://issues.apache.org/jira/browse/HDFS-4366
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.5, 1.1.1, 3.0.0
Reporter: Derek Dagit
Assignee: Derek Dagit


In certain cases, higher-priority under-replicated blocks can be skipped by the 
replication policy implementation.  The current implementation maintains, for 
each priority level, an index into a list of blocks that are under-replicated.  
Together, the lists compose a priority queue (see note later about 
branch-0.23).  In some cases when blocks are removed from a list, the caller 
(BlockManager) properly handles the index into the list from which it removed a 
block.  In some other cases, the index remains stationary while the list 
changes.  Whenever this happens, and the removed block happened to be at or 
before the index, the implementation will skip over a block when selecting 
blocks for replication work.

In situations when entire racks are decommissioned, leading to many 
under-replicated blocks, loss of blocks can occur.


Background: HDFS-1765

This patch to trunk greatly improved the state of the replication policy 
implementation.  Prior to the patch, the following details were true:
* The block "priority queue" was no such thing: It was really a set of 
trees that held blocks in natural ordering, that being by the block's ID, which 
resulted in iterator walks over the blocks in pseudo-random order.
* There was only a single index into an iteration over all of the 
blocks...
* ... meaning the implementation was only successful in respecting 
priority levels on the first pass.  Overall, the behavior was a 
round-robin-type scheduling of blocks.

After the patch
* A proper priority queue is implemented, preserving log(n) operations 
while iterating over blocks in the order added.
* A separate index for each priority is kept...
* ... allowing for processing of the highest priority blocks first 
regardless of which priority had last been processed.

The change was suggested for branch-0.23 as well as trunk, but it does not 
appear to have been pulled in.


The problem:

Although the indices are now tracked in a better way, there is a 
synchronization issue since the indices are managed outside of methods to 
modify the contents of the queue.

Removal of a block from a priority level without adjusting the index can mean 
that the index then points to the block after the block it originally pointed 
to.  In the next round of scheduling for that priority level, the block 
originally pointed to by the index is skipped.
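
A minimal sketch of the skip, using plain Java collections in place of the 
real UnderReplicatedBlocks structures:

{code}
// Blocks queued at one priority level, with an externally managed scan index.
List<String> queue =
    new ArrayList<String>(Arrays.asList("b0", "b1", "b2", "b3"));
int index = 2;                   // next scheduling round resumes at "b2"

queue.remove("b1");              // removed before the index; index untouched
String next = queue.get(index);  // returns "b3" -- "b2" is silently skipped
{code}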


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4366) Block Replication Policy Implementation May Skip Higher-Priority Blocks for Lower-Priority Blocks

2013-01-08 Thread Derek Dagit (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Derek Dagit updated HDFS-4366:
--

Description: 
In certain cases, higher-priority under-replicated blocks can be skipped by the 
replication policy implementation.  The current implementation maintains, for 
each priority level, an index into a list of blocks that are under-replicated.  
Together, the lists compose a priority queue (see note later about 
branch-0.23).  In some cases when blocks are removed from a list, the caller 
(BlockManager) properly handles the index into the list from which it removed a 
block.  In some other cases, the index remains stationary while the list 
changes.  Whenever this happens, and the removed block happened to be at or 
before the index, the implementation will skip over a block when selecting 
blocks for replication work.

In situations when entire racks are decommissioned, leading to many 
under-replicated blocks, loss of blocks can occur.


Background: HDFS-1765

This patch to trunk greatly improved the state of the replication policy 
implementation.  Prior to the patch, the following details were true:
* The block "priority queue" was no such thing: It was really a set of 
trees that held blocks in natural ordering, that being by the block's ID, which 
resulted in iterator walks over the blocks in pseudo-random order.
* There was only a single index into an iteration over all of the 
blocks...
* ... meaning the implementation was only successful in respecting 
priority levels on the first pass.  Overall, the behavior was a 
round-robin-type scheduling of blocks.

After the patch
* A proper priority queue is implemented, preserving log n operations 
while iterating over blocks in the order added.
* A separate index for each priority is kept...
* ... allowing for processing of the highest priority blocks first 
regardless of which priority had last been processed.

The change was suggested for branch-0.23 as well as trunk, but it does not 
appear to have been pulled in.


The problem:

Although the indices are now tracked in a better way, there is a 
synchronization issue since the indices are managed outside of methods to 
modify the contents of the queue.

Removal of a block from a priority level without adjusting the index can mean 
that the index then points to the block after the block it originally pointed 
to.  In the next round of scheduling for that priority level, the block 
originally pointed to by the index is skipped.


  was:
In certain cases, higher-priority under-replicated blocks can be skipped by the 
replication policy implementation.  The current implementation maintains, for 
each priority level, an index into a list of blocks that are under-replicated.  
Together, the lists compose a priority queue (see note later about 
branch-0.23).  In some cases when blocks are removed from a list, the caller 
(BlockManager) properly handles the index into the list from which it removed a 
block.  In some other cases, the index remains stationary while the list 
changes.  Whenever this happens, and the removed block happened to be at or 
before the index, the implementation will skip over a block when selecting 
blocks for replication work.

In situations when entire racks are decommissioned, leading to many 
under-replicated blocks, loss of blocks can occur.


Background: HDFS-1765

This patch to trunk greatly improved the state of the replication policy 
implementation.  Prior to the patch, the following details were true:
* The block "priority queue" was no such thing: It was really a set of 
trees that held blocks in natural ordering, that being by the block's ID, which 
resulted in iterator walks over the blocks in pseudo-random order.
* There was only a single index into an iteration over all of the 
blocks...
* ... meaning the implementation was only successful in respecting 
priority levels on the first pass.  Overall, the behavior was a 
round-robin-type scheduling of blocks.

After the patch
* A proper priority queue is implemented, preserving log(n) operations 
while iterating over blocks in the order added.
* A separate index for each priority is kept...
* ... allowing for processing of the highest priority blocks first 
regardless of which priority had last been processed.

The change was suggested for branch-0.23 as well as trunk, but it does not 
appear to have been pulled in.


The problem:

Although the indices are now tracked in a better way, there is a 
synchronization issue since the indices are managed outside of methods to 
modify the contents of the queue.

Removal of a block from a priority level without adjusting the index can mean 
that the index then points to the block after the block it originally pointed 
to.  In the next round of scheduling for that priority level, the block 
originally pointed to by the index is skipped.

[jira] [Updated] (HDFS-4244) Support deleting snapshots

2013-01-08 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4244:


Attachment: HDFS-4244.007.patch

Updated the patch to address Nicholas's comments. Also added a testcase in 
TestSnapshotBlocksMap to cover the blocksMap updating scenario.

> Support deleting snapshots
> --
>
> Key: HDFS-4244
> URL: https://issues.apache.org/jira/browse/HDFS-4244
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-4244.001.patch, HDFS-4244.002.patch, 
> HDFS-4244.003.patch, HDFS-4244.004.patch, HDFS-4244.005.patch, 
> HDFS-4244.006.patch, HDFS-4244.007.patch
>
>
> Provide functionality to delete a snapshot, given the name of the snapshot 
> and the path to the directory where the snapshot was taken.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4354) Create DomainSocket and DomainPeer and associated unit tests

2013-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547395#comment-13547395
 ] 

Hadoop QA commented on HDFS-4354:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563821/03b.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3800//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3800//console

This message is automatically generated.

> Create DomainSocket and DomainPeer and associated unit tests
> 
>
> Key: HDFS-4354
> URL: https://issues.apache.org/jira/browse/HDFS-4354
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, hdfs-client, performance
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: 03b.patch, 03-cumulative.patch
>
>
> Create {{DomainSocket}}, a JNI class which provides UNIX domain sockets 
> functionality in Java.  Also create {{DomainPeer}}, {{DomainPeerServer}}.  
> This change also adds a unit test as well as {{TemporarySocketDirectory}}.
> Finally, this change adds a few C utility methods for handling JNI 
> exceptions, such as {{newException}}.
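
A hypothetical usage sketch of the new class; the method names here are 
assumed for illustration and may differ from the patch's actual API:

{code}
// Hypothetical: server and client rendezvous over a filesystem path
// instead of a host:port pair.
DomainSocket server = DomainSocket.bindAndListen("/var/run/hdfs/dn.sock");
DomainSocket client = DomainSocket.connect("/var/run/hdfs/dn.sock");
DomainSocket conn = server.accept();

conn.getOutputStream().write(42);
int b = client.getInputStream().read();   // reads 42
{code}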

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4366) Block Replication Policy Implementation May Skip Higher-Priority Blocks for Lower-Priority Blocks

2013-01-08 Thread Derek Dagit (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Derek Dagit updated HDFS-4366:
--

Attachment: hdfs-4366-unittest.patch

My initial thought is to encapsulate the priority indices inside 
UnderReplicatedBlocks, which is where the priority queue and indices live 
anyway.

We could also guarantee the appropriate index is decremented properly on each 
call to remove.

I do not think we can know in most cases whether a particular block lies to the 
left or right of the index, since the random look-up of blocks is implemented as 
a hash table, whereas the index is an index into a doubly-linked list.  We would 
have to walk from the head or tail of the doubly-linked list to find the answer.

Also, decrementing when we do not have to is not dangerous, since at worst it 
means we re-process a block that we would not have had to otherwise.  But we 
should also make sure to clamp the index at 0 to avoid unnecessary processing.  
Currently with the patch, the index can go negative.
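
A sketch of the clamped decrement being proposed, with the field name assumed 
for illustration (see the attached patch for the real structure):

{code}
// Hypothetical: called whenever a block is removed from a priority level.
// Clamping at 0 means the worst case is re-examining a block, which is
// harmless; without the clamp the index could go negative.
private void decrementReplicationIndex(int priority) {
  int index = priorityToReplIdx.get(priority);   // field name assumed
  priorityToReplIdx.put(priority, Math.max(0, index - 1));
}
{code}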

Comments welcome.


> Block Replication Policy Implementation May Skip Higher-Priority Blocks for 
> Lower-Priority Blocks
> -
>
> Key: HDFS-4366
> URL: https://issues.apache.org/jira/browse/HDFS-4366
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 1.1.1, 3.0.0, 0.23.5
>Reporter: Derek Dagit
>Assignee: Derek Dagit
> Attachments: hdfs-4366-unittest.patch
>
>
> In certain cases, higher-priority under-replicated blocks can be skipped by 
> the replication policy implementation.  The current implementation maintains, 
> for each priority level, an index into a list of blocks that are 
> under-replicated.  Together, the lists compose a priority queue (see note 
> later about branch-0.23).  In some cases when blocks are removed from a list, 
> the caller (BlockManager) properly handles the index into the list from which 
> it removed a block.  In some other cases, the index remains stationary while 
> the list changes.  Whenever this happens, and the removed block happened to 
> be at or before the index, the implementation will skip over a block when 
> selecting blocks for replication work.
> In situations when entire racks are decommissioned, leading to many 
> under-replicated blocks, loss of blocks can occur.
> Background: HDFS-1765
> This patch to trunk greatly improved the state of the replication policy 
> implementation.  Prior to the patch, the following details were true:
>   * The block "priority queue" was no such thing: It was really a set of 
> trees that held blocks in natural ordering, that being by the block's ID, 
> which resulted in iterator walks over the blocks in pseudo-random order.
>   * There was only a single index into an iteration over all of the 
> blocks...
>   * ... meaning the implementation was only successful in respecting 
> priority levels on the first pass.  Overall, the behavior was a 
> round-robin-type scheduling of blocks.
> After the patch
>   * A proper priority queue is implemented, preserving log n operations 
> while iterating over blocks in the order added.
>   * A separate index for each priority is kept...
>   * ... allowing for processing of the highest priority blocks first 
> regardless of which priority had last been processed.
> The change was suggested for branch-0.23 as well as trunk, but it does not 
> appear to have been pulled in.
> The problem:
> Although the indices are now tracked in a better way, there is a 
> synchronization issue since the indices are managed outside of methods to 
> modify the contents of the queue.
> Removal of a block from a priority level without adjusting the index can mean 
> that the index then points to the block after the block it originally pointed 
> to.  In the next round of scheduling for that priority level, the block 
> originally pointed to by the index is skipped.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4356) BlockReaderLocal should use passed file descriptors rather than paths

2013-01-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4356:
---

Attachment: 04g-cumulative.patch

Fix some synchronization issues in {{FileInputStreamCache}}.

> BlockReaderLocal should use passed file descriptors rather than paths
> -
>
> Key: HDFS-4356
> URL: https://issues.apache.org/jira/browse/HDFS-4356
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, hdfs-client, performance
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: 04b-cumulative.patch, 04-cumulative.patch, 
> 04d-cumulative.patch, 04f-cumulative.patch, 04g-cumulative.patch
>
>
> {{BlockReaderLocal}} should use file descriptors passed over UNIX domain 
> sockets rather than paths.  We also need some configuration options for these 
> UNIX domain sockets.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4356) BlockReaderLocal should use passed file descriptors rather than paths

2013-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547421#comment-13547421
 ] 

Hadoop QA commented on HDFS-4356:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563830/04f-cumulative.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 15 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestShortCircuitLocalRead
  org.apache.hadoop.hdfs.TestPersistBlocks

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3802//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3802//console

This message is automatically generated.

> BlockReaderLocal should use passed file descriptors rather than paths
> -
>
> Key: HDFS-4356
> URL: https://issues.apache.org/jira/browse/HDFS-4356
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, hdfs-client, performance
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: 04b-cumulative.patch, 04-cumulative.patch, 
> 04d-cumulative.patch, 04f-cumulative.patch, 04g-cumulative.patch
>
>
> {{BlockReaderLocal}} should use file descriptors passed over UNIX domain 
> sockets rather than paths.  We also need some configuration options for these 
> UNIX domain sockets.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4352) Encapsulate arguments to BlockReaderFactory in a class

2013-01-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547460#comment-13547460
 ] 

Suresh Srinivas commented on HDFS-4352:
---

Todd, can you please wait for a +1 from Jenkins before committing a patch? Please 
see the javadoc warnings comment above.

Few more comments:
{code}
-   * @return New BlockReader instance, or null on error.
+   * @param paramsThe parameters
+   *
+   * @return  New BlockReader instance
{code}
Why make this documentation change? Isn't the previous @return documented better 
than the new one?

# The earlier version of the code documented each of the parameters. Now all of 
that documentation is removed; there is no documentation of the params members. 
MiniDFSCluster has nice documentation for similar code in its Builder class.
# All this said, I have a hard time understanding why this is a good code change. 
This kind of change was made earlier when we had a lot of variants of a method 
with different parameter combinations. That is not the case here. I agree with 
Nicholas that this does not add much value; in fact it makes the code less clear.
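
For comparison, a minimal sketch of the documented-builder style being 
referenced, with hypothetical fields in the spirit of MiniDFSCluster's Builder:

{code}
public static class Params {
  /** The block to read; must not be null. */
  private ExtendedBlock block;
  /** Bytes to read, or -1 for the whole block. */
  private long length = -1;

  /** @param block the block to read; must not be null */
  public Params setBlock(ExtendedBlock block) {
    this.block = block;
    return this;
  }

  /** @param length bytes to read, or -1 for the whole block */
  public Params setLength(long length) {
    this.length = length;
    return this;
  }
}
{code}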



> Encapsulate arguments to BlockReaderFactory in a class
> --
>
> Key: HDFS-4352
> URL: https://issues.apache.org/jira/browse/HDFS-4352
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0
>
> Attachments: 01b.patch, 01.patch
>
>
> Encapsulate the arguments to BlockReaderFactory in a class to avoid having to 
> pass around 10+ arguments to a few different functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4364) GetLinkTargetResponseProto does not handle null path

2013-01-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547467#comment-13547467
 ] 

Suresh Srinivas commented on HDFS-4364:
---

I am marking this as a blocker for 2.0.3-alpha.

> GetLinkTargetResponseProto does not handle null path
> 
>
> Key: HDFS-4364
> URL: https://issues.apache.org/jira/browse/HDFS-4364
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Attachments: HDFS-4364.patch
>
>
> ClientProtocol#getLinkTarget() can return a null targetPath. Hence the protobuf 
> definition GetLinkTargetResponseProto#targetPath should be optional.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4364) GetLinkTargetResponseProto does not handle null path

2013-01-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4364:
--

Priority: Blocker  (was: Major)

> GetLinkTargetResponseProto does not handle null path
> 
>
> Key: HDFS-4364
> URL: https://issues.apache.org/jira/browse/HDFS-4364
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
>Priority: Blocker
> Attachments: HDFS-4364.patch
>
>
> ClientProtocol#getLinkTarget() can return a null targetPath. Hence the protobuf 
> definition GetLinkTargetResponseProto#targetPath should be optional.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4364) GetLinkTargetResponseProto does not handle null path

2013-01-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4364:
--

Target Version/s: 2.0.3-alpha

> GetLinkTargetResponseProto does not handle null path
> 
>
> Key: HDFS-4364
> URL: https://issues.apache.org/jira/browse/HDFS-4364
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
>Priority: Blocker
> Attachments: HDFS-4364.patch
>
>
> ClientProtocol#getLinkTarget() can return a null targetPath. Hence the protobuf 
> definition GetLinkTargetResponseProto#targetPath should be optional.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4366) Block Replication Policy Implementation May Skip Higher-Priority Blocks for Lower-Priority Blocks

2013-01-08 Thread Derek Dagit (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Derek Dagit updated HDFS-4366:
--

Affects Version/s: (was: 1.1.1)

> Block Replication Policy Implementation May Skip Higher-Priority Blocks for 
> Lower-Priority Blocks
> -
>
> Key: HDFS-4366
> URL: https://issues.apache.org/jira/browse/HDFS-4366
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 0.23.5
>Reporter: Derek Dagit
>Assignee: Derek Dagit
> Attachments: hdfs-4366-unittest.patch
>
>
> In certain cases, higher-priority under-replicated blocks can be skipped by 
> the replication policy implementation.  The current implementation maintains, 
> for each priority level, an index into a list of blocks that are 
> under-replicated.  Together, the lists compose a priority queue (see note 
> later about branch-0.23).  In some cases when blocks are removed from a list, 
> the caller (BlockManager) properly handles the index into the list from which 
> it removed a block.  In some other cases, the index remains stationary while 
> the list changes.  Whenever this happens, and the removed block happened to 
> be at or before the index, the implementation will skip over a block when 
> selecting blocks for replication work.
> In situations when entire racks are decommissioned, leading to many 
> under-replicated blocks, loss of blocks can occur.
> Background: HDFS-1765
> This patch to trunk greatly improved the state of the replication policy 
> implementation.  Prior to the patch, the following details were true:
>   * The block "priority queue" was no such thing: It was really a set of 
> trees that held blocks in natural ordering, that being by the block's ID, 
> which resulted in iterator walks over the blocks in pseudo-random order.
>   * There was only a single index into an iteration over all of the 
> blocks...
>   * ... meaning the implementation was only successful in respecting 
> priority levels on the first pass.  Overall, the behavior was a 
> round-robin-type scheduling of blocks.
> After the patch
>   * A proper priority queue is implemented, preserving log n operations 
> while iterating over blocks in the order added.
>   * A separate index for each priority is kept...
>   * ... allowing for processing of the highest priority blocks first 
> regardless of which priority had last been processed.
> The change was suggested for branch-0.23 as well as trunk, but it does not 
> appear to have been pulled in.
> The problem:
> Although the indices are now tracked in a better way, there is a 
> synchronization issue since the indices are managed outside of methods to 
> modify the contents of the queue.
> Removal of a block from a priority level without adjusting the index can mean 
> that the index then points to the block after the block it originally pointed 
> to.  In the next round of scheduling for that priority level, the block 
> originally pointed to by the index is skipped.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3272) Make it possible to state MIME type for a webhdfs OPEN operation's result

2013-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547499#comment-13547499
 ] 

Hadoop QA commented on HDFS-3272:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563833/HDFS-3272.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3803//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3803//console

This message is automatically generated.

> Make it possible to state MIME type for a webhdfs OPEN operation's result
> -
>
> Key: HDFS-3272
> URL: https://issues.apache.org/jira/browse/HDFS-3272
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 1.0.1, 2.0.2-alpha
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HDFS-3272.patch
>
>
> When you do a GET from the browser with webhdfs, you get the file, but it 
> comes over as binary because the browser doesn't know what type it is. Having a 
> MIME mapping table or the like would be one solution, but another is simply 
> to add a {{mime}} query parameter that would provide a string to be reflected 
> back to the caller as the Content-Type header in the HTTP response.
> e.g.
> {code}
> http://ranier:50070/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> would generate a 307 redirect to the datanode, with the 
> {code}
> http://dn1:50075/webhdfs/v1/results/Debounce/part-r-0.csv?op=open&mime=text/csv
>  
> {code}
> which would then generate the result
> {code}
> 200 OK
> Content-Type:text/csv
> GATE4,eb8bd736445f415e18886ba037f84829,55000,2007-01-14,14:01:54,
> GATE4,ec58edcce1049fa665446dc1fa690638,8030803000,2007-01-14,13:52:31,
> ...
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4261) TestBalancerWithNodeGroup times out

2013-01-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547504#comment-13547504
 ] 

Junping Du commented on HDFS-4261:
--

Thanks Chris and Aaron for the verification. +1 on opening another JIRA to track 
this very occasional timeout (only one occurrence in 1.5 hours of looped tests).

> TestBalancerWithNodeGroup times out
> ---
>
> Key: HDFS-4261
> URL: https://issues.apache.org/jira/browse/HDFS-4261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, 
> HDFS-4261-v4.patch, HDFS-4261-v5.patch, HDFS-4261-v6.patch, 
> HDFS-4261-v7.patch, HDFS-4261-v8.patch, jstack-mac-18567, jstack-win-5488, 
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.mac,
>  
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup-output.txt.win,
>  test-balancer-with-node-group-timeout.txt
>
>
> When I manually ran TestBalancerWithNodeGroup, it always timed out on my 
> machine.  Looking at the Jenkins report [build 
> #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/],
>  TestBalancerWithNodeGroup somehow was skipped so that the problem was not 
> detected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4356) BlockReaderLocal should use passed file descriptors rather than paths

2013-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547522#comment-13547522
 ] 

Hadoop QA commented on HDFS-4356:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563850/04g-cumulative.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 15 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestShortCircuitLocalRead
  org.apache.hadoop.hdfs.TestParallelShortCircuitRead

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3804//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3804//console

This message is automatically generated.

> BlockReaderLocal should use passed file descriptors rather than paths
> -
>
> Key: HDFS-4356
> URL: https://issues.apache.org/jira/browse/HDFS-4356
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, hdfs-client, performance
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: 04b-cumulative.patch, 04-cumulative.patch, 
> 04d-cumulative.patch, 04f-cumulative.patch, 04g-cumulative.patch
>
>
> {{BlockReaderLocal}} should use file descriptors passed over UNIX domain 
> sockets rather than paths.  We also need some configuration options for these 
> UNIX domain sockets.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4352) Encapsulate arguments to BlockReaderFactory in a class

2013-01-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547529#comment-13547529
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4352:
--

> ...  If not, we can revert this. ...

Sure, let's revert this.

> Encapsulate arguments to BlockReaderFactory in a class
> --
>
> Key: HDFS-4352
> URL: https://issues.apache.org/jira/browse/HDFS-4352
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0
>
> Attachments: 01b.patch, 01.patch
>
>
> Encapsulate the arguments to BlockReaderFactory in a class to avoid having to 
> pass around 10+ arguments to a few different functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (HDFS-4352) Encapsulate arguments to BlockReaderFactory in a class

2013-01-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE reopened HDFS-4352:
--


> Encapsulate arguments to BlockReaderFactory in a class
> --
>
> Key: HDFS-4352
> URL: https://issues.apache.org/jira/browse/HDFS-4352
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0
>
> Attachments: 01b.patch, 01.patch
>
>
> Encapsulate the arguments to BlockReaderFactory in a class to avoid having to 
> pass around 10+ arguments to a few different functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (HDFS-4353) Encapsulate connections to peers in Peer and PeerServer classes

2013-01-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE reopened HDFS-4353:
--


Reopening to revert this and HDFS-4352.

> Encapsulate connections to peers in Peer and PeerServer classes
> ---
>
> Key: HDFS-4353
> URL: https://issues.apache.org/jira/browse/HDFS-4353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, hdfs-client
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 3.0.0
>
> Attachments: 02b-cumulative.patch, 02c.patch, 02c.patch, 
> 02-cumulative.patch, 02d.patch, 02e.patch, 02f.patch
>
>
> Encapsulate connections to peers into the {{Peer}} and {{PeerServer}} 
> classes.  Since many Java classes may be involved with these connections, it 
> makes sense to create a container for them.  For example, a connection to a 
> peer may have an input stream, output stream, readablebytechannel, encrypted 
> output stream, and encrypted input stream associated with it.
> This makes us less dependent on the {{NetUtils}} methods which use 
> {{instanceof}} to manipulate socket and stream states based on the runtime 
> type.  It also paves the way to introduce UNIX domain sockets, which don't 
> inherit from {{java.net.Socket}}.
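
A rough sketch of the shape such an encapsulation might take; member names are 
assumed for illustration and are not the committed interface:

{code}
// Hypothetical: one object owning everything tied to a single connection,
// so callers stop juggling sockets, streams, and channels separately.
public interface Peer extends Closeable {
  InputStream getInputStream() throws IOException;
  OutputStream getOutputStream() throws IOException;
  ReadableByteChannel getInputStreamChannel();  // may be null if unsupported
  void setReadTimeout(int timeoutMs) throws IOException;
  String getRemoteAddressString();
  boolean isLocal();   // e.g. true for a UNIX domain socket peer
}
{code}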

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4353) Encapsulate connections to peers in Peer and PeerServer classes

2013-01-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4353:
-

Fix Version/s: (was: 3.0.0)

> Encapsulate connections to peers in Peer and PeerServer classes
> ---
>
> Key: HDFS-4353
> URL: https://issues.apache.org/jira/browse/HDFS-4353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, hdfs-client
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: 02b-cumulative.patch, 02c.patch, 02c.patch, 
> 02-cumulative.patch, 02d.patch, 02e.patch, 02f.patch
>
>
> Encapsulate connections to peers into the {{Peer}} and {{PeerServer}} 
> classes.  Since many Java classes may be involved with these connections, it 
> makes sense to create a container for them.  For example, a connection to a 
> peer may have an input stream, output stream, readablebytechannel, encrypted 
> output stream, and encrypted input stream associated with it.
> This makes us less dependent on the {{NetUtils}} methods which use 
> {{instanceof}} to manipulate socket and stream states based on the runtime 
> type.  It also paves the way to introduce UNIX domain sockets, which don't 
> inherit from {{java.net.Socket}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4352) Encapsulate arguments to BlockReaderFactory in a class

2013-01-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4352:
-

Fix Version/s: (was: 3.0.0)

> Encapsulate arguments to BlockReaderFactory in a class
> --
>
> Key: HDFS-4352
> URL: https://issues.apache.org/jira/browse/HDFS-4352
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: 01b.patch, 01.patch
>
>
> Encapsulate the arguments to BlockReaderFactory in a class to avoid having to 
> pass around 10+ arguments to a few different functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4353) Encapsulate connections to peers in Peer and PeerServer classes

2013-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547563#comment-13547563
 ] 

Hudson commented on HDFS-4353:
--

Integrated in Hadoop-trunk-Commit #3197 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3197/])
svn merge -c -1430507 . for reverting HDFS-4353. Encapsulate connections to 
peers in Peer and PeerServer classes (Revision 1430662)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430662
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketInputStream.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketOutputStream.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/PeerCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader2.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/SocketCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientBlockVerification.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestConnCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferKeepalive.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDisableConnCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPeerCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSocketCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java


> Encapsulate connections to peers in Peer and PeerServer classes
> ---
>
> Key: HDFS-4353
> URL: https://issues.apache.org/jira/browse/HDFS-4353
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, hdfs-client
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: 02b-cumulative.patch, 02c.patch, 02c.patch, 
> 02-cumulative.patch, 02d.patch, 02e.patch, 02f.patch
>
>
> Encapsulate connections to peers into the {{Peer}} and {{PeerServer}} 
> classes.  Since many Java classes may be involved with these connections, it 
> makes sense to create a container for them.  For example, a connection to a 
> peer may have an input stream, output stream, readablebytechannel, encrypted 
> output stream, and encrypted input stream associated with it.
> This makes us less dependent on the {{NetUtils}} methods which use 
> {{instanceof}} to manipulate socket and stream states based on the runtime 
> type.  It also paves the way to introduce UNIX domain sockets, which don't 
> inherit from {{java.net.Socket}}.

[jira] [Resolved] (HDFS-4352) Encapsulate arguments to BlockReaderFactory in a class

2013-01-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-4352.
--

Resolution: Won't Fix
  Assignee: (was: Colin Patrick McCabe)

Since this is generally not a good idea, let's close this as "won't fix".

> Encapsulate arguments to BlockReaderFactory in a class
> --
>
> Key: HDFS-4352
> URL: https://issues.apache.org/jira/browse/HDFS-4352
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
> Attachments: 01b.patch, 01.patch
>
>
> Encapsulate the arguments to BlockReaderFactory in a class to avoid having to 
> pass around 10+ arguments to a few different functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (HDFS-4352) Encapsulate arguments to BlockReaderFactory in a class

2013-01-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE reopened HDFS-4352:
--


> Encapsulate arguments to BlockReaderFactory in a class
> --
>
> Key: HDFS-4352
> URL: https://issues.apache.org/jira/browse/HDFS-4352
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
> Attachments: 01b.patch, 01.patch
>
>
> Encapsulate the arguments to BlockReaderFactory in a class to avoid having to 
> pass around 10+ arguments to a few different functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4352) Encapsulate arguments to BlockReaderFactory in a class

2013-01-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-4352.
--

Resolution: Invalid

Closing as "Invalid" is more appropriate.

> Encapsulate arguments to BlockReaderFactory in a class
> --
>
> Key: HDFS-4352
> URL: https://issues.apache.org/jira/browse/HDFS-4352
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
> Attachments: 01b.patch, 01.patch
>
>
> Encapsulate the arguments to BlockReaderFactory in a class to avoid having to 
> pass around 10+ arguments to a few different functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4352) Encapsulate arguments to BlockReaderFactory in a class

2013-01-08 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547568#comment-13547568
 ] 

Todd Lipcon commented on HDFS-4352:
---

I disagree that this "isn't a good idea". Functions with 10+ unnamed parameters 
are bad style in my book (and in other books, like "Effective Java"; see, e.g., 
http://www.informit.com/articles/article.aspx?p=1216151&seqNum=2).

Do you have a better solution for reducing the number of parameters here? I've 
looked at this code for many years, and matching the argument list up against 
the signature always gives me a headache, whereas Colin's patch makes it 
obvious which parameter is what.
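
To make the disagreement concrete, the parameter-object style at issue looks
roughly like this (field and method names are assumptions for illustration,
not the contents of either attached patch):

{code:java}
// Illustrative parameter object -- names are assumed, not from the patch.
// Instead of newBlockReader(conf, file, block, token, startOffset, len,
// bufferSize, verifyChecksum, clientName, ...), the caller fills in a single
// object whose setters name every value explicitly.
class BlockReaderParams {
  private String file;
  private long startOffset;
  private long length;
  private boolean verifyChecksum = true;
  private String clientName = "";

  BlockReaderParams setFile(String file) { this.file = file; return this; }
  BlockReaderParams setStartOffset(long off) { this.startOffset = off; return this; }
  BlockReaderParams setLength(long len) { this.length = len; return this; }
  BlockReaderParams setVerifyChecksum(boolean v) { this.verifyChecksum = v; return this; }
  BlockReaderParams setClientName(String name) { this.clientName = name; return this; }
}

// Call sites then read as named arguments, so nothing can be transposed:
//   BlockReaderFactory.newBlockReader(new BlockReaderParams()
//       .setFile(src).setStartOffset(0).setLength(len));
{code}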

> Encapsulate arguments to BlockReaderFactory in a class
> --
>
> Key: HDFS-4352
> URL: https://issues.apache.org/jira/browse/HDFS-4352
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
> Attachments: 01b.patch, 01.patch
>
>
> Encapsulate the arguments to BlockReaderFactory in a class to avoid having to 
> pass around 10+ arguments to a few different functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4352) Encapsulate arguments to BlockReaderFactory in a class

2013-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547570#comment-13547570
 ] 

Hudson commented on HDFS-4352:
--

Integrated in Hadoop-trunk-Commit #3198 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3198/])
svn merge -c -1428729 . for reverting HDFS-4352. Encapsulate arguments to 
BlockReaderFactory in a class (Revision 1430663)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1430663
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader2.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/BlockReaderTestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithDFS.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java


> Encapsulate arguments to BlockReaderFactory in a class
> --
>
> Key: HDFS-4352
> URL: https://issues.apache.org/jira/browse/HDFS-4352
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
> Attachments: 01b.patch, 01.patch
>
>
> Encapsulate the arguments to BlockReaderFactory in a class to avoid having to 
> pass around 10+ arguments to a few different functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (HDFS-4352) Encapsulate arguments to BlockReaderFactory in a class

2013-01-08 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon reopened HDFS-4352:
---


By the way, I find it _very_ rude to close someone else's ticket as "Invalid" 
or "Won't Fix" without waiting for the discussion to end. Just because you don't 
like a change doesn't give you license to do this.

> Encapsulate arguments to BlockReaderFactory in a class
> --
>
> Key: HDFS-4352
> URL: https://issues.apache.org/jira/browse/HDFS-4352
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
> Attachments: 01b.patch, 01.patch
>
>
> Encapsulate the arguments to BlockReaderFactory in a class to avoid having to 
> pass around 10+ arguments to a few different functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4352) Encapsulate arguments to BlockReaderFactory in a class

2013-01-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547573#comment-13547573
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4352:
--

> I disagree that this "isn't a good idea". Functions with 10+ unnamed 
> parameters are bad style in my book (and in other books, like "Effective 
> Java", eg http://www.informit.com/articles/article.aspx?p=1216151&seqNum=2)

I don't know how you interpret the article.  The title clearly states 
"Item 2: Consider a builder when faced with many constructor parameters".  Are 
you talking about *constructors* here?

> Encapsulate arguments to BlockReaderFactory in a class
> --
>
> Key: HDFS-4352
> URL: https://issues.apache.org/jira/browse/HDFS-4352
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
> Attachments: 01b.patch, 01.patch
>
>
> Encapsulate the arguments to BlockReaderFactory in a class to avoid having to 
> pass around 10+ arguments to a few different functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4352) Encapsulate arguments to BlockReaderFactory in a class

2013-01-08 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547578#comment-13547578
 ] 

Todd Lipcon commented on HDFS-4352:
---

Yes, newBlockReader is essentially a wrapper for constructors (aka a "static 
factory method").

If you think we should instead have a Builder class, and a {{build}} method 
which returns BlockReader, then we'd be following the pattern more closely. I'd 
be fine with that as well - only a small difference from what Colin proposed 
here. Either is a lot better than the 8- or 12-argument methods we've got.
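
A minimal sketch of the builder variant described here, under the assumption
that {{build()}} returns the product directly (all names are illustrative,
taken from neither patch):

{code:java}
// Minimal builder sketch -- all names are assumptions, not either patch.
interface BlockReader {}

class BlockReaderBuilder {
  private String file;
  private long startOffset;
  private boolean verifyChecksum = true;

  BlockReaderBuilder file(String f) { this.file = f; return this; }
  BlockReaderBuilder startOffset(long off) { this.startOffset = off; return this; }
  BlockReaderBuilder verifyChecksum(boolean v) { this.verifyChecksum = v; return this; }

  BlockReader build() {
    // Validate required fields once, in one place, then construct the reader.
    if (file == null) {
      throw new IllegalStateException("file is required");
    }
    return new BlockReader() {};  // stand-in for the real construction logic
  }
}
{code}

The only difference from a plain parameter object is that construction happens
inside {{build()}}, which is also where required-field validation naturally lives.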

> Encapsulate arguments to BlockReaderFactory in a class
> --
>
> Key: HDFS-4352
> URL: https://issues.apache.org/jira/browse/HDFS-4352
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
> Attachments: 01b.patch, 01.patch
>
>
> Encapsulate the arguments to BlockReaderFactory in a class to avoid having to 
> pass around 10+ arguments to a few different functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4352) Encapsulate arguments to BlockReaderFactory in a class

2013-01-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547579#comment-13547579
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4352:
--

> By the way, I find it very rude to close someone else's ticket as "Invalid" 
> or "Won't Fix" without waiting for the discussion to end. ...

I am sorry that you feel that way.  Colin and I have been discussing this 
issue for the past few days, and Suresh joined the discussion at the end.  
Is there anything more to add?

> Encapsulate arguments to BlockReaderFactory in a class
> --
>
> Key: HDFS-4352
> URL: https://issues.apache.org/jira/browse/HDFS-4352
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
> Attachments: 01b.patch, 01.patch
>
>
> Encapsulate the arguments to BlockReaderFactory in a class to avoid having to 
> pass around 10+ arguments to a few different functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4352) Encapsulate arguments to BlockReaderFactory in a class

2013-01-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547581#comment-13547581
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4352:
--

> ... a wrapper for constructors (aka a "static factory method").

A wrapper for constructors is a wrapper but not a constructor.

> Encapsulate arguments to BlockReaderFactory in a class
> --
>
> Key: HDFS-4352
> URL: https://issues.apache.org/jira/browse/HDFS-4352
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
> Attachments: 01b.patch, 01.patch
>
>
> Encapsulate the arguments to BlockReaderFactory in a class to avoid having to 
> pass around 10+ arguments to a few different functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4367) GetDataEncryptionKeyResponseProto does not handle null response

2013-01-08 Thread Suresh Srinivas (JIRA)
Suresh Srinivas created HDFS-4367:
-

 Summary: GetDataEncryptionKeyResponseProto  does not handle null 
response
 Key: HDFS-4367
 URL: https://issues.apache.org/jira/browse/HDFS-4367
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.0.2-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Blocker


GetDataEncryptionKeyResponseProto member dataEncryptionKey should be optional 
to handle null response.
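
A sketch of the shape such a fix usually takes (assumed here; the attached
patch is authoritative): declare the field {{optional}} in the .proto
definition and have the client-side translator map an absent field to null.
The {{rpcProxy}} reference and the request constant below are assumed names.

{code:java}
// Sketch only -- the attached HDFS-4367.patch is authoritative.
// In the .proto definition the field would be declared optional, e.g.:
//
//   message GetDataEncryptionKeyResponseProto {
//     optional DataEncryptionKeyProto dataEncryptionKey = 1;
//   }
//
// Protobuf then generates a hasDataEncryptionKey() method, which the
// client-side translator can use to return null instead of failing:
public DataEncryptionKey getDataEncryptionKey() throws IOException {
  GetDataEncryptionKeyResponseProto rsp =
      rpcProxy.getDataEncryptionKey(null, VOID_GET_DATA_ENCRYPTION_KEY_REQUEST);
  return rsp.hasDataEncryptionKey()
      ? PBHelper.convert(rsp.getDataEncryptionKey())
      : null;
}
{code}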

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4352) Encapsulate arguments to BlockReaderFactory in a class

2013-01-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547594#comment-13547594
 ] 

Colin Patrick McCabe commented on HDFS-4352:


Although I don't feel strongly about it, I agree with Todd: using a builder 
here would be better than having 8 to 12 argument methods.  We use the builder 
pattern in many other places, such as {{MiniDFSCluster#Builder}}, 
{{DFSTestUtil#Builder}}, and so forth.  There are even builders in the standard 
library like {{StringBuilder}}.  That pattern should be familiar to everyone.  
It would have been nice if the discussion had moved in that direction, and 
hopefully there is still a chance to consider that.
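
For reference, the {{MiniDFSCluster#Builder}} Colin mentions is typically used
in tests along these lines (standard usage; consult MiniDFSCluster for the
full option set):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

// Typical test-side usage of the builder pattern already in the codebase.
Configuration conf = new HdfsConfiguration();
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .numDataNodes(3)
    .build();
try {
  cluster.waitActive();
  // ... exercise cluster.getFileSystem() ...
} finally {
  cluster.shutdown();
}
{code}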

Nicholas, when I said we could revert this, I was referring to this JIRA, not 
to HDFS-4353.  It seems very irregular to revert HDFS-4353 with no community 
discussion.

> Encapsulate arguments to BlockReaderFactory in a class
> --
>
> Key: HDFS-4352
> URL: https://issues.apache.org/jira/browse/HDFS-4352
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
> Attachments: 01b.patch, 01.patch
>
>
> Encapsulate the arguments to BlockReaderFactory in a class to avoid having to 
> pass around 10+ arguments to a few different functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4367) GetDataEncryptionKeyResponseProto does not handle null response

2013-01-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4367:
--

Attachment: HDFS-4367.patch

> GetDataEncryptionKeyResponseProto  does not handle null response
> 
>
> Key: HDFS-4367
> URL: https://issues.apache.org/jira/browse/HDFS-4367
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.2-alpha
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
>Priority: Blocker
> Attachments: HDFS-4367.patch
>
>
> GetDataEncryptionKeyResponseProto member dataEncryptionKey should be optional 
> to handle null response.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

