[jira] [Commented] (HDFS-9065) Include commas on # of files, blocks, total filesystem objects in NN Web UI

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744950#comment-14744950
 ] 

Hudson commented on HDFS-9065:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #393 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/393/])
HDFS-9065. Include commas on # of files, blocks, total filesystem objects in NN 
Web UI. Contributed by Daniel Templeton. (wheat9: rev 
d57d21c15942275bff6bb98876637950d73f)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dfs-dust.js
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Include commas on # of files, blocks, total filesystem objects in NN Web UI
> ---
>
> Key: HDFS-9065
> URL: https://issues.apache.org/jira/browse/HDFS-9065
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9065.001.patch, HDFS-9065.002.patch, 
> HDFS-9065.003.patch, HDFS-9065.003.patch
>
>
> Include commas on the number of files, blocks, and total filesystem objects 
> in the NN Web UI (please see example below) to make the numbers easier to 
> read.
> Current format:
> 3236 files and directories, 1409 blocks = 4645 total filesystem object(s).
> Proposed format:
> 3,236 files and directories, 1,409 blocks = 4,645 total filesystem object(s).
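For readers who want the formatting rule itself: digit grouping comes for free from the JDK. A minimal, self-contained Java sketch (the committed patch actually does this in the Web UI's JavaScript templates; this example only illustrates the formatting):

{code}
import java.text.NumberFormat;
import java.util.Locale;

public class CommaFormatExample {
  public static void main(String[] args) {
    long files = 3236, blocks = 1409;
    // The integer instance for en-US groups digits with commas: 3236 -> "3,236".
    NumberFormat fmt = NumberFormat.getIntegerInstance(Locale.US);
    System.out.println(fmt.format(files) + " files and directories, "
        + fmt.format(blocks) + " blocks = "
        + fmt.format(files + blocks) + " total filesystem object(s).");
  }
}
{code}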



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9022) Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client

2015-09-14 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9022:

Attachment: HDFS-9022.003.patch

The v3 patch is rebased on the {{trunk}} branch, since the patch command could not 
apply the v2 patch during the dry run.

> Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client
> --
>
> Key: HDFS-9022
> URL: https://issues.apache.org/jira/browse/HDFS-9022
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9022.000.patch, HDFS-9022.001.patch, 
> HDFS-9022.002.patch, HDFS-9022.003.patch
>
>
> The static helper methods in {{NameNode}} are used in the {{hdfs-client}} module. 
> For example, they are used by the {{DFSClient}} and {{NameNodeProxies}} classes, 
> which are being moved to the {{hadoop-hdfs-client}} module. Meanwhile, we should 
> keep the {{NameNode}} class itself in the {{hadoop-hdfs}} module.
> This jira tracks the effort of moving the following static helper methods out 
> of {{NameNode}}, and thus out of the {{hadoop-hdfs}} module. A good place to put 
> these methods is the {{DFSUtilClient}} class:
> {code}
> public static InetSocketAddress getAddress(String address);
> public static InetSocketAddress getAddress(Configuration conf);
> public static InetSocketAddress getAddress(URI filesystemURI);
> public static URI getUri(InetSocketAddress namenode);
> {code}
> Be careful not to introduce new checkstyle warnings.
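As a rough illustration of what these helpers do once moved, here is a hedged, standalone sketch (the class name and the inlined default port are illustrative; the real code lives in {{DFSUtilClient}} and reads its default from the client config keys):

{code}
import java.net.InetSocketAddress;
import java.net.URI;

public final class NameNodeAddressSketch {
  // Illustrative constant; HDFS reads DFS_NAMENODE_RPC_PORT_DEFAULT (8020).
  private static final int DEFAULT_RPC_PORT = 8020;

  /** Parse "host" or "host:port" into a socket address. */
  public static InetSocketAddress getAddress(String address) {
    // Prefix a scheme so URI parsing splits host and port reliably.
    URI uri = URI.create("hdfs://" + address);
    int port = uri.getPort() == -1 ? DEFAULT_RPC_PORT : uri.getPort();
    return new InetSocketAddress(uri.getHost(), port);
  }

  /** Build the canonical hdfs:// URI for a NameNode address. */
  public static URI getUri(InetSocketAddress namenode) {
    return URI.create("hdfs://" + namenode.getHostName()
        + ":" + namenode.getPort());
  }
}
{code}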



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8594) Erasure Coding: cache ErasureCodingZone

2015-09-14 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su resolved HDFS-8594.
-
Resolution: Not A Problem

> Erasure Coding: cache ErasureCodingZone
> ---
>
> Key: HDFS-8594
> URL: https://issues.apache.org/jira/browse/HDFS-8594
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
>Assignee: Walter Su
>
> scenario 1:
> We have 100M files in an EC zone. Every time we open a file, we need to get its 
> ECSchema (which first requires getting the EC zone), so EC zone lookups are 
> frequent.
> scenario 2:
> We have an EC zone "/d1" and a file at "/d1/d2/d3/.../dN". We have to search 
> the xAttrs of dN, dN-1, ..., d3, d2, d1 until we find the EC zone in d1's 
> xAttr.
> It would be better to cache EC zones, as EncryptionZoneManager#encryptionZones does.
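A hedged sketch of the cache shape being proposed, keyed by the zone root's inode id (all names here are hypothetical; the analogy is {{EncryptionZoneManager#encryptionZones}}):

{code}
import java.util.Map;
import java.util.TreeMap;

public class ErasureCodingZoneCacheSketch {
  /** Zone root inode id -> EC schema name, mirroring the root's xAttr. */
  private final Map<Long, String> ecZones = new TreeMap<>();

  /** Record a zone when its xAttr is set on a directory. */
  public synchronized void addZone(long rootInodeId, String schemaName) {
    ecZones.put(rootInodeId, schemaName);
  }

  /**
   * Resolve the zone for a path given its ancestor inode ids (root first).
   * Walking the cached map replaces re-reading xAttrs on every file open.
   */
  public synchronized String getZone(long[] ancestorInodeIds) {
    for (int i = ancestorInodeIds.length - 1; i >= 0; i--) {
      String schema = ecZones.get(ancestorInodeIds[i]);
      if (schema != null) {
        return schema;
      }
    }
    return null; // path is not inside any EC zone
  }
}
{code}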



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8799) Erasure Coding: add tests for namenode processing corrupt striped blocks

2015-09-14 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8799:

Attachment: HDFS-8799-HDFS-7285.02.patch

Thanks [~tasanuma0829], [~zhz]. Updated the patch.

> Erasure Coding: add tests for namenode processing corrupt striped blocks
> 
>
> Key: HDFS-8799
> URL: https://issues.apache.org/jira/browse/HDFS-8799
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Attachments: HDFS-8799-HDFS-7285.01.patch, 
> HDFS-8799-HDFS-7285.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9080) update htrace version to 4.0

2015-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744879#comment-14744879
 ] 

Hadoop QA commented on HDFS-9080:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  28m  6s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 10 new or modified test files. |
| {color:green}+1{color} | javac |  10m  3s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  13m 20s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 27s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m 55s | Site still builds. |
| {color:red}-1{color} | checkstyle |   3m  4s | The applied patch generated  
10 new checkstyle issues (total was 681, now 680). |
| {color:red}-1{color} | whitespace |   0m 57s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   2m  8s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 44s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   7m 47s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  23m 23s | Tests failed in 
hadoop-common. |
| {color:red}-1{color} | hdfs tests |  32m 11s | Tests failed in hadoop-hdfs. |
| {color:red}-1{color} | hdfs tests |   0m 22s | Tests failed in 
hadoop-hdfs-client. |
| | | 127m 13s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.fs.viewfs.TestFcCreateMkdirLocalFs |
|   | hadoop.fs.viewfs.TestViewFsLocalFs |
|   | hadoop.fs.viewfs.TestFcPermissionsLocalFs |
|   | hadoop.fs.TestLocalFsFCStatistics |
|   | hadoop.fs.TestS3_LocalFileContextURI |
|   | hadoop.fs.TestLocal_S3FileContextURI |
|   | hadoop.fs.TestSymlinkLocalFSFileContext |
|   | hadoop.fs.TestFcLocalFsPermission |
|   | hadoop.fs.TestLocalFSFileContextMainOperations |
|   | hadoop.fs.TestLocalFSFileContextCreateMkdir |
|   | hadoop.fs.TestFileContextDeleteOnExit |
|   | hadoop.fs.viewfs.TestViewFsWithAuthorityLocalFs |
|   | hadoop.fs.viewfs.TestChRootedFs |
|   | hadoop.fs.TestFsShell |
|   | hadoop.fs.viewfs.TestFcMainOperationsLocalFs |
|   | hadoop.fs.TestFcLocalFsUtil |
|   | hadoop.hdfs.TestFileStatus |
|   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
|   | hadoop.hdfs.server.namenode.TestINodeFile |
|   | hadoop.fs.contract.hdfs.TestHDFSContractOpen |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestDatanodeRestart |
|   | hadoop.hdfs.TestFileCreationDelete |
|   | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
|   | hadoop.hdfs.TestEncryptionZonesWithHA |
|   | hadoop.fs.contract.hdfs.TestHDFSContractMkdir |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.server.namenode.TestNameNodeXAttr |
|   | hadoop.hdfs.qjournal.TestMiniJournalCluster |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages |
|   | hadoop.hdfs.server.datanode.TestCachingStrategy |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.cli.TestXAttrCLI |
|   | hadoop.hdfs.server.namenode.TestDeleteRace |
|   | hadoop.hdfs.server.namenode.TestParallelImageWrite |
|   | hadoop.hdfs.server.namenode.TestNameNodeRespectsBindHostKeys |
|   | hadoop.hdfs.server.namenode.TestNNStorageRetentionFunctional |
|   | hadoop.hdfs.server.namenode.TestSaveNamespace |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.TestDFSRename |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.server.namenode.ha.TestHarFileSystemWithHA |
|   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.server.datanode.TestDeleteBlockPool |
|   | hadoop.hdfs.TestRemoteBlockReader2 |
|   | hadoop.hdfs.server.namenode.TestStorageRestore |
|   | hadoop.hdfs.server.namenode.TestFileLimit |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.fs.contract.hdfs.TestHDFSContractSetTimes |
|   | hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots |
|   | hadoop.hdfs.qjournal.TestNNWithQJM |
|

[jira] [Commented] (HDFS-9065) Include commas on # of files, blocks, total filesystem objects in NN Web UI

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744877#comment-14744877
 ] 

Hudson commented on HDFS-9065:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #387 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/387/])
HDFS-9065. Include commas on # of files, blocks, total filesystem objects in NN 
Web UI. Contributed by Daniel Templeton. (wheat9: rev 
d57d21c15942275bff6bb98876637950d73f)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dfs-dust.js
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js


> Include commas on # of files, blocks, total filesystem objects in NN Web UI
> ---
>
> Key: HDFS-9065
> URL: https://issues.apache.org/jira/browse/HDFS-9065
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9065.001.patch, HDFS-9065.002.patch, 
> HDFS-9065.003.patch, HDFS-9065.003.patch
>
>
> Include commas on the number of files, blocks, and total filesystem objects 
> in the NN Web UI (please see example below) to make the numbers easier to 
> read.
> Current format:
> 3236 files and directories, 1409 blocks = 4645 total filesystem object(s).
> Proposed format:
> 3,236 files and directories, 1,409 blocks = 4,645 total filesystem object(s).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744875#comment-14744875
 ] 

Hudson commented on HDFS-9010:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2334 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2334/])
HDFS-9010. Replace NameNode.DEFAULT_PORT with 
HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key. Contributed by 
Mingliang Liu. (wheat9: rev 76957a485b526468498f93e443544131a88b5684)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientFailover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithHANameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDefaultNameNodePort.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/NNHAServiceTarget.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAppendSnapshotTruncate.java


> Replace NameNode.DEFAULT_PORT with 
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
> 
>
> Key: HDFS-9010
> URL: https://issues.apache.org/jira/browse/HDFS-9010
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, 
> HDFS-9010.002.patch, HDFS-9010.003.patch, HDFS-9010.004.patch, 
> HDFS-9010.005.patch
>
>
> The {{NameNode.DEFAULT_PORT}} static attribute is stale now that we use the 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value.
> This jira tracks the effort of replacing {{NameNode.DEFAULT_PORT}} with 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark 
> {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it entirely.
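The deprecation step would look roughly like the following sketch (the config key and its package are real; the surrounding class body is elided):

{code}
import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;

public class NameNode {
  /**
   * @deprecated Use HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT
   *             directly; this alias remains only until callers migrate.
   */
  @Deprecated
  public static final int DEFAULT_PORT =
      HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT;
}
{code}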



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8899) Erasure Coding: use threadpool for EC recovery tasks

2015-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744873#comment-14744873
 ] 

Hadoop QA commented on HDFS-8899:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  15m 55s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 52s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  8s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 16s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 32s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   2m 35s | The patch appears to introduce 4 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 11s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  68m  8s | Tests failed in hadoop-hdfs. |
| | | 110m 53s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.server.namenode.TestSecondaryNameNodeUpgrade |
|   | hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality |
|   | hadoop.hdfs.server.namenode.TestRecoverStripedBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.server.blockmanagement.TestBlockInfoStriped |
|   | hadoop.hdfs.server.datanode.TestStorageReport |
|   | hadoop.hdfs.server.namenode.TestFSPermissionChecker |
|   | hadoop.hdfs.server.namenode.TestGenericJournalConf |
|   | hadoop.cli.TestCryptoAdminCLI |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.TestSetrepIncreasing |
|   | hadoop.hdfs.server.namenode.TestSecondaryWebUi |
|   | hadoop.hdfs.TestFetchImage |
|   | hadoop.hdfs.server.namenode.TestGetBlockLocations |
|   | hadoop.hdfs.server.datanode.TestReadOnlySharedStorage |
|   | hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots |
|   | hadoop.hdfs.server.namenode.TestMetadataVersionOutput |
|   | 
hadoop.hdfs.server.namenode.snapshot.TestSnapshotNameWithInvalidCharacters |
|   | hadoop.hdfs.server.namenode.TestNameNodeResourceChecker |
|   | hadoop.hdfs.TestMiniDFSCluster |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
|   | hadoop.hdfs.TestReplication |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.TestDFSOutputStream |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica |
|   | hadoop.hdfs.server.namenode.TestQuotaWithStripedBlocks |
|   | hadoop.hdfs.TestClientBlockVerification |
|   | hadoop.hdfs.TestDFSRollback |
|   | hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages |
|   | hadoop.hdfs.server.namenode.snapshot.TestXAttrWithSnapshot |
|   | hadoop.hdfs.server.namenode.TestCreateEditsLog |
|   | hadoop.cli.TestErasureCodingCLI |
|   | hadoop.tools.TestJMXGet |
|   | hadoop.hdfs.server.namenode.ha.TestGetGroupsWithHA |
|   | hadoop.hdfs.TestParallelRead |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlockQueues |
|   | hadoop.hdfs.server.namenode.TestAddStripedBlocks |
|   | hadoop.hdfs.TestHDFSTrash |
|   | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.hdfs.server.blockmanagement.TestDatanodeManager |
|   | hadoop.hdfs.server.blockmanagement.TestBlockReportRateLimiting |
|   | hadoop.hdfs.server.namenode.TestBlockUnderConstruction |
|   | hadoop.hdfs.server.datanode.TestIncrementalBlockReports |
|   | hadoop.hdfs.TestWriteRead |
|   | hadoop.hdfs.TestFileAppend4 |
|   | hadoop.hdfs.TestSnapshotCommands |
|   | hadoop.hdfs.TestBlocksScheduledCounter |
|   | hadoop.hdfs.TestDFSUtil |
|   | hadoop.hdfs.TestSafeModeWithStripedFile |
|   | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
|   | hadoop.hdfs.crypto.TestHdfsCryptoStreams |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.blockmanagement.TestHostFileManager |
|   | hadoop.hdfs.TestFSOutputSummer |
|   | hadoop.hdfs.server.namenode.TestDeleteRace |
|   | hadoop.hdfs.server.namenode.TestAddBlockRetry |
|   | hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens |
|   | hadoop.hdfs.server.n

[jira] [Commented] (HDFS-9022) Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client

2015-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744869#comment-14744869
 ] 

Hadoop QA commented on HDFS-9022:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  1s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755581/HDFS-9022.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / d57 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12443/console |


This message was automatically generated.

> Move NameNode.getAddress() and NameNode.getUri() to hadoop-hdfs-client
> --
>
> Key: HDFS-9022
> URL: https://issues.apache.org/jira/browse/HDFS-9022
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9022.000.patch, HDFS-9022.001.patch, 
> HDFS-9022.002.patch
>
>
> The static helper methods in {{NameNode}} are used in the {{hdfs-client}} module. 
> For example, they are used by the {{DFSClient}} and {{NameNodeProxies}} classes, 
> which are being moved to the {{hadoop-hdfs-client}} module. Meanwhile, we should 
> keep the {{NameNode}} class itself in the {{hadoop-hdfs}} module.
> This jira tracks the effort of moving the following static helper methods out 
> of {{NameNode}}, and thus out of the {{hadoop-hdfs}} module. A good place to put 
> these methods is the {{DFSUtilClient}} class:
> {code}
> public static InetSocketAddress getAddress(String address);
> public static InetSocketAddress getAddress(Configuration conf);
> public static InetSocketAddress getAddress(URI filesystemURI);
> public static URI getUri(InetSocketAddress namenode);
> {code}
> Be careful not to introduce new checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-9079) Erasure coding: preallocate multiple generation stamps when creating striped blocks

2015-09-14 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang reassigned HDFS-9079:
---

Assignee: Zhe Zhang

> Erasure coding: preallocate multiple generation stamps when creating striped 
> blocks
> ---
>
> Key: HDFS-9079
> URL: https://issues.apache.org/jira/browse/HDFS-9079
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>
> A non-striped DataStreamer goes through the following steps in error handling:
> {code}
> 1) Finds error => 2) Asks NN for new GS => 3) Gets new GS from NN => 4) 
> Applies new GS to DN (createBlockOutputStream) => 5) Ack from DN => 6) 
> Updates block on NN
> {code}
> To simplify the above, we can preallocate GSes when the NN creates a new striped 
> block group ({{FSN#createNewBlock}}). For each new striped block group we can 
> reserve {{NUM_PARITY_BLOCKS}} GSes. Steps 1~3 in the above sequence can then 
> be skipped. If more than {{NUM_PARITY_BLOCKS}} errors have occurred, we 
> shouldn't try to recover further anyway.
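A minimal sketch of the reservation idea, assuming the NN hands the writer a contiguous GS range at block-group creation (names are hypothetical):

{code}
public class PreallocatedGenStamps {
  private final long firstGs;   // first reserved GS, granted by the NN
  private final int reserved;   // budget, e.g. NUM_PARITY_BLOCKS
  private int used = 0;

  public PreallocatedGenStamps(long firstGs, int reserved) {
    this.firstGs = firstGs;
    this.reserved = reserved;
  }

  /**
   * Next reserved GS without an NN round trip, or -1 once the budget is
   * exhausted, i.e. more failures than the parity blocks can cover.
   */
  public synchronized long nextGenStamp() {
    return used < reserved ? firstGs + used++ : -1;
  }
}
{code}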



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9065) Include commas on # of files, blocks, total filesystem objects in NN Web UI

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744859#comment-14744859
 ] 

Hudson commented on HDFS-9065:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2311 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2311/])
HDFS-9065. Include commas on # of files, blocks, total filesystem objects in NN 
Web UI. Contributed by Daniel Templeton. (wheat9: rev 
d57d21c15942275bff6bb98876637950d73f)
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dfs-dust.js
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html


> Include commas on # of files, blocks, total filesystem objects in NN Web UI
> ---
>
> Key: HDFS-9065
> URL: https://issues.apache.org/jira/browse/HDFS-9065
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9065.001.patch, HDFS-9065.002.patch, 
> HDFS-9065.003.patch, HDFS-9065.003.patch
>
>
> Include commas on the number of files, blocks, and total filesystem objects 
> in the NN Web UI (please see example below) to make the numbers easier to 
> read.
> Current format:
> 3236 files and directories, 1409 blocks = 4645 total filesystem object(s).
> Proposed format:
> 3,236 files and directories, 1,409 blocks = 4,645 total filesystem object(s).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744860#comment-14744860
 ] 

Hudson commented on HDFS-9010:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2311 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2311/])
HDFS-9010. Replace NameNode.DEFAULT_PORT with 
HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key. Contributed by 
Mingliang Liu. (wheat9: rev 76957a485b526468498f93e443544131a88b5684)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDefaultNameNodePort.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAppendSnapshotTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientFailover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/NNHAServiceTarget.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithHANameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java


> Replace NameNode.DEFAULT_PORT with 
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
> 
>
> Key: HDFS-9010
> URL: https://issues.apache.org/jira/browse/HDFS-9010
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, 
> HDFS-9010.002.patch, HDFS-9010.003.patch, HDFS-9010.004.patch, 
> HDFS-9010.005.patch
>
>
> The {{NameNode.DEFAULT_PORT}} static attribute is stale now that we use the 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value.
> This jira tracks the effort of replacing {{NameNode.DEFAULT_PORT}} with 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark 
> {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it entirely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9065) Include commas on # of files, blocks, total filesystem objects in NN Web UI

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744855#comment-14744855
 ] 

Hudson commented on HDFS-9065:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8452 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8452/])
HDFS-9065. Include commas on # of files, blocks, total filesystem objects in NN 
Web UI. Contributed by Daniel Templeton. (wheat9: rev 
d57d21c15942275bff6bb98876637950d73f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dfs-dust.js
* hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html


> Include commas on # of files, blocks, total filesystem objects in NN Web UI
> ---
>
> Key: HDFS-9065
> URL: https://issues.apache.org/jira/browse/HDFS-9065
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9065.001.patch, HDFS-9065.002.patch, 
> HDFS-9065.003.patch, HDFS-9065.003.patch
>
>
> Include commas on the number of files, blocks, and total filesystem objects 
> in the NN Web UI (please see example below) to make the numbers easier to 
> read.
> Current format:
> 3236 files and directories, 1409 blocks = 4645 total filesystem object(s).
> Proposed format:
> 3,236 files and directories, 1,409 blocks = 4,645 total filesystem object(s).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744854#comment-14744854
 ] 

Hadoop QA commented on HDFS-9040:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  16m 31s | Findbugs (version ) appears to 
be broken on HDFS-7285. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 8 new or modified test files. |
| {color:green}+1{color} | javac |   7m 58s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 27s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  0s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m 41s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 40s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   4m 49s | The patch appears to introduce 8 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 19s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 198m  0s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 43s | Tests passed in 
hadoop-hdfs-client. |
| | | 246m  6s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| FindBugs | module:hadoop-hdfs-client |
| Failed unit tests | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.web.TestWebHDFSOAuth2 |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.TestFileAppend |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.TestFileAppendRestart |
|   | hadoop.hdfs.TestFileAppend3 |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755866/HDFS-9040-HDFS-7285.002.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / ce02b55 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12439/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12439/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12439/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-client.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12439/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12439/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12439/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12439/console |


This message was automatically generated.

> Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests 
> to Coordinator)
> ---
>
> Key: HDFS-9040
> URL: https://issues.apache.org/jira/browse/HDFS-9040
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
> Attachments: HDFS-9040-HDFS-7285.002.patch, HDFS-9040.00.patch, 
> HDFS-9040.001.wip.patch, HDFS-9040.02.bgstreamer.patch
>
>
> The general idea is to simplify error handling logic.
> Proposal 1:
> A {{BlockGroupDataStreamer}} communicates with the NN to allocate/update blocks, 
> while {{StripedDataStreamer}}s only have to stream blocks to DNs.
> Proposal 2:
> See below the 
> [comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
>  from [~jingzhao].
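An interface-level sketch of Proposal 1 (names are hypothetical; {{LocatedBlock}} and the {{updateBlockForPipeline}} verb come from the existing client protocol): the coordinator owns every NN RPC, so individual streamers never talk to the NN.

{code}
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;

/** Centralizes all NameNode RPCs for one striped block group. */
interface BlockGroupCoordinator {
  /** Ask the NN for the next block group to write. */
  LocatedBlock allocateBlockGroup() throws IOException;

  /** Report a failed streamer and fetch the updated block and GS. */
  LocatedBlock updateBlockForPipeline(int failedStreamerIndex) throws IOException;
}
{code}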



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8874) Add DN metrics for balancer and other block movement scenarios

2015-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744812#comment-14744812
 ] 

Hadoop QA commented on HDFS-8874:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 50s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 50s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 15s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 21s | The applied patch generated  
12 new checkstyle issues (total was 158, now 165). |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 26s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 33s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 12s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 110m 33s | Tests failed in hadoop-hdfs. |
| | | 156m  1s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.server.datanode.TestStorageReport |
|   | hadoop.hdfs.server.datanode.TestDatanodeRegister |
|   | hadoop.cli.TestCryptoAdminCLI |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.TestSetrepIncreasing |
|   | hadoop.hdfs.TestFetchImage |
|   | hadoop.hdfs.TestMiniDFSCluster |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
|   | hadoop.hdfs.TestReplication |
|   | hadoop.hdfs.TestDFSOutputStream |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica |
|   | hadoop.hdfs.TestClientBlockVerification |
|   | hadoop.hdfs.TestDFSRollback |
|   | hadoop.hdfs.server.datanode.TestDataNodeFSDataSetSink |
|   | hadoop.hdfs.TestParallelRead |
|   | hadoop.hdfs.TestHDFSTrash |
|   | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.hdfs.server.blockmanagement.TestBlockReportRateLimiting |
|   | hadoop.hdfs.server.datanode.TestIncrementalBlockReports |
|   | hadoop.hdfs.TestWriteRead |
|   | hadoop.hdfs.TestFileAppend4 |
|   | hadoop.hdfs.TestSnapshotCommands |
|   | hadoop.hdfs.TestBlocksScheduledCounter |
|   | hadoop.hdfs.TestDFSUtil |
|   | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
|   | hadoop.hdfs.crypto.TestHdfsCryptoStreams |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.datanode.TestSimulatedFSDataset |
|   | hadoop.hdfs.TestFSOutputSummer |
|   | hadoop.hdfs.server.datanode.TestDnRespectsBlockReportSplitThreshold |
|   | hadoop.hdfs.TestReservedRawPaths |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestDFSStorageStateRecovery |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeTransferSocketSize |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.TestBlockReaderLocal |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy |
|   | hadoop.hdfs.TestDFSPermission |
|   | hadoop.hdfs.security.TestDelegationToken |
|   | hadoop.hdfs.TestSafeMode |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.TestDatanodeRegistration |
|   | hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.TestDatanodeProtocolRetryPolicy |
|   | hadoop.hdfs.TestFileAppendRestart |
|   | hadoop.hdfs.TestClientReportBadBlock |
|   | hadoop.cli.TestDeleteCLI |
|   | hadoop.hdfs.TestClose |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.TestSeekBug |
|   | hadoop.hdfs.TestFileAppend2 |
|   | hadoop.hdfs.server.blockmanagement.TestAvailableSpaceBlockPlacementPolicy 
|
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.TestFSInputChecker |
|   | hadoop.tracing.TestTraceAdmin |
|   | hadoop.hdfs.server.blockmanagement.Test

[jira] [Commented] (HDFS-7986) Allow files / directories to be deleted from the NameNode UI

2015-09-14 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744797#comment-14744797
 ] 

Haohui Mai commented on HDFS-7986:
--

Thanks for the work.

{code}
+  
+
+  
+
+  ×
+  Delete
+
+
+  
+
+  
+
+
+
+Cancel
+Delete
+
+
+  
+
+  
{code}

Indentation is off.

{code}
+{pathSuffix}
+
{code}

When adding a new column, there should be an empty {{td}} in the {{thead}} 
section as well.

{code}
+  
-{pathSuffix}
+{pathSuffix}
+
+$('.glyphicon-trash').click(function() {
+  var inode_name = $(this).closest('tr').attr('inode-path');
+  var absolute_file_path = append_path(current_directory, inode_name);
+  deletePath(inode_name, absolute_file_path);
+})
{code}

Instead of having the styles embedded in the code, a cleaner approach is to 
give the {{tr}} element a class and put CSS on its children. For example:

{code}
 

.explorer-entry .explorer-browse-links { cursor: pointer; }
.explorer-entry .glyphicon-trash { cursor: pointer; }
$('.explorer-entry .glyphicon-trash').click(...
{code}

Nit:

{code}
deletePath()
{code}

It might make sense to use underscore naming rather than camel case, to keep the 
style consistent with other parts of the code.

{code}
+
+$('#delete-modal').modal();
+
+  }
+
{code}

It'll look better to remove the empty lines.

> Allow files / directories to be deleted from the NameNode UI
> 
>
> Key: HDFS-7986
> URL: https://issues.apache.org/jira/browse/HDFS-7986
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-7986.01.patch
>
>
> Users should be able to delete files or directories using the Namenode UI.
> I'm thinking there ought to be a confirmation dialog. For directories 
> recursive should be set to true. Initially there should be no option to 
> skipTrash.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9076) Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()

2015-09-14 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744784#comment-14744784
 ] 

Surendra Singh Lilhore commented on HDFS-9076:
--

The failed test cases are related to HDFS-9067.

Please review.

> Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()
> -
>
> Key: HDFS-9076
> URL: https://issues.apache.org/jira/browse/HDFS-9076
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9076.patch
>
>
> {code}
>try {
>   if (abort) {
> out.abort();
>   } else {
> out.close();
>   }
> } catch(IOException ie) {
>   LOG.error("Failed to " + (abort? "abort": "close") +
>   " inode " + inodeId, ie);
> }
> {code}
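A hedged sketch of the requested change, reusing the snippet above ({{out}}, {{LOG}}, and {{inodeId}} come from that context; the {{getSrc()}} accessor for the file path is an assumption, not confirmed API):

{code}
try {
  if (abort) {
    out.abort();
  } else {
    out.close();
  }
} catch (IOException ie) {
  // Log the full path, which operators can act on, alongside the inode id.
  LOG.error("Failed to " + (abort ? "abort" : "close")
      + " file: " + out.getSrc() + " with inode: " + inodeId, ie);
}
{code}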



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9067) o.a.h.hdfs.server.datanode.fsdataset.impl.TestLazyWriter is failing in trunk

2015-09-14 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744781#comment-14744781
 ] 

Surendra Singh Lilhore commented on HDFS-9067:
--

{code}
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement
{code}
Separate JIRAs already exist for these two tests: HDFS-9074 and HDFS-9073.

The other test cases pass locally.

> o.a.h.hdfs.server.datanode.fsdataset.impl.TestLazyWriter is failing in trunk
> 
>
> Key: HDFS-9067
> URL: https://issues.apache.org/jira/browse/HDFS-9067
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Surendra Singh Lilhore
>Priority: Critical
> Attachments: HDFS-9067-001.patch, HDFS-9067-002.patch, HDFS-9067.patch
>
>
> The test TestLazyWriter is consistently failing in trunk. For example:
> https://builds.apache.org/job/PreCommit-HDFS-Build/12407/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744773#comment-14744773
 ] 

Hudson commented on HDFS-9010:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1125 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1125/])
HDFS-9010. Replace NameNode.DEFAULT_PORT with 
HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key. Contributed by 
Mingliang Liu. (wheat9: rev 76957a485b526468498f93e443544131a88b5684)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAppendSnapshotTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientFailover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDefaultNameNodePort.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithHANameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/NNHAServiceTarget.java


> Replace NameNode.DEFAULT_PORT with 
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
> 
>
> Key: HDFS-9010
> URL: https://issues.apache.org/jira/browse/HDFS-9010
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, 
> HDFS-9010.002.patch, HDFS-9010.003.patch, HDFS-9010.004.patch, 
> HDFS-9010.005.patch
>
>
> The {{NameNode.DEFAULT_PORT}} static attribute is stale now that we use the 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value.
> This jira tracks the effort of replacing {{NameNode.DEFAULT_PORT}} with 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark 
> {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it entirely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8996) Consolidate validateLog and scanLog in FJM#EditLogFile

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744771#comment-14744771
 ] 

Hudson commented on HDFS-8996:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #370 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/370/])
HDFS-8996. Consolidate validateLog and scanLog in FJM#EditLogFile (Zhe Zhang 
via Colin P. McCabe) (cmccabe: rev 53bad4eb008ec553dcdbe01e7ae975dcecde6590)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckPointForSecurityTokens.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java


> Consolidate validateLog and scanLog in FJM#EditLogFile
> --
>
> Key: HDFS-8996
> URL: https://issues.apache.org/jira/browse/HDFS-8996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-8996.00.patch, HDFS-8996.01.patch
>
>
> After HDFS-8965 is committed, {{scanEditLog}} will be identical to 
> {{validateEditLog}} in {{EditLogInputStream}} and {{FSEditlogLoader}}. This 
> is a placeholder for removing the redundant {{scanEditLog}} code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8829) Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol sockets and allow configuring auto-tuning

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744769#comment-14744769
 ] 

Hudson commented on HDFS-8829:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #370 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/370/])
HDFS-8829. Make SO_RCVBUF and SO_SNDBUF size configurable for 
DataTransferProtocol sockets and allow configuring auto-tuning (He Tianyi via 
Colin P. McCabe) (cmccabe: rev 7b5cf5352efedc7d7ebdbb6b58f1b9a688812e75)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeTransferSocketSize.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/TcpPeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DomainPeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/PeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java


> Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol 
> sockets and allow configuring auto-tuning
> -
>
> Key: HDFS-8829
> URL: https://issues.apache.org/jira/browse/HDFS-8829
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.3.0, 2.6.0
>Reporter: He Tianyi
>Assignee: He Tianyi
> Fix For: 2.8.0
>
> Attachments: HDFS-8829.0001.patch, HDFS-8829.0002.patch, 
> HDFS-8829.0003.patch, HDFS-8829.0004.patch, HDFS-8829.0005.patch, 
> HDFS-8829.0006.patch
>
>
> {code:java}
>   private void initDataXceiver(Configuration conf) throws IOException {
> // find free port or use privileged port provided
> TcpPeerServer tcpPeerServer;
> if (secureResources != null) {
>   tcpPeerServer = new TcpPeerServer(secureResources);
> } else {
>   tcpPeerServer = new TcpPeerServer(dnConf.socketWriteTimeout,
>   DataNode.getStreamingAddr(conf));
> }
> 
> tcpPeerServer.setReceiveBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
> {code}
> The last line sets SO_RCVBUF explicitly, thus disabling TCP auto-tuning on 
> some systems.
> Shall we make this behavior configurable?
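One hedged way to make it configurable, continuing the snippet above: treat a non-positive configured size as "do not set the buffer", which leaves kernel auto-tuning in effect (the config key name below is illustrative of the pattern, not the committed one):

{code}
// Illustrative key; the committed patch defines its own in DFSConfigKeys.
int recvBufferSize = conf.getInt(
    "dfs.datanode.transfer.socket.recv.buffer.size",
    HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
if (recvBufferSize > 0) {
  tcpPeerServer.setReceiveBufferSize(recvBufferSize);
}
// recvBufferSize <= 0: skip the call so the kernel keeps auto-tuning SO_RCVBUF.
{code}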



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9065) Include commas on # of files, blocks, total filesystem objects in NN Web UI

2015-09-14 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9065:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~templedf] for the 
contribution.

> Include commas on # of files, blocks, total filesystem objects in NN Web UI
> ---
>
> Key: HDFS-9065
> URL: https://issues.apache.org/jira/browse/HDFS-9065
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9065.001.patch, HDFS-9065.002.patch, 
> HDFS-9065.003.patch, HDFS-9065.003.patch
>
>
> Include commas on the number of files, blocks, and total filesystem objects 
> in the NN Web UI (please see example below) to make the numbers easier to 
> read.
> Current format:
> 3236 files and directories, 1409 blocks = 4645 total filesystem object(s).
> Proposed format:
> 3,236 files and directories, 1,409 blocks = 4,645 total filesystem object(s).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744765#comment-14744765
 ] 

Hudson commented on HDFS-9010:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #386 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/386/])
HDFS-9010. Replace NameNode.DEFAULT_PORT with 
HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key. Contributed by 
Mingliang Liu. (wheat9: rev 76957a485b526468498f93e443544131a88b5684)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientFailover.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithHANameNodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/NNHAServiceTarget.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDefaultNameNodePort.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAppendSnapshotTruncate.java


> Replace NameNode.DEFAULT_PORT with 
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
> 
>
> Key: HDFS-9010
> URL: https://issues.apache.org/jira/browse/HDFS-9010
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, 
> HDFS-9010.002.patch, HDFS-9010.003.patch, HDFS-9010.004.patch, 
> HDFS-9010.005.patch
>
>
> The {{NameNode.DEFAULT_PORT}} static attribute is stale now that we use the 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value.
> This jira tracks the effort of replacing {{NameNode.DEFAULT_PORT}} with 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark 
> {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it entirely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744761#comment-14744761
 ] 

Hudson commented on HDFS-9010:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #392 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/392/])
HDFS-9010. Replace NameNode.DEFAULT_PORT with 
HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key. Contributed by 
Mingliang Liu. (wheat9: rev 76957a485b526468498f93e443544131a88b5684)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithHANameNodes.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientFailover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDefaultNameNodePort.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/NNHAServiceTarget.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAppendSnapshotTruncate.java


> Replace NameNode.DEFAULT_PORT with 
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
> 
>
> Key: HDFS-9010
> URL: https://issues.apache.org/jira/browse/HDFS-9010
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, 
> HDFS-9010.002.patch, HDFS-9010.003.patch, HDFS-9010.004.patch, 
> HDFS-9010.005.patch
>
>
> The {{NameNode.DEFAULT_PORT}} static attribute is stale, as we use the 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value.
> This jira tracks the effort of replacing {{NameNode.DEFAULT_PORT}} with 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark 
> {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it entirely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8996) Consolidate validateLog and scanLog in FJM#EditLogFile

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744760#comment-14744760
 ] 

Hudson commented on HDFS-8996:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2310 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2310/])
HDFS-8996. Consolidate validateLog and scanLog in FJM#EditLogFile (Zhe Zhang 
via Colin P. McCabe) (cmccabe: rev 53bad4eb008ec553dcdbe01e7ae975dcecde6590)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckPointForSecurityTokens.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java


> Consolidate validateLog and scanLog in FJM#EditLogFile
> --
>
> Key: HDFS-8996
> URL: https://issues.apache.org/jira/browse/HDFS-8996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-8996.00.patch, HDFS-8996.01.patch
>
>
> After HDFS-8965 is committed, {{scanEditLog}} will be identical to 
> {{validateEditLog}} in {{EditLogInputStream}} and {{FSEditLogLoader}}. This 
> is a placeholder for us to remove the redundant {{scanEditLog}} code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8829) Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol sockets and allow configuring auto-tuning

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744758#comment-14744758
 ] 

Hudson commented on HDFS-8829:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2310 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2310/])
HDFS-8829. Make SO_RCVBUF and SO_SNDBUF size configurable for 
DataTransferProtocol sockets and allow configuring auto-tuning (He Tianyi via 
Colin P. McCabe) (cmccabe: rev 7b5cf5352efedc7d7ebdbb6b58f1b9a688812e75)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeTransferSocketSize.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/PeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/TcpPeerServer.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DomainPeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java


> Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol 
> sockets and allow configuring auto-tuning
> -
>
> Key: HDFS-8829
> URL: https://issues.apache.org/jira/browse/HDFS-8829
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.3.0, 2.6.0
>Reporter: He Tianyi
>Assignee: He Tianyi
> Fix For: 2.8.0
>
> Attachments: HDFS-8829.0001.patch, HDFS-8829.0002.patch, 
> HDFS-8829.0003.patch, HDFS-8829.0004.patch, HDFS-8829.0005.patch, 
> HDFS-8829.0006.patch
>
>
> {code:java}
>   private void initDataXceiver(Configuration conf) throws IOException {
> // find free port or use privileged port provided
> TcpPeerServer tcpPeerServer;
> if (secureResources != null) {
>   tcpPeerServer = new TcpPeerServer(secureResources);
> } else {
>   tcpPeerServer = new TcpPeerServer(dnConf.socketWriteTimeout,
>   DataNode.getStreamingAddr(conf));
> }
> 
> tcpPeerServer.setReceiveBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
> {code}
> The last line sets SO_RCVBUF explicitly, thus disabling TCP auto-tuning on 
> some systems.
> Shall we make this behavior configurable?
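
As a hedged sketch of one way "configurable" could work (the config key name below is illustrative, not necessarily what the committed patch uses): treat a non-positive configured size as "leave the OS default alone", which keeps TCP auto-tuning enabled.

{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

class RecvBufferConfigSketch {
  // Illustrative key and default; see the committed patch for actual names.
  static final String RECV_BUFFER_SIZE_KEY =
      "dfs.datanode.transfer.socket.recv.buffer.size";
  static final int RECV_BUFFER_SIZE_DEFAULT = 0; // 0 = keep auto-tuning

  static ServerSocket bind(InetSocketAddress addr, int recvBufferSize)
      throws IOException {
    ServerSocket ss = new ServerSocket();
    // Only set SO_RCVBUF when a positive size is configured; leaving it
    // unset keeps the OS default, so TCP auto-tuning still works.
    if (recvBufferSize > 0) {
      ss.setReceiveBufferSize(recvBufferSize);
    }
    ss.bind(addr);
    return ss;
  }
}
{code}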



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8899) Erasure Coding: use threadpool for EC recovery tasks

2015-09-14 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744757#comment-14744757
 ] 

Rakesh R commented on HDFS-8899:


Thank you [~zhz], [~hitliuyi], [~walter.k.su] for the reviews. I attached 
another patch addressing the comments; please take a look at it again.

bq. Maybe we can create a threadpool with corePoolSize 2 and maximumPoolSize 8?
Done.
bq. The DN doesn't recover EC blocks all the time, so the pool should set 
allowCoreThreadTimeOut(true). The threads also need a good name for better 
diagnosis.
Done. Named the threads {{stripedBlockRecovery-%d}}.
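
For readers following along, here is a self-contained sketch of the kind of pool being discussed. The factory is hand-rolled for the example and the class name is hypothetical; the actual patch may build the pool differently.

{code:java}
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

class EcRecoveryPoolSketch {
  static ThreadPoolExecutor newRecoveryPool() {
    ThreadFactory factory = new ThreadFactory() {
      private final AtomicInteger count = new AtomicInteger();
      @Override
      public Thread newThread(Runnable r) {
        // Name the threads for easier diagnosis, per the review comments.
        return new Thread(r, "stripedBlockRecovery-" + count.getAndIncrement());
      }
    };
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        2,                     // corePoolSize: recovery work is infrequent
        8,                     // maximumPoolSize: bounded, not a fixed pool
        60, TimeUnit.SECONDS,  // idle threads exit after a minute
        // SynchronousQueue so the pool actually grows toward maximumPoolSize
        // instead of queueing everything behind the two core threads.
        new SynchronousQueue<Runnable>(), factory,
        new ThreadPoolExecutor.CallerRunsPolicy());
    // Let even the core threads time out, since the DN does not recover
    // EC blocks all the time.
    pool.allowCoreThreadTimeOut(true);
    return pool;
  }
}
{code}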

> Erasure Coding: use threadpool for EC recovery tasks
> 
>
> Key: HDFS-8899
> URL: https://issues.apache.org/jira/browse/HDFS-8899
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8899-HDFS-7285-01.patch, 
> HDFS-8899-HDFS-7285-02.patch, HDFS-8899-HDFS-7285-merge-00.patch
>
>
> The idea is to use a threadpool for processing erasure coding recovery tasks 
> at the datanode.
> {code}
> new Daemon(new ReconstructAndTransferBlock(recoveryInfo)).start();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8899) Erasure Coding: use threadpool for EC recovery tasks

2015-09-14 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8899:
---
Attachment: HDFS-8899-HDFS-7285-02.patch

> Erasure Coding: use threadpool for EC recovery tasks
> 
>
> Key: HDFS-8899
> URL: https://issues.apache.org/jira/browse/HDFS-8899
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8899-HDFS-7285-01.patch, 
> HDFS-8899-HDFS-7285-02.patch, HDFS-8899-HDFS-7285-merge-00.patch
>
>
> The idea is to use a threadpool for processing erasure coding recovery tasks 
> at the datanode.
> {code}
> new Daemon(new ReconstructAndTransferBlock(recoveryInfo)).start();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9080) update htrace version to 4.0

2015-09-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-9080:
---
Attachment: HDFS-9080.001.patch

> update htrace version to 4.0
> 
>
> Key: HDFS-9080
> URL: https://issues.apache.org/jira/browse/HDFS-9080
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-9080.001.patch
>
>
> Update the HTrace library version Hadoop uses to htrace 4.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9080) update htrace version to 4.0

2015-09-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-9080:
---
Status: Patch Available  (was: Open)

> update htrace version to 4.0
> 
>
> Key: HDFS-9080
> URL: https://issues.apache.org/jira/browse/HDFS-9080
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-9080.001.patch
>
>
> Update the HTrace library version Hadoop uses to htrace 4.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8899) Erasure Coding: use threadpool for EC recovery tasks

2015-09-14 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744742#comment-14744742
 ] 

Walter Su commented on HDFS-8899:
-

The DN doesn't recover EC blocks all the time, so the pool should set 
{{allowCoreThreadTimeOut(true)}}. The threads also need a good name for better 
diagnosis.

> Erasure Coding: use threadpool for EC recovery tasks
> 
>
> Key: HDFS-8899
> URL: https://issues.apache.org/jira/browse/HDFS-8899
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8899-HDFS-7285-01.patch, 
> HDFS-8899-HDFS-7285-merge-00.patch
>
>
> The idea is to use a threadpool for processing erasure coding recovery tasks 
> at the datanode.
> {code}
> new Daemon(new ReconstructAndTransferBlock(recoveryInfo)).start();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9080) update htrace version to 4.0

2015-09-14 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-9080:
--

 Summary: update htrace version to 4.0
 Key: HDFS-9080
 URL: https://issues.apache.org/jira/browse/HDFS-9080
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Update the HTrace library version Hadoop uses to htrace 4.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8996) Consolidate validateLog and scanLog in FJM#EditLogFile

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744716#comment-14744716
 ] 

Hudson commented on HDFS-8996:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2333 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2333/])
HDFS-8996. Consolidate validateLog and scanLog in FJM#EditLogFile (Zhe Zhang 
via Colin P. McCabe) (cmccabe: rev 53bad4eb008ec553dcdbe01e7ae975dcecde6590)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckPointForSecurityTokens.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java


> Consolidate validateLog and scanLog in FJM#EditLogFile
> --
>
> Key: HDFS-8996
> URL: https://issues.apache.org/jira/browse/HDFS-8996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-8996.00.patch, HDFS-8996.01.patch
>
>
> After HDFS-8965 is committed, {{scanEditLog}} will be identical to 
> {{validateEditLog}} in {{EditLogInputStream}} and {{FSEditLogLoader}}. This 
> is a placeholder for us to remove the redundant {{scanEditLog}} code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8829) Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol sockets and allow configuring auto-tuning

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744714#comment-14744714
 ] 

Hudson commented on HDFS-8829:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2333 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2333/])
HDFS-8829. Make SO_RCVBUF and SO_SNDBUF size configurable for 
DataTransferProtocol sockets and allow configuring auto-tuning (He Tianyi via 
Colin P. McCabe) (cmccabe: rev 7b5cf5352efedc7d7ebdbb6b58f1b9a688812e75)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DomainPeerServer.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeTransferSocketSize.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/PeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/TcpPeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol 
> sockets and allow configuring auto-tuning
> -
>
> Key: HDFS-8829
> URL: https://issues.apache.org/jira/browse/HDFS-8829
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.3.0, 2.6.0
>Reporter: He Tianyi
>Assignee: He Tianyi
> Fix For: 2.8.0
>
> Attachments: HDFS-8829.0001.patch, HDFS-8829.0002.patch, 
> HDFS-8829.0003.patch, HDFS-8829.0004.patch, HDFS-8829.0005.patch, 
> HDFS-8829.0006.patch
>
>
> {code:java}
>   private void initDataXceiver(Configuration conf) throws IOException {
> // find free port or use privileged port provided
> TcpPeerServer tcpPeerServer;
> if (secureResources != null) {
>   tcpPeerServer = new TcpPeerServer(secureResources);
> } else {
>   tcpPeerServer = new TcpPeerServer(dnConf.socketWriteTimeout,
>   DataNode.getStreamingAddr(conf));
> }
> 
> tcpPeerServer.setReceiveBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
> {code}
> The last line sets SO_RCVBUF explicitly, thus disabling TCP auto-tuning on 
> some systems.
> Shall we make this behavior configurable?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8829) Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol sockets and allow configuring auto-tuning

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744704#comment-14744704
 ] 

Hudson commented on HDFS-8829:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1124 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1124/])
HDFS-8829. Make SO_RCVBUF and SO_SNDBUF size configurable for 
DataTransferProtocol sockets and allow configuring auto-tuning (He Tianyi via 
Colin P. McCabe) (cmccabe: rev 7b5cf5352efedc7d7ebdbb6b58f1b9a688812e75)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DomainPeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeTransferSocketSize.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/PeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/TcpPeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java


> Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol 
> sockets and allow configuring auto-tuning
> -
>
> Key: HDFS-8829
> URL: https://issues.apache.org/jira/browse/HDFS-8829
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.3.0, 2.6.0
>Reporter: He Tianyi
>Assignee: He Tianyi
> Fix For: 2.8.0
>
> Attachments: HDFS-8829.0001.patch, HDFS-8829.0002.patch, 
> HDFS-8829.0003.patch, HDFS-8829.0004.patch, HDFS-8829.0005.patch, 
> HDFS-8829.0006.patch
>
>
> {code:java}
>   private void initDataXceiver(Configuration conf) throws IOException {
> // find free port or use privileged port provided
> TcpPeerServer tcpPeerServer;
> if (secureResources != null) {
>   tcpPeerServer = new TcpPeerServer(secureResources);
> } else {
>   tcpPeerServer = new TcpPeerServer(dnConf.socketWriteTimeout,
>   DataNode.getStreamingAddr(conf));
> }
> 
> tcpPeerServer.setReceiveBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
> {code}
> The last line sets SO_RCVBUF explicitly, thus disabling TCP auto-tuning on 
> some systems.
> Shall we make this behavior configurable?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8971) Remove guards when calling LOG.debug() and LOG.trace() in client package

2015-09-14 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-8971:

Description: 
We moved the {{shortcircuit}} package from the {{hadoop-hdfs}} module to 
{{hadoop-hdfs-client}} in 
[HDFS-8934|https://issues.apache.org/jira/browse/HDFS-8934] and 
[HDFS-8951|https://issues.apache.org/jira/browse/HDFS-8951], and 
{{BlockReader}} in [HDFS-8925|https://issues.apache.org/jira/browse/HDFS-8925]. 
Meanwhile, we also replaced the _log4j_ log with the _slf4j_ logger. There is 
existing code in the client package that guards calls to {{LOG.debug()}} and 
{{LOG.trace()}}; e.g. in {{ShortCircuitCache.java}} we have code like this:
{code:title=Trace with guards|borderStyle=solid}
if (LOG.isTraceEnabled()) {
  LOG.trace(this + ": found waitable for " + key);
}
{code}

In _slf4j_ this kind of guard is unnecessary, because a parameterized message 
only builds the string when the level is enabled. We should clean up the code 
by removing the guards from the client package.

{code:title=Trace without guards|borderStyle=solid}
LOG.trace("{}: found waitable for {}", this, key);
{code}

  was:
We moved the {{shortcircuit}} package from {{hadoop-hdfs}} to 
{{hadoop-hdfs-client}} module in JIRA 
[HDFS-8934|https://issues.apache.org/jira/browse/HDFS-8934] and 
[HDFS-8951|https://issues.apache.org/jira/browse/HDFS-8951], and 
{{BlockReader}} in [HDFS-8925|https://issues.apache.org/jira/browse/HDFS-8925]. 
Meanwhile, we also replaced the _log4j_ log with _slf4j_ logger. There were 
existing code in the client package to guard the log when calling 
{{LOG.debug()}} and {{LOG.trace()}}, e.g. in {{ShortCircuitCache.java}}, we 
have code like this:
{code}
724if (LOG.isTraceEnabled()) {
725  LOG.trace(this + ": found waitable for " + key);
726}
{code}

In _slf4j_, this kind of guard is not necessary. We should clean the code by 
removing the guard from the client package.


> Remove guards when calling LOG.debug() and LOG.trace() in client package
> 
>
> Key: HDFS-8971
> URL: https://issues.apache.org/jira/browse/HDFS-8971
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>
> We moved the {{shortcircuit}} package from the {{hadoop-hdfs}} module to 
> {{hadoop-hdfs-client}} in 
> [HDFS-8934|https://issues.apache.org/jira/browse/HDFS-8934] and 
> [HDFS-8951|https://issues.apache.org/jira/browse/HDFS-8951], and 
> {{BlockReader}} in 
> [HDFS-8925|https://issues.apache.org/jira/browse/HDFS-8925]. Meanwhile, we 
> also replaced the _log4j_ log with the _slf4j_ logger. There is existing code 
> in the client package that guards calls to {{LOG.debug()}} and 
> {{LOG.trace()}}; e.g. in {{ShortCircuitCache.java}} we have code like this:
> {code:title=Trace with guards|borderStyle=solid}
> if (LOG.isTraceEnabled()) {
>   LOG.trace(this + ": found waitable for " + key);
> }
> {code}
> In _slf4j_ this kind of guard is unnecessary, because a parameterized message 
> only builds the string when the level is enabled. We should clean up the code 
> by removing the guards from the client package.
> {code:title=Trace without guards|borderStyle=solid}
> LOG.trace("{}: found waitable for {}", this, key);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key

2015-09-14 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9010:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~liuml07] for the 
contribution.

> Replace NameNode.DEFAULT_PORT with 
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
> 
>
> Key: HDFS-9010
> URL: https://issues.apache.org/jira/browse/HDFS-9010
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, 
> HDFS-9010.002.patch, HDFS-9010.003.patch, HDFS-9010.004.patch, 
> HDFS-9010.005.patch
>
>
> The {{NameNode.DEFAULT_PORT}} static attribute is stale, as we use the 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value.
> This jira tracks the effort of replacing {{NameNode.DEFAULT_PORT}} with 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark 
> {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it entirely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9055) WebHDFS REST v2

2015-09-14 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744701#comment-14744701
 ] 

Allen Wittenauer commented on HDFS-9055:


bq. A big part of the goal of webhdfs is to support distCp between clusters on 
different major versions of Hadoop. 

A big part, but not the only goal.  I want to address the command-and-control 
goal, where WebHDFS is severely lacking. Even though we have a proxy server 
that speaks the exact same protocol and could offload the work, everyone is 
too focused on the NN for that conversation to be viable.

It is probably better to fork the proxy into something different that gives 
ops teams the capabilities they want, using the protocols they want, and close 
this as Won't Fix.

> WebHDFS REST v2
> ---
>
> Key: HDFS-9055
> URL: https://issues.apache.org/jira/browse/HDFS-9055
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>
> There are starting to be enough changes that fix and add missing 
> functionality to webhdfs that we should probably update to REST v2.  This 
> also gives us an opportunity to deal with some incompatibility issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8996) Consolidate validateLog and scanLog in FJM#EditLogFile

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744686#comment-14744686
 ] 

Hudson commented on HDFS-8996:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #385 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/385/])
HDFS-8996. Consolidate validateLog and scanLog in FJM#EditLogFile (Zhe Zhang 
via Colin P. McCabe) (cmccabe: rev 53bad4eb008ec553dcdbe01e7ae975dcecde6590)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckPointForSecurityTokens.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java


> Consolidate validateLog and scanLog in FJM#EditLogFile
> --
>
> Key: HDFS-8996
> URL: https://issues.apache.org/jira/browse/HDFS-8996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-8996.00.patch, HDFS-8996.01.patch
>
>
> After HDFS-8965 is committed, {{scanEditLog}} will be identical to 
> {{validateEditLog}} in {{EditLogInputStream}} and {{FSEditLogLoader}}. This 
> is a placeholder for us to remove the redundant {{scanEditLog}} code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8829) Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol sockets and allow configuring auto-tuning

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744684#comment-14744684
 ] 

Hudson commented on HDFS-8829:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #385 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/385/])
HDFS-8829. Make SO_RCVBUF and SO_SNDBUF size configurable for 
DataTransferProtocol sockets and allow configuring auto-tuning (He Tianyi via 
Colin P. McCabe) (cmccabe: rev 7b5cf5352efedc7d7ebdbb6b58f1b9a688812e75)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/TcpPeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/PeerServer.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeTransferSocketSize.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DomainPeerServer.java


> Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol 
> sockets and allow configuring auto-tuning
> -
>
> Key: HDFS-8829
> URL: https://issues.apache.org/jira/browse/HDFS-8829
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.3.0, 2.6.0
>Reporter: He Tianyi
>Assignee: He Tianyi
> Fix For: 2.8.0
>
> Attachments: HDFS-8829.0001.patch, HDFS-8829.0002.patch, 
> HDFS-8829.0003.patch, HDFS-8829.0004.patch, HDFS-8829.0005.patch, 
> HDFS-8829.0006.patch
>
>
> {code:java}
>   private void initDataXceiver(Configuration conf) throws IOException {
> // find free port or use privileged port provided
> TcpPeerServer tcpPeerServer;
> if (secureResources != null) {
>   tcpPeerServer = new TcpPeerServer(secureResources);
> } else {
>   tcpPeerServer = new TcpPeerServer(dnConf.socketWriteTimeout,
>   DataNode.getStreamingAddr(conf));
> }
> 
> tcpPeerServer.setReceiveBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
> {code}
> The last line sets SO_RCVBUF explicitly, thus disabling TCP auto-tuning on 
> some systems.
> Shall we make this behavior configurable?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744681#comment-14744681
 ] 

Hudson commented on HDFS-9010:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8451 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8451/])
HDFS-9010. Replace NameNode.DEFAULT_PORT with 
HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key. Contributed by 
Mingliang Liu. (wheat9: rev 76957a485b526468498f93e443544131a88b5684)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientFailover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/NNHAServiceTarget.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDefaultNameNodePort.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancerWithHANameNodes.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAppendSnapshotTruncate.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/NameNodeProxies.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java


> Replace NameNode.DEFAULT_PORT with 
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
> 
>
> Key: HDFS-9010
> URL: https://issues.apache.org/jira/browse/HDFS-9010
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, 
> HDFS-9010.002.patch, HDFS-9010.003.patch, HDFS-9010.004.patch, 
> HDFS-9010.005.patch
>
>
> The {{NameNode.DEFAULT_PORT}} static attribute is stale, as we use the 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value.
> This jira tracks the effort of replacing {{NameNode.DEFAULT_PORT}} with 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark 
> {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it entirely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8996) Consolidate validateLog and scanLog in FJM#EditLogFile

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744679#comment-14744679
 ] 

Hudson commented on HDFS-8996:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #391 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/391/])
HDFS-8996. Consolidate validateLog and scanLog in FJM#EditLogFile (Zhe Zhang 
via Colin P. McCabe) (cmccabe: rev 53bad4eb008ec553dcdbe01e7ae975dcecde6590)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckPointForSecurityTokens.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java


> Consolidate validateLog and scanLog in FJM#EditLogFile
> --
>
> Key: HDFS-8996
> URL: https://issues.apache.org/jira/browse/HDFS-8996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-8996.00.patch, HDFS-8996.01.patch
>
>
> After HDFS-8965 is committed, {{scanEditLog}} will be identical to 
> {{validateEditLog}} in {{EditLogInputStream}} and {{FSEditLogLoader}}. This 
> is a placeholder for us to remove the redundant {{scanEditLog}} code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8829) Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol sockets and allow configuring auto-tuning

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744677#comment-14744677
 ] 

Hudson commented on HDFS-8829:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #391 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/391/])
HDFS-8829. Make SO_RCVBUF and SO_SNDBUF size configurable for 
DataTransferProtocol sockets and allow configuring auto-tuning (He Tianyi via 
Colin P. McCabe) (cmccabe: rev 7b5cf5352efedc7d7ebdbb6b58f1b9a688812e75)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/TcpPeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DomainPeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/PeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeTransferSocketSize.java


> Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol 
> sockets and allow configuring auto-tuning
> -
>
> Key: HDFS-8829
> URL: https://issues.apache.org/jira/browse/HDFS-8829
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.3.0, 2.6.0
>Reporter: He Tianyi
>Assignee: He Tianyi
> Fix For: 2.8.0
>
> Attachments: HDFS-8829.0001.patch, HDFS-8829.0002.patch, 
> HDFS-8829.0003.patch, HDFS-8829.0004.patch, HDFS-8829.0005.patch, 
> HDFS-8829.0006.patch
>
>
> {code:java}
>   private void initDataXceiver(Configuration conf) throws IOException {
> // find free port or use privileged port provided
> TcpPeerServer tcpPeerServer;
> if (secureResources != null) {
>   tcpPeerServer = new TcpPeerServer(secureResources);
> } else {
>   tcpPeerServer = new TcpPeerServer(dnConf.socketWriteTimeout,
>   DataNode.getStreamingAddr(conf));
> }
> 
> tcpPeerServer.setReceiveBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
> {code}
> The last line sets SO_RCVBUF explicitly, thus disabling TCP auto-tuning on 
> some systems.
> Shall we make this behavior configurable?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7766) Add a flag to WebHDFS op=CREATE to not respond with a 307 redirect

2015-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744669#comment-14744669
 ] 

Hadoop QA commented on HDFS-7766:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  22m 52s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m 18s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 31s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m 12s | Site still builds. |
| {color:red}-1{color} | checkstyle |   2m 35s | The applied patch generated  2 
new checkstyle issues (total was 0, now 2). |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 41s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 59s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 32s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 134m 30s | Tests failed in hadoop-hdfs. |
| {color:green}+1{color} | hdfs tests |   0m 29s | Tests passed in 
hadoop-hdfs-client. |
| | | 193m 42s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.security.TestDelegationToken |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
|   | hadoop.hdfs.TestQuota |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory |
|   | hadoop.hdfs.server.namenode.TestCacheDirectives |
| Timed out tests | 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755850/HDFS-7766.02.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / 6955771 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12435/artifact/patchprocess/diffcheckstylehadoop-hdfs-client.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12435/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12435/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| hadoop-hdfs-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12435/artifact/patchprocess/testrun_hadoop-hdfs-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12435/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12435/console |


This message was automatically generated.

> Add a flag to WebHDFS op=CREATE to not respond with a 307 redirect
> --
>
> Key: HDFS-7766
> URL: https://issues.apache.org/jira/browse/HDFS-7766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-7766.01.patch, HDFS-7766.02.patch
>
>
> Please see 
> https://issues.apache.org/jira/browse/HDFS-7588?focusedCommentId=14276192&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14276192
> A backwards-compatible way to fix this is to add a flag on the request that 
> disables the redirect, i.e.
> {noformat}
> curl -i -X PUT \
>   "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=CREATE[&noredirect=<true|false>]"
> {noformat}
> which returns 200 with the DN location in the response.
> This would allow browser clients to get the redirect URL to put the file 
> to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key

2015-09-14 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744646#comment-14744646
 ] 

Haohui Mai commented on HDFS-9010:
--

LGTM. +1. I'll commit it shortly.

> Replace NameNode.DEFAULT_PORT with 
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
> 
>
> Key: HDFS-9010
> URL: https://issues.apache.org/jira/browse/HDFS-9010
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, 
> HDFS-9010.002.patch, HDFS-9010.003.patch, HDFS-9010.004.patch, 
> HDFS-9010.005.patch
>
>
> The {{NameNode.DEFAULT_PORT}} static attribute is stale, as we use the 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value.
> This jira tracks the effort of replacing {{NameNode.DEFAULT_PORT}} with 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark 
> {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it entirely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-14 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744645#comment-14744645
 ] 

Walter Su commented on HDFS-9040:
-

bq. Yeah, I can understand your concern. In the replication mechanism, the 
async implementation matches the single write pipeline model, and the 
datastreamer can handle its failure perfectly. But with 9 streamers in 
parallel, we need to 1) sync all the streamers when writing a new block, and 2) 
stop all the streamers and assign them with new GS when failure happens. Thus I 
think we'd better add some sync code in DFSStripedOutputStream. Also in this 
way it becomes easier to calculate block length and set/reset external error 
state.
Yeah, streamer synchronization is slow either way; it can't be slower to do it 
in DFSStripedOutputStream. I'll take some time to review the patch.

bq. With BlockGroupDataStreamer I can make 9 internal streamers to wait for 
error-handling to be finished, until then I put empty_last_packet to all 9 
internal streamers to let them close blockStreams.
bq. I actually did similar thing: closeImpl() first let all the streamers to 
flush out all the data packets, then call checkStreamerFailures to handle any 
failure during the data transfer, and in the end to send out the last empty 
packet to close the packet. But the challenge here is, we could not use the 
same way to handle the failure for the last empty packet, since successful 
streamers may have closed the block already.
closeImpl() handles the last partial blockGroup well. But what if the failure 
happens in the last stripe of a full blockGroup? The first few streamers end 
but one of the last streamers fails:
{noformat}
writeChunk(..) --> super.writeChunk(..) --> enqueueCurrentPacketFull() --> 
endBlock() --> send empty_last_packet
{noformat}


> Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests 
> to Coordinator)
> ---
>
> Key: HDFS-9040
> URL: https://issues.apache.org/jira/browse/HDFS-9040
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
> Attachments: HDFS-9040-HDFS-7285.002.patch, HDFS-9040.00.patch, 
> HDFS-9040.001.wip.patch, HDFS-9040.02.bgstreamer.patch
>
>
> The general idea is to simplify the error handling logic.
> Proposal 1:
> A BlockGroupDataStreamer to communicate with the NN to allocate/update 
> blocks, so StripedDataStreamers only have to stream blocks to DNs.
> Proposal 2:
> See below the 
> [comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
>  from [~jingzhao].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8873) throttle directoryScanner

2015-09-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744628#comment-14744628
 ] 

Colin Patrick McCabe commented on HDFS-8873:


bq. Encapsulating the throttle stuff requires a reasonable abstraction of what 
a throttle is. The various kinds of throttles (time, file, iops, ...) are all 
pretty different and aren't easy to overlay with a single abstraction. I 
decided to give up on the idea of making the throttle type selectable. The 
limit therefore always means the same thing, and so I think it's fair to leave 
it's name as is.

I agree that it is probably premature to have a single Throttle base class that 
can do all the various possible things you might want a Throttle to do.  But it 
doesn't follow that we need to give up on making the throttle type selectable 
in the future.

Also, configuration keys which are specified in terms of milliseconds should 
always end in ms.  It causes a huge amount of confusion when times come without 
units.  It is obvious to you-- the author of the code-- that it is in 
milliseconds, but it is not obvious to users or other developers.

Why not just have a single class called {{TimeBasedThrottle}} which does 
whatever you want your time-based throttle to do?  You can make it a standalone 
class that doesn't extend or implement anything, and even create an instance of 
it that does nothing when throttling is turned off.  But keep the flexible 
configuration mechanism that we discussed earlier.  That way, if someone wants 
to do something more elaborate later, they can.

{{hadoop-hdfs-project/hadoop-hdfs/now}}: did you intend to put this in the 
patch?

{code}
397  public static final String  DFS_DATANODE_DIRECTORYSCAN_THROTTLE_LIMIT_KEY =
398   "dfs.datanode.directoryscan.throttle.limit";
{code}
This should end with "ms" to indicate the limit is in milliseconds, or ideally 
"ms.per.sec".

{code}
-if (!retainDiffs) clear();
+
+if (!retainDiffs) {
+  clear();
+}
{code}
Can we move small whitespace cleanups like this to a follow-on change?  It just 
makes backports a pain because it creates unnecessary conflicts, and tends to 
obscure what this JIRA is about when people are reading the change log.

{code}
+} //end for
+  } //end synchronized
+} // end if
{code}
If we're changing this, then let's get rid of the COBOLisms.  A close brace is 
enough to know that the block is closed.
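
As a rough illustration of that suggestion (the class name comes from the comment above; the one-second window, the field names, and the method names are assumptions, not the committed design), a standalone time-based throttle could look like:

{code:java}
/**
 * Sketch of a standalone time-based throttle: the scanner may run for at
 * most runLimitMsPerSec milliseconds out of every wall-clock second.
 */
class TimeBasedThrottle {
  private final long runLimitMsPerSec; // e.g. from a "...ms.per.sec" key
  private long windowStartMs;
  private long ranThisWindowMs;

  TimeBasedThrottle(long runLimitMsPerSec) {
    // A non-positive limit disables throttling entirely.
    this.runLimitMsPerSec = runLimitMsPerSec;
    this.windowStartMs = System.currentTimeMillis();
  }

  /** Record that the scanner just did elapsedMs ms of work. */
  void accountFor(long elapsedMs) {
    ranThisWindowMs += elapsedMs;
  }

  /** Sleep off any budget overrun before the next unit of work. */
  void throttle() throws InterruptedException {
    if (runLimitMsPerSec <= 0) {
      return; // throttling turned off
    }
    long now = System.currentTimeMillis();
    if (now - windowStartMs >= 1000) { // a new one-second window begins
      windowStartMs = now;
      ranThisWindowMs = 0;
    }
    if (ranThisWindowMs >= runLimitMsPerSec) {
      // Budget exhausted: wait out the remainder of the current window.
      Thread.sleep(1000 - (now - windowStartMs));
      windowStartMs = System.currentTimeMillis();
      ranThisWindowMs = 0;
    }
  }
}
{code}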

> throttle directoryScanner
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791 for details). 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, 64K seeks are required for a full directory listing, which translates 
> to 655 seconds).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key

2015-09-14 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744603#comment-14744603
 ] 

Mingliang Liu commented on HDFS-9010:
-

[HDFS-9067], [HDFS-9073], and [HDFS-9074] track the failing 
{{hadoop.hdfs.server.datanode.fsdataset.impl.*}} tests.
[HDFS-9072] tracks the randomly failing {{hadoop.tools.TestJMXGet}} tests.
The other failing tests cannot be reproduced locally (in ~10 runs).
The {{hadoop.hdfs.server.blockmanagement.*}} tests fail because of a 
{{java.lang.NoClassDefFoundError}} exception.

> Replace NameNode.DEFAULT_PORT with 
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
> 
>
> Key: HDFS-9010
> URL: https://issues.apache.org/jira/browse/HDFS-9010
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, 
> HDFS-9010.002.patch, HDFS-9010.003.patch, HDFS-9010.004.patch, 
> HDFS-9010.005.patch
>
>
> The {{NameNode.DEFAULT_PORT}} static attribute is stale, as we use the 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value.
> This jira tracks the effort of replacing {{NameNode.DEFAULT_PORT}} with 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark 
> {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it entirely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8899) Erasure Coding: use threadpool for EC recovery tasks

2015-09-14 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744594#comment-14744594
 ] 

Yi Liu commented on HDFS-8899:
--

Thanks [~rakeshr] for the work, and [~zhz] for pinging me.

Overall looks good.  My comment is: 
currently we have a hard limit for replication/striped block reconstruction 
work on each datanode; the maximum value is 4 by default if not configured.
So the default threadpool size of 20 in the patch is too big and wasteful; 
also, please don't create a fixed-size thread pool.  Maybe we can create a 
threadpool with {{corePoolSize}} 2 and {{maximumPoolSize}} 8?

> Erasure Coding: use threadpool for EC recovery tasks
> 
>
> Key: HDFS-8899
> URL: https://issues.apache.org/jira/browse/HDFS-8899
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8899-HDFS-7285-01.patch, 
> HDFS-8899-HDFS-7285-merge-00.patch
>
>
> The idea is to use a threadpool for processing erasure coding recovery tasks 
> at the datanode.
> {code}
> new Daemon(new ReconstructAndTransferBlock(recoveryInfo)).start();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8874) Add DN metrics for balancer and other block movement scenarios

2015-09-14 Thread Chris Trezzo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Trezzo updated HDFS-8874:
---
Attachment: HDFS-8874-trunk-v4.patch

[~mingma] V4 attached to address your comments.

> Add DN metrics for balancer and other block movement scenarios
> --
>
> Key: HDFS-8874
> URL: https://issues.apache.org/jira/browse/HDFS-8874
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Chris Trezzo
> Attachments: HDFS-8874-trunk-v1.patch, HDFS-8874-trunk-v2.patch, 
> HDFS-8874-trunk-v3.patch, HDFS-8874-trunk-v4.patch
>
>
> For the balancer, mover, and migrator (HDFS-8789), we want to know how close 
> each is to the DN's throttling thresholds. Although the DN has existing 
> metrics such as {{BytesWritten}}, {{BytesRead}}, {{CopyBlockOpNumOps}} and 
> {{ReplaceBlockOpNumOps}}, there are no metrics to indicate the number of 
> bytes moved.
> We can add {{ReplaceBlockBytesWritten}} and {{CopyBlockBytesRead}} to account 
> for the bytes moved in ReplaceBlock and CopyBlock operations. In addition, we 
> can also add throttling metrics for {{DataTransferThrottler}} and 
> {{BlockBalanceThrottler}}.
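
A hedged sketch of what the proposed counters might look like in the DN's metrics source, using the metrics2 registry Hadoop already provides. The counter names come from the description above; the wrapper class and increment methods are illustrative, not the attached patch.

{code:java}
import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

class DataNodeMetricsSketch {
  private final MetricsRegistry registry = new MetricsRegistry("datanode");

  // Counter names taken from the description above.
  private final MutableCounterLong replaceBlockBytesWritten =
      registry.newCounter("ReplaceBlockBytesWritten",
          "Bytes written by replaceBlock ops", 0L);
  private final MutableCounterLong copyBlockBytesRead =
      registry.newCounter("CopyBlockBytesRead",
          "Bytes read by copyBlock ops", 0L);

  // Called from the block-movement paths with the number of bytes moved.
  void incrReplaceBlockBytesWritten(long delta) {
    replaceBlockBytesWritten.incr(delta);
  }

  void incrCopyBlockBytesRead(long delta) {
    copyBlockBytesRead.incr(delta);
  }
}
{code}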



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-9053) Support large directories efficiently using B-Tree

2015-09-14 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14740842#comment-14740842
 ] 

Yi Liu edited comment on HDFS-9053 at 9/15/15 12:35 AM:


To integrate B-Tree with INodeDirectory, the natural approach is: besides the 
directory exposing {{getChildrenList}} and returning {{ReadOnlyList}}, B-Tree 
will be updated to implement Collection (it currently implements Iterable), 
since it's not a list. Actually, most other places want to iterate over all 
elements, and some places want to iterate starting from a certain child. So my 
plan is to add {{ReadOnlyCollection}}, have {{ReadOnlyList}} extend it, and 
have getChildren return {{ReadOnlyCollection}}. The {{ReadOnlyCollection}} has 
an interface that allows creating an Iterator starting from a certain child. 
For snapshots, it makes sense to keep the current behavior and still use a 
list to keep the CREATED/DELETED diff.


was (Author: hitliuyi):
To integrate the B-Tree with INodeDirectory, it is natural that, besides the 
directory exposing {{getChildrenList}} and returning {{ReadOnlyList}}, the 
B-Tree implement Iterable, since it is not a list. Actually, most other places 
want to iterate over all elements, and some places want to iterate starting 
from a certain child. So my plan is to add a new Iterable interface that 
extends {{java.lang.Iterable}} and allows creating an iterator starting from a 
certain child. Then, in the directory, getChildren returns 
{{ReadOnlyIterator}}, and {{ReadOnlyList}} implements the ReadOnlyIterator. 
For snapshots, it makes sense to keep the current behavior and still use the 
{{ReadOnlyList}}.

> Support large directories efficiently using B-Tree
> --
>
> Key: HDFS-9053
> URL: https://issues.apache.org/jira/browse/HDFS-9053
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Critical
> Attachments: HDFS-9053 (BTree with simple benchmark).patch, HDFS-9053 
> (BTree).patch
>
>
> This is a long-standing issue; we have tried to improve it in the past.  
> Currently we use an ArrayList for the children under a directory, and the 
> children are kept ordered in the list. For insert/delete/search, locating the 
> position costs O(log n), but insertion/deletion causes re-allocations and 
> copies of big arrays, so those operations are costly.  For example, if the 
> children grow to 1M entries, the ArrayList will resize to > 1M capacity, which 
> needs > 1M * 4 bytes = 4 MB of contiguous heap memory; that easily causes full 
> GC in an HDFS cluster where NameNode heap memory is already highly used.  To 
> recap, the 3 main issues are:
> # Insertion/deletion operations in large directories are expensive because of 
> re-allocations and copies of big arrays.
> # Dynamically allocating several MB of contiguous heap memory that will be 
> long-lived can easily cause full GC problems.
> # Even if most children are removed later, the directory INode still 
> occupies the same heap memory, since the ArrayList never shrinks.
> This JIRA is similar to HDFS-7174 created by [~kihwal], but uses a B-Tree to 
> solve the problem, as suggested by [~shv]. 
> So the target of this JIRA is to implement a low-memory-footprint B-Tree and 
> use it to replace the ArrayList. 
> If the number of elements is not large (less than the maximum degree of a 
> B-Tree node), the B-Tree has only a root node, which contains an array for the 
> elements. If the size grows large enough, the root splits automatically, 
> and if elements are removed, B-Tree nodes can merge automatically (see more: 
> https://en.wikipedia.org/wiki/B-tree).  This solves the above 3 issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key

2015-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744547#comment-14744547
 ] 

Hadoop QA commented on HDFS-9010:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 49s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 5 new or modified test files. |
| {color:red}-1{color} | javac |   7m 46s | The applied patch generated  4  
additional warning messages. |
| {color:green}+1{color} | javadoc |  10m 16s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 20s | The applied patch generated  2 
new checkstyle issues (total was 274, now 274). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 27s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 26s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m  7s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 164m 29s | Tests failed in hadoop-hdfs. |
| | | 209m 42s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter |
|   | hadoop.tools.TestJMXGet |
|   | hadoop.hdfs.server.namenode.TestStartup |
|   | hadoop.hdfs.server.blockmanagement.TestPendingReplication |
|   | hadoop.hdfs.web.TestWebHDFSOAuth2 |
|   | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755824/HDFS-9010.005.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 6955771 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12433/artifact/patchprocess/diffJavacWarnings.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12433/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12433/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12433/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12433/console |


This message was automatically generated.

> Replace NameNode.DEFAULT_PORT with 
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
> 
>
> Key: HDFS-9010
> URL: https://issues.apache.org/jira/browse/HDFS-9010
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, 
> HDFS-9010.002.patch, HDFS-9010.003.patch, HDFS-9010.004.patch, 
> HDFS-9010.005.patch
>
>
> The {{NameNode.DEFAULT_PORT}} static attribute is stale now that we use the 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value.
> This jira tracks the effort of replacing {{NameNode.DEFAULT_PORT}} with 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark 
> {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it entirely.
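
The deprecation step could look like this (a sketch only):

{code:java}
// In NameNode: keep the old constant temporarily, delegating to the
// client-side config key, until all callers are migrated.
@Deprecated
public static final int DEFAULT_PORT =
    HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT;
{code}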



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8873) throttle directoryScanner

2015-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744544#comment-14744544
 ] 

Hadoop QA commented on HDFS-8873:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 47s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 51s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  8s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 20s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 26s | The applied patch generated  4 
new checkstyle issues (total was 442, now 422). |
| {color:red}-1{color} | whitespace |   0m 10s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   2m 41s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 20s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 168m  7s | Tests failed in hadoop-hdfs. |
| | | 214m  2s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| Failed unit tests | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter 
|
|   | hadoop.hdfs.web.TestWebHDFSOAuth2 |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory |
|   | hadoop.hdfs.server.namenode.TestSaveNamespace |
|   | hadoop.hdfs.server.datanode.TestIncrementalBrVariations |
|   | hadoop.tools.TestJMXGet |
|   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755813/HDFS-8873.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 6955771 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12432/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12432/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12432/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12432/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12432/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12432/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12432/console |


This message was automatically generated.

> throttle directoryScanner
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791 for details). 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, 64K seeks are required for a full directory listing, which translates 
> to roughly 655 seconds).
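
For illustration, a rough sketch of such a duty cycle ({{hasMoreDirectories}} 
and {{scanNextDirectory}} are hypothetical stand-ins for the scanner's unit of 
work):

{code:java}
import org.apache.hadoop.util.Time;

// Scan for at most (dutyCycle * periodMillis) out of every periodMillis,
// sleeping for the remainder of each period to let the disks recover.
void throttledScan(double dutyCycle) throws InterruptedException {
  final long periodMillis = 1000L;
  final long budgetMillis = (long) (periodMillis * dutyCycle);
  while (hasMoreDirectories()) {
    long start = Time.monotonicNow();
    while (hasMoreDirectories()
        && Time.monotonicNow() - start < budgetMillis) {
      scanNextDirectory();
    }
    long elapsed = Time.monotonicNow() - start;
    if (elapsed < periodMillis) {
      Thread.sleep(periodMillis - elapsed);
    }
  }
}
{code}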



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7351) Document the HDFS Erasure Coding feature

2015-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744543#comment-14744543
 ] 

Hadoop QA commented on HDFS-7351:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   3m 37s | Pre-patch HDFS-7285 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | release audit |   0m 14s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | site |   3m  0s | Site still builds. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| | |   6m 56s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755800/HDFS-7351-HDFS-7285-02.patch
 |
| Optional Tests | site |
| git revision | HDFS-7285 / ce02b55 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12438/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12438/console |


This message was automatically generated.

> Document the HDFS Erasure Coding feature
> 
>
> Key: HDFS-7351
> URL: https://issues.apache.org/jira/browse/HDFS-7351
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-7351-HDFS-7285-01.patch, 
> HDFS-7351-HDFS-7285-02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8829) Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol sockets and allow configuring auto-tuning

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744540#comment-14744540
 ] 

Hudson commented on HDFS-8829:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8450 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8450/])
HDFS-8829. Make SO_RCVBUF and SO_SNDBUF size configurable for 
DataTransferProtocol sockets and allow configuring auto-tuning (He Tianyi via 
Colin P. McCabe) (cmccabe: rev 7b5cf5352efedc7d7ebdbb6b58f1b9a688812e75)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/TcpPeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DomainPeerServer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/PeerServer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeTransferSocketSize.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


> Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol 
> sockets and allow configuring auto-tuning
> -
>
> Key: HDFS-8829
> URL: https://issues.apache.org/jira/browse/HDFS-8829
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.3.0, 2.6.0
>Reporter: He Tianyi
>Assignee: He Tianyi
> Fix For: 2.8.0
>
> Attachments: HDFS-8829.0001.patch, HDFS-8829.0002.patch, 
> HDFS-8829.0003.patch, HDFS-8829.0004.patch, HDFS-8829.0005.patch, 
> HDFS-8829.0006.patch
>
>
> {code:java}
>   private void initDataXceiver(Configuration conf) throws IOException {
> // find free port or use privileged port provided
> TcpPeerServer tcpPeerServer;
> if (secureResources != null) {
>   tcpPeerServer = new TcpPeerServer(secureResources);
> } else {
>   tcpPeerServer = new TcpPeerServer(dnConf.socketWriteTimeout,
>   DataNode.getStreamingAddr(conf));
> }
> 
> tcpPeerServer.setReceiveBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
> {code}
> The last line sets SO_RCVBUF explicitly, thus disabling TCP auto-tuning on 
> some systems.
> Shall we make this behavior configurable?
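
One possible shape of the fix, as a sketch (the config key name is 
illustrative, not necessarily what a patch would use): only call 
{{setReceiveBufferSize}} when a positive size is configured, so that zero or a 
negative value leaves the kernel's auto-tuning in effect.

{code:java}
// Hypothetical key name, for illustration only.
int recvBufferSize = conf.getInt(
    "dfs.datanode.transfer.socket.recv.buffer.size", 0);
if (recvBufferSize > 0) {
  tcpPeerServer.setReceiveBufferSize(recvBufferSize);
}
// recvBufferSize <= 0: skip setting SO_RCVBUF and keep TCP auto-tuning.
{code}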



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-14 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9040:

Attachment: HDFS-9040-HDFS-7285.002.patch

Updated the patch to fix some of the current unit tests.

> Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests 
> to Coordinator)
> ---
>
> Key: HDFS-9040
> URL: https://issues.apache.org/jira/browse/HDFS-9040
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
> Attachments: HDFS-9040-HDFS-7285.002.patch, HDFS-9040.00.patch, 
> HDFS-9040.001.wip.patch, HDFS-9040.02.bgstreamer.patch
>
>
> The general idea is to simplify error handling logic.
> Proposal 1:
> A BlockGroupDataStreamer to communicate with the NN to allocate/update 
> blocks, while the StripedDataStreamers only have to stream blocks to DNs.
> Proposal 2:
> See below the 
> [comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
>  from [~jingzhao].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-14 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9040:

Status: Patch Available  (was: Open)

> Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests 
> to Coordinator)
> ---
>
> Key: HDFS-9040
> URL: https://issues.apache.org/jira/browse/HDFS-9040
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
> Attachments: HDFS-9040-HDFS-7285.002.patch, HDFS-9040.00.patch, 
> HDFS-9040.001.wip.patch, HDFS-9040.02.bgstreamer.patch
>
>
> The general idea is to simplify error handling logic.
> Proposal 1:
> A BlockGroupDataStreamer to communicate with the NN to allocate/update 
> blocks, while the StripedDataStreamers only have to stream blocks to DNs.
> Proposal 2:
> See below the 
> [comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
>  from [~jingzhao].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6744) Improve decommissioning nodes and dead nodes access on the new NN webUI

2015-09-14 Thread Siqi Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744512#comment-14744512
 ] 

Siqi Li commented on HDFS-6744:
---

No, HDFS-6407 did not solve this issue.

> Improve decommissioning nodes and dead nodes access on the new NN webUI
> ---
>
> Key: HDFS-6744
> URL: https://issues.apache.org/jira/browse/HDFS-6744
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6744.v1.patch, deadnodespage.png, 
> decomnodespage.png, livendoespage.png
>
>
> The new NN webUI lists live nodes at the top of the page, followed by dead 
> nodes and decommissioning nodes. From an admin's point of view:
> 1. Decommissioning nodes and dead nodes are more interesting. It is better to 
> move decommissioning nodes to the top of the page, followed by dead nodes and 
> live nodes.
> 2. To find decommissioning nodes or dead nodes, the whole page that includes 
> all nodes needs to be loaded. That could take some time for big clusters.
> The legacy web UI filters the node types dynamically. That seems to 
> work well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9051) webhdfs should support recursive list

2015-09-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744510#comment-14744510
 ] 

Colin Patrick McCabe commented on HDFS-9051:


I think it's problematic to move more processing to the NameNode, given that it 
is a bottleneck for scalability currently.  I don't see why this can't be 
implemented more effectively on the client.
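
For example, a client-side recursion is straightforward (a sketch, not the 
project's code; it issues one {{listStatus}} RPC per directory):

{code:java}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Depth-first listing over any FileSystem, including WebHdfsFileSystem.
static void listRecursive(FileSystem fs, Path dir, List<FileStatus> out)
    throws IOException {
  for (FileStatus stat : fs.listStatus(dir)) {
    out.add(stat);
    if (stat.isDirectory()) {
      listRecursive(fs, stat.getPath(), out);
    }
  }
}
{code}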

> webhdfs should support recursive list
> -
>
> Key: HDFS-9051
> URL: https://issues.apache.org/jira/browse/HDFS-9051
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Allen Wittenauer
>Assignee: Surendra Singh Lilhore
>
> There currently doesn't appear to be a way to recursively list a directory via 
> webhdfs without making an individual liststatus call per directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9058) enable find via WebHDFS

2015-09-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744509#comment-14744509
 ] 

Colin Patrick McCabe commented on HDFS-9058:


I think it's problematic to move more processing to the NameNode, given that it 
is a bottleneck for scalability currently.  I don't see why this can't be 
implemented more effectively on the client.

> enable find via WebHDFS
> ---
>
> Key: HDFS-9058
> URL: https://issues.apache.org/jira/browse/HDFS-9058
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Brahma Reddy Battula
>
> It'd be useful to implement find over webhdfs rather than forcing the client 
> to grab a lot of data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9055) WebHDFS REST v2

2015-09-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744506#comment-14744506
 ] 

Colin Patrick McCabe commented on HDFS-9055:


A big part of the goal of webhdfs is to support distCp between clusters on 
different major versions of Hadoop.  We never guaranteed that with our RPC 
protocol (no, not even RPCv9), so we need a protocol that does guarantee it.  
Bumping the major version of webhdfs does nothing to advance that goal, and in 
fact it makes it harder by creating a different, incompatible "flavor" of 
webhdfs.  We retired the trusty old HFTP protocol because webhdfs was the 
do-all, be-all solution for cross-version compatibility.  I think it's not 
unfair to ask webhdfs to actually provide that compatibility!

Some of the JIRAs here seem to be moving processing from the client to the 
NameNode.  For example, HDFS-9058 proposes that we implement "find" on the 
NameNode and send the results back to the client.  HDFS-9051 proposes that the 
NameNode support recursive listing of directories (probably similar to 
getContent etc.).  I am concerned that this will lead to extra complexity on 
the NameNode, and-- at least if implemented in the obvious way-- longer 
latencies since the FSN lock will be held for longer for the "do it all" 
operation.  I am not convinced that the benefits outweigh the extra complexity 
and potential scalability issues.

As [~cnauroth] commented, all of the JIRAs up here besides HDFS-7822  are just 
new features (like adding snapshot or truncate support) that can easily be done 
without bumping the major version of libwebhdfs.  And even for that JIRA, 
[~kihwal]'s original proposal for HDFS-7822 was a compatible one.

Unless there is something I am missing, we should simply add the features that 
make sense to WebHDFS v1 rather than creating an incompatible version.

> WebHDFS REST v2
> ---
>
> Key: HDFS-9055
> URL: https://issues.apache.org/jira/browse/HDFS-9055
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>
> There are starting to be enough changes to fix and add missing functionality 
> to webhdfs that we should probably update to REST v2.  This also gives us an 
> opportunity to deal with some incompatibilities.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9076) Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()

2015-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744497#comment-14744497
 ] 

Hadoop QA commented on HDFS-9076:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 38s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 47s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  0s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 19s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 27s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 27s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m  9s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 164m 12s | Tests failed in hadoop-hdfs. |
| | | 209m  0s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter |
|   | hadoop.tools.TestJMXGet |
|   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755786/HDFS-9076.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 6955771 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12430/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12430/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12430/console |


This message was automatically generated.

> Log full path instead of inodeId in DFSClient#closeAllFilesBeingWritten()
> -
>
> Key: HDFS-9076
> URL: https://issues.apache.org/jira/browse/HDFS-9076
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9076.patch
>
>
> {code}
>try {
>   if (abort) {
> out.abort();
>   } else {
> out.close();
>   }
> } catch(IOException ie) {
>   LOG.error("Failed to " + (abort? "abort": "close") +
>   " inode " + inodeId, ie);
> }
> {code}
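
A sketch of the proposed change, assuming the output stream can expose the 
path it was opened with ({{out.getSrc()}} is a hypothetical accessor):

{code:java}
try {
  if (abort) {
    out.abort();
  } else {
    out.close();
  }
} catch (IOException ie) {
  // Log the full path rather than the bare inode id.
  LOG.error("Failed to " + (abort ? "abort" : "close")
      + " file: " + out.getSrc() + " with inode: " + inodeId, ie);
}
{code}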



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9011) Support splitting BlockReport of a storage into multiple RPC

2015-09-14 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744491#comment-14744491
 ] 

Jing Zhao commented on HDFS-9011:
-

Thanks for the comments, Colin. I'm also leaning towards not fixing this for 
now, especially considering that the reportDiff issue is non-trivial.

> Support splitting BlockReport of a storage into multiple RPC
> 
>
> Key: HDFS-9011
> URL: https://issues.apache.org/jira/browse/HDFS-9011
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-9011.000.patch, HDFS-9011.001.patch, 
> HDFS-9011.002.patch
>
>
> Currently, if a DataNode has too many blocks (more than 1M by default), it 
> sends multiple RPCs to the NameNode for the block report, each RPC containing 
> the report for a single storage. However, in practice we've seen that 
> sometimes even a single storage can contain a large number of blocks, and the 
> report can even exceed the max RPC data length. It may be helpful to support 
> sending multiple RPCs for the block report of a single storage. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9065) Include commas on # of files, blocks, total filesystem objects in NN Web UI

2015-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744488#comment-14744488
 ] 

Hadoop QA commented on HDFS-9065:
-

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   0m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| | |   0m 27s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755854/HDFS-9065.003.patch |
| Optional Tests |  |
| git revision | trunk / 7b5cf53 |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12437/console |


This message was automatically generated.

> Include commas on # of files, blocks, total filesystem objects in NN Web UI
> ---
>
> Key: HDFS-9065
> URL: https://issues.apache.org/jira/browse/HDFS-9065
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HDFS-9065.001.patch, HDFS-9065.002.patch, 
> HDFS-9065.003.patch, HDFS-9065.003.patch
>
>
> Include commas on the number of files, blocks, and total filesystem objects 
> in the NN Web UI (please see example below) to make the numbers easier to 
> read.
> Current format:
> 3236 files and directories, 1409 blocks = 4645 total filesystem object(s).
> Proposed format:
> 3,236 files and directories, 1,409 blocks = 4,645 total filesystem object(s).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8996) Consolidate validateLog and scanLog in FJM#EditLogFile

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744484#comment-14744484
 ] 

Hudson commented on HDFS-8996:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1123 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1123/])
HDFS-8996. Consolidate validateLog and scanLog in FJM#EditLogFile (Zhe Zhang 
via Colin P. McCabe) (cmccabe: rev 53bad4eb008ec553dcdbe01e7ae975dcecde6590)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckPointForSecurityTokens.java


> Consolidate validateLog and scanLog in FJM#EditLogFile
> --
>
> Key: HDFS-8996
> URL: https://issues.apache.org/jira/browse/HDFS-8996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-8996.00.patch, HDFS-8996.01.patch
>
>
> After HDFS-8965 is committed, {{scanEditLog}} will be identical to 
> {{validateEditLog}} in {{EditLogInputStream}} and {{FSEditLogLoader}}. This 
> is a placeholder for us to remove the redundant {{scanEditLog}} code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-9011) Support splitting BlockReport of a storage into multiple RPC

2015-09-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744469#comment-14744469
 ] 

Colin Patrick McCabe edited comment on HDFS-9011 at 9/14/15 11:10 PM:
--

You can just raise the maximum RPC size via {{ipc.maximum.data.length}}, as 
added in HADOOP-9676, right?  It is true that processing such a large report 
will take a long time on the NameNode, but this patch does not address that 
problem either.
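
(For reference, that is a one-line change; the constant name and the 128 MB 
figure below are assumptions for illustration.)

{code:java}
// "ipc.maximum.data.length" defaults to 64 MB; raise it for oversized reports.
conf.setInt(CommonConfigurationKeys.IPC_MAXIMUM_DATA_LENGTH,
    128 * 1024 * 1024);
{code}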

I am very skeptical about adding more complexity to the full block report path, 
unless it can really address the main problem: the length of time which the 
NameNode holds the lock for when processing a long storage report.

btw, please hold off on committing this for now, until we have a chance to 
discuss it.


was (Author: cmccabe):
You can just raise the maximum RPC size via {{ipc.maximum.data.length}}, as 
added in HADOOP-9676, right?  It is true that processing such a large report 
will take a long time on the NameNode, but this patch does not address that 
problem either.

I am very skeptical about adding more complexity to the full block report path, 
unless it can really address the main problem: the length of time which the 
NameNode holds the lock for when processing a long storage report.

> Support splitting BlockReport of a storage into multiple RPC
> 
>
> Key: HDFS-9011
> URL: https://issues.apache.org/jira/browse/HDFS-9011
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-9011.000.patch, HDFS-9011.001.patch, 
> HDFS-9011.002.patch
>
>
> Currently, if a DataNode has too many blocks (more than 1M by default), it 
> sends multiple RPCs to the NameNode for the block report, each RPC containing 
> the report for a single storage. However, in practice we've seen that 
> sometimes even a single storage can contain a large number of blocks, and the 
> report can even exceed the max RPC data length. It may be helpful to support 
> sending multiple RPCs for the block report of a single storage. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9011) Support splitting BlockReport of a storage into multiple RPC

2015-09-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744469#comment-14744469
 ] 

Colin Patrick McCabe commented on HDFS-9011:


You can just raise the maximum RPC size via {{ipc.maximum.data.length}}, as 
added in HADOOP-9676, right?  It is true that processing such a large report 
will take a long time on the NameNode, but this patch does not address that 
problem either.

I am very skeptical about adding more complexity to the full block report path, 
unless it can really address the main problem: the length of time which the 
NameNode holds the lock for when processing a long storage report.

> Support splitting BlockReport of a storage into multiple RPC
> 
>
> Key: HDFS-9011
> URL: https://issues.apache.org/jira/browse/HDFS-9011
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-9011.000.patch, HDFS-9011.001.patch, 
> HDFS-9011.002.patch
>
>
> Currently, if a DataNode has too many blocks (more than 1M by default), it 
> sends multiple RPCs to the NameNode for the block report, each RPC containing 
> the report for a single storage. However, in practice we've seen that 
> sometimes even a single storage can contain a large number of blocks, and the 
> report can even exceed the max RPC data length. It may be helpful to support 
> sending multiple RPCs for the block report of a single storage. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8996) Consolidate validateLog and scanLog in FJM#EditLogFile

2015-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744463#comment-14744463
 ] 

Hudson commented on HDFS-8996:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8449 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8449/])
HDFS-8996. Consolidate validateLog and scanLog in FJM#EditLogFile (Zhe Zhang 
via Colin P. McCabe) (cmccabe: rev 53bad4eb008ec553dcdbe01e7ae975dcecde6590)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckPointForSecurityTokens.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java


> Consolidate validateLog and scanLog in FJM#EditLogFile
> --
>
> Key: HDFS-8996
> URL: https://issues.apache.org/jira/browse/HDFS-8996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-8996.00.patch, HDFS-8996.01.patch
>
>
> After HDFS-8965 is committed, {{scanEditLog}} will be identical to 
> {{validateEditLog}} in {{EditLogInputStream}} and {{FSEditLogLoader}}. This 
> is a placeholder for us to remove the redundant {{scanEditLog}} code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8829) Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol sockets and allow configuring auto-tuning

2015-09-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744458#comment-14744458
 ] 

Colin Patrick McCabe commented on HDFS-8829:


+1.  Committed to 2.8.  Failing tests are known flaky tests, and checkstyle is 
whining about file length (which we can't do anything about in this patch).

> Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol 
> sockets and allow configuring auto-tuning
> -
>
> Key: HDFS-8829
> URL: https://issues.apache.org/jira/browse/HDFS-8829
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.3.0, 2.6.0
>Reporter: He Tianyi
>Assignee: He Tianyi
> Fix For: 2.8.0
>
> Attachments: HDFS-8829.0001.patch, HDFS-8829.0002.patch, 
> HDFS-8829.0003.patch, HDFS-8829.0004.patch, HDFS-8829.0005.patch, 
> HDFS-8829.0006.patch
>
>
> {code:java}
>   private void initDataXceiver(Configuration conf) throws IOException {
> // find free port or use privileged port provided
> TcpPeerServer tcpPeerServer;
> if (secureResources != null) {
>   tcpPeerServer = new TcpPeerServer(secureResources);
> } else {
>   tcpPeerServer = new TcpPeerServer(dnConf.socketWriteTimeout,
>   DataNode.getStreamingAddr(conf));
> }
> 
> tcpPeerServer.setReceiveBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
> {code}
> The last line sets SO_RCVBUF explicitly, thus disabling TCP auto-tuning on 
> some systems.
> Shall we make this behavior configurable?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8829) Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol sockets and allow configuring auto-tuning

2015-09-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-8829:
---
  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
  Status: Resolved  (was: Patch Available)

> Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol 
> sockets and allow configuring auto-tuning
> -
>
> Key: HDFS-8829
> URL: https://issues.apache.org/jira/browse/HDFS-8829
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.3.0, 2.6.0
>Reporter: He Tianyi
>Assignee: He Tianyi
> Fix For: 2.8.0
>
> Attachments: HDFS-8829.0001.patch, HDFS-8829.0002.patch, 
> HDFS-8829.0003.patch, HDFS-8829.0004.patch, HDFS-8829.0005.patch, 
> HDFS-8829.0006.patch
>
>
> {code:java}
>   private void initDataXceiver(Configuration conf) throws IOException {
> // find free port or use privileged port provided
> TcpPeerServer tcpPeerServer;
> if (secureResources != null) {
>   tcpPeerServer = new TcpPeerServer(secureResources);
> } else {
>   tcpPeerServer = new TcpPeerServer(dnConf.socketWriteTimeout,
>   DataNode.getStreamingAddr(conf));
> }
> 
> tcpPeerServer.setReceiveBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
> {code}
> The last line sets SO_RCVBUF explicitly, thus disabling TCP auto-tuning on 
> some systems.
> Shall we make this behavior configurable?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9047) deprecate libwebhdfs in branch-2; remove from trunk

2015-09-14 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744456#comment-14744456
 ] 

Allen Wittenauer commented on HDFS-9047:


There's not really much gap to fill, though.  It wasn't documented, and mvn 
package never put it in the tarball.  Besides, I've personally had much better 
luck using some random code on GitHub for WebHDFS compatibility than this 
library.

> deprecate libwebhdfs in branch-2; remove from trunk
> ---
>
> Key: HDFS-9047
> URL: https://issues.apache.org/jira/browse/HDFS-9047
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Allen Wittenauer
>
> This library is basically a mess:
> * It's not part of the mvn package
> * It's missing functionality and barely maintained
> * It's not in the precommit runs, so it doesn't get exercised regularly
> * It's not part of the unit tests (at least, that I can see)
> * It isn't documented in any official documentation
> But most importantly:  
> * It fails at its primary mission of being pure C (HDFS-3917 is STILL open)
> Let's cut our losses and just remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-14 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1473#comment-1473
 ] 

Zhe Zhang commented on HDFS-9040:
-

bq. The reason I prefer not to do locateFollowingBlock in DFSOutputStream is, 
DFSOutputStream is async with DataStreamer
bq. Yeah, I can understand your concern. In the replication mechanism, the 
async implementation matches the single write pipeline model, and the 
datastreamer can handle its failure perfectly. But with 9 streamers in 
parallel, we need to 1) sync all the streamers when writing a new block, and 2) 
stop all the streamers and assign them with new GS when failure happens. Thus I 
think we'd better add some sync code in DFSStripedOutputStream. Also in this 
way it becomes easier to calculate block length and set/reset external error 
state.
Very good discussion here. Jing's patch leaves the behavior of non-EC 
{{DFSOutputStream}} and {{DataStreamer}} unchanged: the streamer is still in 
charge of locating following blocks. I think we should probably change that as 
well so that {{OutputStream}} and streamer have consistent roles under both 
contiguous and striped layouts.

bq. Currently the fastest streamer also has to wait for other streamers before 
requesting a following block group from NN, so I think we may not feel the 
writing speed becomes slow.
Considering the buffer in {{DFSOutputStream}}, the above is only partially 
true. Performance-wise it still makes sense to decouple 
{{locateFollowingBlock}} from the main {{DFSOutputStream}} thread. How about 
starting a separate thread to allocate new blocks?
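
Something along these lines, as a sketch ({{locateFollowingBlock}} stands for 
whatever wraps the {{addBlock}} RPC; names are approximate):

{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Request the next block group on a dedicated thread while the current
// block group is still being written; the writer joins on the Future.
ExecutorService allocator = Executors.newSingleThreadExecutor();
Future<LocatedBlock> nextBlockGroup = allocator.submit(
    new Callable<LocatedBlock>() {
      @Override
      public LocatedBlock call() throws Exception {
        return locateFollowingBlock(excludedNodes);
      }
    });
// Later, when the current block group is full:
LocatedBlock lb = nextBlockGroup.get();
{code}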

> Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests 
> to Coordinator)
> ---
>
> Key: HDFS-9040
> URL: https://issues.apache.org/jira/browse/HDFS-9040
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
> Attachments: HDFS-9040.00.patch, HDFS-9040.001.wip.patch, 
> HDFS-9040.02.bgstreamer.patch
>
>
> The general idea is to simplify error handling logic.
> Proposal 1:
> A BlockGroupDataStreamer to communicate with the NN to allocate/update 
> blocks, while the StripedDataStreamers only have to stream blocks to DNs.
> Proposal 2:
> See below the 
> [comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
>  from [~jingzhao].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8829) Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol sockets and allow configuring auto-tuning

2015-09-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-8829:
---
Summary: Make SO_RCVBUF and SO_SNDBUF size configurable for 
DataTransferProtocol sockets and allow configuring auto-tuning  (was: Make 
SO_RCVBUF size configurable for DataTransferProtocol sockets and allow 
configuring auto-tuning)

> Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol 
> sockets and allow configuring auto-tuning
> -
>
> Key: HDFS-8829
> URL: https://issues.apache.org/jira/browse/HDFS-8829
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.3.0, 2.6.0
>Reporter: He Tianyi
>Assignee: He Tianyi
> Attachments: HDFS-8829.0001.patch, HDFS-8829.0002.patch, 
> HDFS-8829.0003.patch, HDFS-8829.0004.patch, HDFS-8829.0005.patch, 
> HDFS-8829.0006.patch
>
>
> {code:java}
>   private void initDataXceiver(Configuration conf) throws IOException {
> // find free port or use privileged port provided
> TcpPeerServer tcpPeerServer;
> if (secureResources != null) {
>   tcpPeerServer = new TcpPeerServer(secureResources);
> } else {
>   tcpPeerServer = new TcpPeerServer(dnConf.socketWriteTimeout,
>   DataNode.getStreamingAddr(conf));
> }
> 
> tcpPeerServer.setReceiveBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
> {code}
> The last line sets SO_RCVBUF explicitly, thus disabling TCP auto-tuning on 
> some systems.
> Shall we make this behavior configurable?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8996) Consolidate validateLog and scanLog in FJM#EditLogFile

2015-09-14 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744433#comment-14744433
 ] 

Zhe Zhang commented on HDFS-8996:
-

Thanks Colin for reviewing the patch again.

> Consolidate validateLog and scanLog in FJM#EditLogFile
> --
>
> Key: HDFS-8996
> URL: https://issues.apache.org/jira/browse/HDFS-8996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-8996.00.patch, HDFS-8996.01.patch
>
>
> After HDFS-8965 is committed, {{scanEditLog}} will be identical to 
> {{validateEditLog}} in {{EditLogInputStream}} and {{FSEditLogLoader}}. This 
> is a placeholder for us to remove the redundant {{scanEditLog}} code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8829) Make SO_RCVBUF size configurable for DataTransferProtocol sockets and allow configuring auto-tuning

2015-09-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-8829:
---
Summary: Make SO_RCVBUF size configurable for DataTransferProtocol sockets 
and allow configuring auto-tuning  (was: DataNode sets SO_RCVBUF explicitly is 
disabling tcp auto-tuning)

> Make SO_RCVBUF size configurable for DataTransferProtocol sockets and allow 
> configuring auto-tuning
> ---
>
> Key: HDFS-8829
> URL: https://issues.apache.org/jira/browse/HDFS-8829
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.3.0, 2.6.0
>Reporter: He Tianyi
>Assignee: He Tianyi
> Attachments: HDFS-8829.0001.patch, HDFS-8829.0002.patch, 
> HDFS-8829.0003.patch, HDFS-8829.0004.patch, HDFS-8829.0005.patch, 
> HDFS-8829.0006.patch
>
>
> {code:java}
>   private void initDataXceiver(Configuration conf) throws IOException {
> // find free port or use privileged port provided
> TcpPeerServer tcpPeerServer;
> if (secureResources != null) {
>   tcpPeerServer = new TcpPeerServer(secureResources);
> } else {
>   tcpPeerServer = new TcpPeerServer(dnConf.socketWriteTimeout,
>   DataNode.getStreamingAddr(conf));
> }
> 
> tcpPeerServer.setReceiveBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
> {code}
> The last line sets SO_RCVBUF explicitly, thus disabling tcp auto-tuning on 
> some system.
> Shall we make this behavior configurable?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8996) Consolidate validateLog and scanLog in FJM#EditLogFile

2015-09-14 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-8996:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

> Consolidate validateLog and scanLog in FJM#EditLogFile
> --
>
> Key: HDFS-8996
> URL: https://issues.apache.org/jira/browse/HDFS-8996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.8.0
>
> Attachments: HDFS-8996.00.patch, HDFS-8996.01.patch
>
>
> After HDFS-8965 is committed, {{scanEditLog}} will be identical to 
> {{validateEditLog}} in {{EditLogInputStream}} and {{FSEditlogLoader}}. This 
> is a place holder for us to remove the redundant {{scanEditLog}} code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8996) Consolidate validateLog and scanLog in FJM#EditLogFile

2015-09-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744431#comment-14744431
 ] 

Colin Patrick McCabe commented on HDFS-8996:


Yes, the JN needs to ignore the layout version so that it can continue to 
function during a layout version upgrade.  +1.

TestJMXGet failure is HDFS-9072.
Lazy persist failure is HDFS-9073.
TestReplaceDatanodeOnFailure failure is HDFS-7455.

Committed to 2.8.

> Consolidate validateLog and scanLog in FJM#EditLogFile
> --
>
> Key: HDFS-8996
> URL: https://issues.apache.org/jira/browse/HDFS-8996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8996.00.patch, HDFS-8996.01.patch
>
>
> After HDFS-8965 is committed, {{scanEditLog}} will be identical to 
> {{validateEditLog}} in {{EditLogInputStream}} and {{FSEditLogLoader}}. This 
> is a placeholder for us to remove the redundant {{scanEditLog}} code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key

2015-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744421#comment-14744421
 ] 

Hadoop QA commented on HDFS-9010:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m  6s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 6 new or modified test files. |
| {color:green}+1{color} | javac |   7m 57s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  5s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 38s | The applied patch generated  2 
new checkstyle issues (total was 274, now 275). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 18s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 13s | Pre-build of native portion |
| {color:red}-1{color} | mapreduce tests | 109m 12s | Tests failed in 
hadoop-mapreduce-client-jobclient. |
| {color:red}-1{color} | hdfs tests |  85m 14s | Tests failed in hadoop-hdfs. |
| | | 242m 36s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.mapred.TestNetworkedJob |
|   | hadoop.mapred.TestClusterMapReduceTestCase |
|   | hadoop.tools.TestJMXGet |
| Timed out tests | org.apache.hadoop.mapred.TestMRIntermediateDataEncryption |
|   | org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12754235/HDFS-9010.004.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 6955771 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12429/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-mapreduce-client-jobclient test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12429/artifact/patchprocess/testrun_hadoop-mapreduce-client-jobclient.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12429/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12429/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12429/console |


This message was automatically generated.

> Replace NameNode.DEFAULT_PORT with 
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
> 
>
> Key: HDFS-9010
> URL: https://issues.apache.org/jira/browse/HDFS-9010
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, 
> HDFS-9010.002.patch, HDFS-9010.003.patch, HDFS-9010.004.patch, 
> HDFS-9010.005.patch
>
>
> The {{NameNode.DEFAULT_PORT}} static attribute is stale as we now use the 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value.
> This jira tracks the effort of replacing {{NameNode.DEFAULT_PORT}} with 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark 
> {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it entirely.
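For illustration, a minimal Java sketch of the deprecation pattern described 
above. The class bodies are reduced to the relevant members (the real 
{{HdfsClientConfigKeys}} carries many more keys); 8020 is the stock NN RPC 
port default.

{code}
// Sketch only: the real classes carry many more members.
class HdfsClientConfigKeys {
  static final int DFS_NAMENODE_RPC_PORT_DEFAULT = 8020; // stock NN RPC port
}

class NameNode {
  /** @deprecated Use HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT instead. */
  @Deprecated
  public static final int DEFAULT_PORT =
      HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT;
}
{code}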



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9065) Include commas on # of files, blocks, total filesystem objects in NN Web UI

2015-09-14 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-9065:
---
Attachment: HDFS-9065.003.patch

Making me work for it. :)  Resolved.

> Include commas on # of files, blocks, total filesystem objects in NN Web UI
> ---
>
> Key: HDFS-9065
> URL: https://issues.apache.org/jira/browse/HDFS-9065
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HDFS-9065.001.patch, HDFS-9065.002.patch, 
> HDFS-9065.003.patch, HDFS-9065.003.patch
>
>
> Include commas on the number of files, blocks, and total filesystem objects 
> in the NN Web UI (please see example below) to make the numbers easier to 
> read.
> Current format:
> 3236 files and directories, 1409 blocks = 4645 total filesystem object(s).
> Proposed format:
> 3,236 files and directories, 1,409 blocks = 4,645 total filesystem object(s).
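The actual change lands in the Web UI's JavaScript (dfs-dust.js); purely to 
illustrate the grouping itself, here is a small, self-contained Java sketch 
using locale-aware integer formatting:

{code}
import java.text.NumberFormat;
import java.util.Locale;

public class CommaDemo {
  public static void main(String[] args) {
    NumberFormat fmt = NumberFormat.getIntegerInstance(Locale.US);
    long files = 3236, blocks = 1409;
    // Prints: 3,236 files and directories, 1,409 blocks = 4,645 total filesystem object(s).
    System.out.printf("%s files and directories, %s blocks = %s total filesystem object(s).%n",
        fmt.format(files), fmt.format(blocks), fmt.format(files + blocks));
  }
}
{code}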



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6744) Improve decommissioning nodes and dead nodes access on the new NN webUI

2015-09-14 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744408#comment-14744408
 ] 

Ravi Prakash commented on HDFS-6744:


Siqi! Does HDFS-6407 solve this issue as well?

> Improve decommissioning nodes and dead nodes access on the new NN webUI
> ---
>
> Key: HDFS-6744
> URL: https://issues.apache.org/jira/browse/HDFS-6744
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6744.v1.patch, deadnodespage.png, 
> decomnodespage.png, livendoespage.png
>
>
> The new NN webUI lists live nodes at the top of the page, followed by dead 
> nodes and decommissioning nodes. From an admin's point of view:
> 1. Decommissioning nodes and dead nodes are more interesting. It is better to 
> move decommissioning nodes to the top of the page, followed by dead nodes and 
> live nodes.
> 2. To find decommissioning nodes or dead nodes, the whole page that includes 
> all nodes needs to be loaded. That could take some time for big clusters.
> The legacy web UI filters node types dynamically. That seems to 
> work well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-6746) Support datanode list pagination and filtering for big clusters on NN webUI

2015-09-14 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HDFS-6746.

Resolution: Duplicate

Duping to HDFS-6407

> Support datanode list pagination and filtering for big clusters on NN webUI
> ---
>
> Key: HDFS-6746
> URL: https://issues.apache.org/jira/browse/HDFS-6746
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>
> This isn't a major issue yet. Still, it might be good to add support for 
> pagination at some point, and maybe some filtering. For example, filtering is 
> useful for picking out the live nodes that belong to the same rack.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-6742) Support sorting datanode list on the new NN webUI

2015-09-14 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HDFS-6742.

Resolution: Duplicate

Duping to HDFS-6407

> Support sorting datanode list on the new NN webUI
> -
>
> Key: HDFS-6742
> URL: https://issues.apache.org/jira/browse/HDFS-6742
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Chen He
>
> The legacy webUI allows sorting the datanode list on a specific column such 
> as hostname. It is handy because admins can find patterns more quickly, 
> especially on big clusters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9065) Include commas on # of files, blocks, total filesystem objects in NN Web UI

2015-09-14 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744392#comment-14744392
 ] 

Haohui Mai commented on HDFS-9065:
--

Looks good to me. One nit:

{code}
+data['fs']['ObjectsTotal'] =
+data['fs']['FilesTotal'] + data['fs']['BlocksTotal'];
{code}

Can it be

{code}
data.fs.ObjectsTotal = +data.fs.FilesTotal + data.fs.BlocksTotal;
{code}

?

+1 after addressed.

> Include commas on # of files, blocks, total filesystem objects in NN Web UI
> ---
>
> Key: HDFS-9065
> URL: https://issues.apache.org/jira/browse/HDFS-9065
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HDFS-9065.001.patch, HDFS-9065.002.patch, 
> HDFS-9065.003.patch
>
>
> Include commas on the number of files, blocks, and total filesystem objects 
> in the NN Web UI (please see example below) to make the numbers easier to 
> read.
> Current format:
> 3236 files and directories, 1409 blocks = 4645 total filesystem object(s).
> Proposed format:
> 3,236 files and directories, 1,409 blocks = 4,645 total filesystem object(s).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7766) Add a flag to WebHDFS op=CREATE to not respond with a 307 redirect

2015-09-14 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-7766:
---
Attachment: HDFS-7766.02.patch

Sorry about that. I missed putting a new file into the patch.

> Add a flag to WebHDFS op=CREATE to not respond with a 307 redirect
> --
>
> Key: HDFS-7766
> URL: https://issues.apache.org/jira/browse/HDFS-7766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-7766.01.patch, HDFS-7766.02.patch
>
>
> Please see 
> https://issues.apache.org/jira/browse/HDFS-7588?focusedCommentId=14276192&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14276192
> A backwards-compatible way to fix this is to add a flag on the request 
> that disables the redirect, i.e.
> {noformat}
> curl -i -X PUT 
> "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=CREATE[&noredirect=<true|false>]"
> {noformat}
> which returns 200 with the DN location in the response.
> This would allow browser clients to get the redirect URL to put the file 
> to.
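To make the intended client flow concrete, here is a hedged Java sketch of how 
a client might use the proposed flag. The host, port, path, and the assumption 
that the 200 response body carries the datanode URL are illustrative only; the 
exact response shape is not settled in this thread.

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class NoRedirectCreate {
  public static void main(String[] args) throws Exception {
    // Hypothetical endpoint; substitute a real NameNode host/port and path.
    URL url = new URL(
        "http://nn.example.com:50070/webhdfs/v1/tmp/f.txt?op=CREATE&noredirect=true");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setInstanceFollowRedirects(false); // we expect a 200, not a 307
    System.out.println("HTTP " + conn.getResponseCode());
    // The body is expected to contain the datanode URL to PUT the data to.
    try (BufferedReader r = new BufferedReader(
        new InputStreamReader(conn.getInputStream()))) {
      r.lines().forEach(System.out::println);
    }
    conn.disconnect();
  }
}
{code}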



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7766) Add a flag to WebHDFS op=CREATE to not respond with a 307 redirect

2015-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744349#comment-14744349
 ] 

Hadoop QA commented on HDFS-7766:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  24m 14s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:red}-1{color} | javac |   2m  0s | The patch appears to cause the 
build to fail. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755835/HDFS-7766.01.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / 6955771 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12434/console |


This message was automatically generated.

> Add a flag to WebHDFS op=CREATE to not respond with a 307 redirect
> --
>
> Key: HDFS-7766
> URL: https://issues.apache.org/jira/browse/HDFS-7766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-7766.01.patch
>
>
> Please see 
> https://issues.apache.org/jira/browse/HDFS-7588?focusedCommentId=14276192&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14276192
> A backwards-compatible way to fix this is to add a flag on the request 
> that disables the redirect, i.e.
> {noformat}
> curl -i -X PUT 
> "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=CREATE[&noredirect=<true|false>]"
> {noformat}
> which returns 200 with the DN location in the response.
> This would allow browser clients to get the redirect URL to put the file 
> to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-14 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744316#comment-14744316
 ] 

Zhe Zhang commented on HDFS-9040:
-

bq. With BlockGroupDataStreamer I can make the 9 internal streamers wait for 
error handling to finish, and only then put empty_last_packet to all 9 
internal streamers to let them close their blockStreams.
bq. I actually did a similar thing: closeImpl() first lets all the streamers 
flush out all the data packets, then calls checkStreamerFailures to handle any 
failure during the data transfer, and in the end sends out the last empty 
packet to close the block. But the challenge here is that we cannot use the 
same way to handle a failure of the last empty packet, since successful 
streamers may have closed the block already.
If we can preallocate a fixed number of GS's (e.g. {{NUM_PARITY_BLOCKS}}), we 
can bump a streamer's GS by {{NUM_PARITY_BLOCKS}} when it successfully closes. 
When all healthy streamers successfully close, we should bump the NN version of 
GS. We might need to add hooks in NN accordingly.

> Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests 
> to Coordinator)
> ---
>
> Key: HDFS-9040
> URL: https://issues.apache.org/jira/browse/HDFS-9040
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
> Attachments: HDFS-9040.00.patch, HDFS-9040.001.wip.patch, 
> HDFS-9040.02.bgstreamer.patch
>
>
> The general idea is to simplify error handling logic.
> Proposal 1:
> A BlockGroupDataStreamer to communicate with NN to allocate/update block, and 
> StripedDataStreamer s only have to stream blocks to DNs.
> Proposal 2:
> See below the 
> [comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
>  from [~jingzhao].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-14 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744298#comment-14744298
 ] 

Zhe Zhang commented on HDFS-9040:
-

I just created HDFS-9079 to explore preallocating GS. Refreshing the token is 
a good point; I'll verify whether that's a problem.

> Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests 
> to Coordinator)
> ---
>
> Key: HDFS-9040
> URL: https://issues.apache.org/jira/browse/HDFS-9040
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
> Attachments: HDFS-9040.00.patch, HDFS-9040.001.wip.patch, 
> HDFS-9040.02.bgstreamer.patch
>
>
> The general idea is to simplify error handling logic.
> Proposal 1:
> A BlockGroupDataStreamer to communicate with NN to allocate/update block, and 
> StripedDataStreamer s only have to stream blocks to DNs.
> Proposal 2:
> See below the 
> [comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
>  from [~jingzhao].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9079) Erasure coding: preallocate multiple generation stamps when creating striped blocks

2015-09-14 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-9079:
---

 Summary: Erasure coding: preallocate multiple generation stamps 
when creating striped blocks
 Key: HDFS-9079
 URL: https://issues.apache.org/jira/browse/HDFS-9079
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang


A non-striped DataStreamer goes through the following steps in error handling:
{code}
1) Finds error => 2) Asks NN for new GS => 3) Gets new GS from NN => 4) Applies 
new GS to DN (createBlockOutputStream) => 5) Ack from DN => 6) Updates block on 
NN
{code}
To simplify the above we can preallocate GS's when NN creates a new striped 
block group ({{FSN#createNewBlock}}). For each new striped block group we can 
reserve {{NUM_PARITY_BLOCKS}} GS's. Then steps 1~3 in the above sequence can be 
skipped. If more than {{NUM_PARITY_BLOCKS}} errors have happened we shouldn't 
try to recover further anyway.
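A hypothetical sketch of the reservation arithmetic; the names and the 
client-side bookkeeping below are illustrative, not part of any patch here.

{code}
// The NN hands out a block group whose GS has NUM_PARITY_BLOCKS spare values
// above it; the client bumps locally on each failure instead of a fresh RPC.
class GsReservation {
  static final int NUM_PARITY_BLOCKS = 3; // e.g. a (6,3) schema

  private long currentGS;
  private int bumpsUsed;

  GsReservation(long initialGS) { this.currentGS = initialGS; }

  /** Returns the next reserved GS, or -1 once the reservation is spent. */
  long bumpOnFailure() {
    if (bumpsUsed >= NUM_PARITY_BLOCKS) {
      return -1; // more than NUM_PARITY_BLOCKS errors: stop recovering
    }
    bumpsUsed++;
    return ++currentGS;
  }
}
{code}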



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9040) Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests to Coordinator)

2015-09-14 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744281#comment-14744281
 ] 

Jing Zhao commented on HDFS-9040:
-

Thanks for the comments, Zhe, Walter and Bo!

bq. With BlockGroupDataStreamer I can make the 9 internal streamers wait for 
error handling to finish, and only then put empty_last_packet to all 9 
internal streamers to let them close their blockStreams.

I actually did a similar thing: closeImpl() first lets all the streamers flush 
out all the data packets, then calls checkStreamerFailures to handle any 
failure during the data transfer, and in the end sends out the last empty 
packet to close the block. But the challenge here is that we cannot use the 
same way to handle a failure of the last empty packet, since successful 
streamers may have closed the block already.

bq. preallocate GS when NN creates a new striped block group 
(FSN#createNewBlock).

This is a very good idea. We should spend more time exploring this 
optimization, but maybe as a follow-on task?

bq. The reason I prefer not to do locateFollowingBlock in DFSOutputStream is 
that DFSOutputStream is async with DataStreamer

Yeah, I can understand your concern. In the replication mechanism, the async 
implementation matches the single write pipeline model, and the datastreamer 
can handle its failures perfectly. But with 9 streamers in parallel, we need to 
1) sync all the streamers when writing a new block, and 2) stop all the 
streamers and assign them a new GS when a failure happens. Thus I think we'd 
better add some sync code in DFSStripedOutputStream. Also, in this way it 
becomes easier to calculate the block length and set/reset the external error 
state.
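As a toy illustration of the "sync point" idea (not the actual patch), all 
streamer threads could rendezvous at a block-group boundary before any of them 
moves on, e.g. with a barrier:

{code}
import java.util.concurrent.CyclicBarrier;

public class StreamerSyncDemo {
  static final int NUM_STREAMERS = 9; // 6 data + 3 parity

  public static void main(String[] args) {
    CyclicBarrier boundary = new CyclicBarrier(NUM_STREAMERS,
        () -> System.out.println("all streamers reached the block boundary"));
    for (int i = 0; i < NUM_STREAMERS; i++) {
      new Thread(() -> {
        try {
          // ... stream this block's packets ...
          boundary.await(); // wait for the other eight streamers
          // ... advance to the next block group (or apply a new GS) ...
        } catch (Exception e) {
          Thread.currentThread().interrupt();
        }
      }, "streamer-" + i).start();
    }
  }
}
{code}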

> Erasure coding: Refactor DFSStripedOutputStream (Move Namenode RPC Requests 
> to Coordinator)
> ---
>
> Key: HDFS-9040
> URL: https://issues.apache.org/jira/browse/HDFS-9040
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Walter Su
> Attachments: HDFS-9040.00.patch, HDFS-9040.001.wip.patch, 
> HDFS-9040.02.bgstreamer.patch
>
>
> The general idea is to simplify error handling logic.
> Proposal 1:
> A BlockGroupDataStreamer to communicate with NN to allocate/update block, and 
> StripedDataStreamer s only have to stream blocks to DNs.
> Proposal 2:
> See below the 
> [comment|https://issues.apache.org/jira/browse/HDFS-9040?focusedCommentId=14741388&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14741388]
>  from [~jingzhao].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7766) Add a flag to WebHDFS op=CREATE to not respond with a 307 redirect

2015-09-14 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-7766:
---
Status: Patch Available  (was: Open)

Once we agree on this, I can work on adding unit tests

> Add a flag to WebHDFS op=CREATE to not respond with a 307 redirect
> --
>
> Key: HDFS-7766
> URL: https://issues.apache.org/jira/browse/HDFS-7766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-7766.01.patch
>
>
> Please see 
> https://issues.apache.org/jira/browse/HDFS-7588?focusedCommentId=14276192&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14276192
> A backwards-compatible way to fix this is to add a flag on the request 
> that disables the redirect, i.e.
> {noformat}
> curl -i -X PUT 
> "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=CREATE[&noredirect=<true|false>]"
> {noformat}
> which returns 200 with the DN location in the response.
> This would allow browser clients to get the redirect URL to put the file 
> to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7766) Add a flag to WebHDFS op=CREATE to not respond with a 307 redirect

2015-09-14 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-7766:
---
Attachment: HDFS-7766.01.patch

Here's a patch which adds a new {{noredirect}} flag to {{CREATE}}, {{APPEND}}, 
{{GET}} and {{GETFILECHECKSUM}}

> Add a flag to WebHDFS op=CREATE to not respond with a 307 redirect
> --
>
> Key: HDFS-7766
> URL: https://issues.apache.org/jira/browse/HDFS-7766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-7766.01.patch
>
>
> Please see 
> https://issues.apache.org/jira/browse/HDFS-7588?focusedCommentId=14276192&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14276192
> A backwards-compatible way to fix this is to add a flag on the request 
> that disables the redirect, i.e.
> {noformat}
> curl -i -X PUT 
> "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=CREATE[&noredirect=<true|false>]"
> {noformat}
> which returns 200 with the DN location in the response.
> This would allow browser clients to get the redirect URL to put the file 
> to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9047) deprecate libwebhdfs in branch-2; remove from trunk

2015-09-14 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744247#comment-14744247
 ] 

Colin Patrick McCabe commented on HDFS-9047:


libwebhdfs fills a purpose that no other C library currently fills: it can be 
used without having the same version of the Hadoop jars on the system as the 
server code. While I agree that the current implementation is flawed (in 
particular, HDFS-3917 is a big gap), we should have something to replace it 
(something that is actually ready and checked into trunk) before we remove it. 
It's a self-contained piece of code and the maintenance burden is almost zero. 
The same can't be said for a lot of other things that we still keep around. -1 
until we have a replacement (which could very well be one of the native 
library efforts, once they're actually ready).

> deprecate libwebhdfs in branch-2; remove from trunk
> ---
>
> Key: HDFS-9047
> URL: https://issues.apache.org/jira/browse/HDFS-9047
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Allen Wittenauer
>
> This library is basically a mess:
> * It's not part of the mvn package
> * It's missing functionality and barely maintained
> * It's not in the precommit runs, so it doesn't get exercised regularly
> * It's not part of the unit tests (at least, that I can see)
> * It isn't documented in any official documentation
> But most importantly:  
> * It fails at its primary mission of being pure C (HDFS-3917 is STILL open)
> Let's cut our losses and just remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9010) Replace NameNode.DEFAULT_PORT with HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key

2015-09-14 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9010:

Attachment: HDFS-9010.005.patch

As Jenkins reported timeouts when we change both the hadoop-project and 
hadoop-hdfs-project modules, in this jira we focus on changes in 
hadoop-hdfs-project only. The v5 patch reverts the changes in hadoop-project 
(will file another jira about this).

> Replace NameNode.DEFAULT_PORT with 
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT config key
> 
>
> Key: HDFS-9010
> URL: https://issues.apache.org/jira/browse/HDFS-9010
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9010.000.patch, HDFS-9010.001.patch, 
> HDFS-9010.002.patch, HDFS-9010.003.patch, HDFS-9010.004.patch, 
> HDFS-9010.005.patch
>
>
> The {{NameNode.DEFAULT_PORT}} static attribute is stale as we now use the 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}} config value.
> This jira tracks the effort of replacing {{NameNode.DEFAULT_PORT}} with 
> {{HdfsClientConfigKeys.DFS_NAMENODE_RPC_PORT_DEFAULT}}. Meanwhile, we mark 
> {{NameNode.DEFAULT_PORT}} as _@Deprecated_ before removing it entirely.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8998) Small files storage supported inside HDFS

2015-09-14 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744196#comment-14744196
 ] 

Lei (Eddy) Xu commented on HDFS-8998:
-

Hi [~zhangyongxyz], thanks a lot for working on this.

I have a few more questions regarding your design.

* In the doc, you mentioned that the current HDFS small file designs 
(SequenceFile or HAR) have the following problems: ??bad opening performance, 
file deletions and access control??. Could you give a more explicit 
explanation of which problem(s) you are solving in this design?

If I understand correctly, this design offloads small file metadata from the 
"index file" in SequenceFile/HAR to inodes in the NN, so that it can keep 
files in blocks. Is that the case? Could you elaborate on the potential 
performance benefits, and the workloads this is suitable for?

* You also mentioned that SequenceFile/HAR are for read-only purposes. 

Is this design optimized for writes? 

* bq. Small file INodes in small file zone has no structure changed.

Would INodes need an {{offset}} to track the start position within a block? 
The design doc suggests that no metadata is stored on the DN?

One more question: should we keep track of deleted space in a block? Which 
server decides when a block is rewritten (compaction?)? It would be nice to 
see some analysis of the time/space complexity of the design.

* bq. After background restructure or merge, small file will not support 
**truncate**

These background processes are transparent to the end users. Users might get 
confused because these files can be truncated at some times but not at 
others. If truncate is not a common operation, can we move the truncated file 
to a new file? Speaking of this, it'd be nice to know the typical size of 
small files in your case.

I'd appreciate it if you can also address these questions in the updated 
design. 
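For the offset question above, a purely hypothetical shape of the per-file 
metadata, just to make the question concrete (this is a discussion aid, not 
something from the design doc):

{code}
// If several small files share one merged block, each file's NN-side
// metadata would need at least this much.
class SmallFileMeta {
  final long sharedBlockId; // the merged block holding many small files
  final long offset;        // start position of this file inside the block
  final long length;        // file length in bytes

  SmallFileMeta(long sharedBlockId, long offset, long length) {
    this.sharedBlockId = sharedBlockId;
    this.offset = offset;
    this.length = length;
  }
}
{code}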



> Small files storage supported inside HDFS
> -
>
> Key: HDFS-8998
> URL: https://issues.apache.org/jira/browse/HDFS-8998
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Yong Zhang
>Assignee: Yong Zhang
> Attachments: HDFS-8998.design.001.pdf
>
>
> HDFS has problems storing small files, as this blog explains 
> (http://blog.cloudera.com/blog/2009/02/the-small-files-problem).
> The blog also describes some ways to store small files in HDFS, but they are 
> not good ways; HAR files and Sequence Files seem better suited for read-only 
> files.
> Currently each HDFS block belongs to only one HDFS file, so if there are too 
> many small files, many small blocks will sit on the DataNodes, which puts the 
> DataNodes under heavy load.
> This jira will show how to merge small blocks online into big ones, how to 
> delete small files, and so on.
> Currently we have many open jiras for improving HDFS scalability on the 
> NameNode, such as HDFS-7836, HDFS-8286 and so on. 
> So small file metadata (INode and BlocksMap) will also stay in the NameNode.
> A design document will be uploaded soon. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8889) Erasure Coding: cover more test situations of datanode failure during client writing

2015-09-14 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744191#comment-14744191
 ] 

Zhe Zhang commented on HDFS-8889:
-

Thanks for the work, Bo. It's a great idea to test the write pipeline error 
handling more systematically. I just moved this JIRA to the follow-on umbrella 
together with the other write pipeline JIRAs.

> Erasure Coding: cover more test situations of datanode failure during client 
> writing
> 
>
> Key: HDFS-8889
> URL: https://issues.apache.org/jira/browse/HDFS-8889
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8889-HDFS-7285-001.patch
>
>
> Currently 9 streamers work together for client writing. A small number of 
> failed datanodes (<= 3) in a block group should not affect the writing. 
> There are a lot of datanode failure cases and we should cover as many as 
> possible in unit tests.
> Suppose streamer 4 fails, the following situations for the next block group 
> should be considered:
> 1)all streamers succeed
> 2)Streamer 4 still fails
> 3)only streamer 1 fails
> 4)only streamer 8 fails (test parity streamer)
> 5)streamer 4 and 6 fail
> 6)streamer 4 and 1,6 fail
> 7)streamer 4 and 1,2,6 fail
> 8)streamer 2, 6 fail
> Suppose streamer 2 and 4 fail, the following situations for the next block 
> group should be considered:
> 1)only streamer 2 and 4 fail
> 2)streamer 2, 4, 8 fail
> 3)only streamer 2 fails
> 4)streamer 3, 8 fail
> For a single streamer, we should consider the following situations of the 
> time of datanode failure:
> 1)before writing the first byte
> 2)before finishing writing the first cell
> 3)right after finishing writing the first cell
> 4)before writing the last byte of the block
> Other situations:
> 1)more than 3 streamers fail at the first block group
> 2)more than 3 streamers fail at the last block group
> 
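One way to keep this case table manageable is to drive it as data. A sketch 
only: the MiniDFSCluster setup and the kill logic are omitted, and the 
indices follow the numbering in the description above.

{code}
import java.util.Arrays;
import java.util.List;

// Sketch: the "streamer 4 failed previously" case table as test data.
public class FailureCases {
  static List<int[]> nextGroupFailuresAfterStreamer4() {
    return Arrays.asList(
        new int[] {},           // 1) all streamers succeed
        new int[] {4},          // 2) streamer 4 still fails
        new int[] {1},          // 3) only streamer 1 fails
        new int[] {8},          // 4) only streamer 8 fails (parity)
        new int[] {4, 6},       // 5) streamer 4 and 6 fail
        new int[] {4, 1, 6},    // 6) streamer 4 and 1, 6 fail
        new int[] {4, 1, 2, 6}, // 7) streamer 4 and 1, 2, 6 fail
        new int[] {2, 6});      // 8) streamer 2, 6 fail
  }
}
{code}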



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8889) Erasure Coding: cover more test situations of datanode failure during client writing

2015-09-14 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8889:

Parent Issue: HDFS-8031  (was: HDFS-7285)

> Erasure Coding: cover more test situations of datanode failure during client 
> writing
> 
>
> Key: HDFS-8889
> URL: https://issues.apache.org/jira/browse/HDFS-8889
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8889-HDFS-7285-001.patch
>
>
> Currently 9 streamers work together for client writing. A small number of 
> failed datanodes (<= 3) in a block group should not affect the writing. 
> There are a lot of datanode failure cases and we should cover as many as 
> possible in unit tests.
> Suppose streamer 4 fails, the following situations for the next block group 
> should be considered:
> 1)all streamers succeed
> 2)Streamer 4 still fails
> 3)only streamer 1 fails
> 4)only streamer 8 fails (test parity streamer)
> 5)streamer 4 and 6 fail
> 6)streamer 4 and 1,6 fail
> 7)streamer 4 and 1,2,6 fail
> 8)streamer 2, 6 fail
> Suppose streamer 2 and 4 fail, the following situations for the next block 
> group should be considered:
> 1)only streamer 2 and 4 fail
> 2)streamer 2, 4, 8 fail
> 3)only streamer 2 fails
> 4)streamer 3, 8 fail
> For a single streamer, we should consider the following situations of the 
> time of datanode failure:
> 1)before writing the first byte
> 2)before finishing writing the first cell
> 3)right after finishing writing the first cell
> 4)before writing the last byte of the block
> Other situations:
> 1)more than 3 streamers fail at the first block group
> 2)more than 3 streamers fail at the last block group
> 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8632) Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes

2015-09-14 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744180#comment-14744180
 ] 

Zhe Zhang commented on HDFS-8632:
-

[~rakeshr] [~walter.k.su] After HDFS-8833 I think the semantics should be 
considered finalized now. I just posted a revised patch on HDFS-7351. Maybe we 
should resume work on this JIRA as well?

> Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes
> --
>
> Key: HDFS-8632
> URL: https://issues.apache.org/jira/browse/HDFS-8632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8632-HDFS-7285-00.patch, 
> HDFS-8632-HDFS-7285-01.patch
>
>
> I've noticed that some of the erasure coding classes are missing the 
> {{@InterfaceAudience}} annotation. It would be good to identify those classes 
> and add the proper annotations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8873) throttle directoryScanner

2015-09-14 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-8873:
---
Attachment: HDFS-8873.002.patch

Cleaned up a couple of checkstyle and whitespace issues.

> throttle directoryScanner
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Attachments: HDFS-8873.001.patch, HDFS-8873.002.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791 for details). 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, 64K seeks are required for a full directory listing, which at ~10 ms 
> per seek translates to 655 seconds). 
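For illustration, a minimal duty-cycle throttle of the kind described, with 
hypothetical names and defaults; this is a sketch, not the HDFS-8873 patch 
itself.

{code}
// Scan for at most runMillis out of every periodMillis, sleeping the rest,
// so the disk gets regular idle windows during a directory scan.
class ScanThrottle {
  private final long runMillis;    // e.g. 300 ms of scanning...
  private final long periodMillis; // ...out of every 1000 ms
  private long windowStart = System.currentTimeMillis();

  ScanThrottle(long runMillis, long periodMillis) {
    this.runMillis = runMillis;
    this.periodMillis = periodMillis;
  }

  /** Call between directory entries; blocks once the duty cycle is spent. */
  void maybePause() throws InterruptedException {
    if (System.currentTimeMillis() - windowStart >= runMillis) {
      Thread.sleep(periodMillis - runMillis); // idle the rest of the window
      windowStart = System.currentTimeMillis();
    }
  }
}
{code}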



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8550) Erasure Coding: Fix FindBugs Multithreaded correctness Warning

2015-09-14 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14744177#comment-14744177
 ] 

Zhe Zhang commented on HDFS-8550:
-

[~rakeshr] I wonder if you've had a chance to address this issue? Any help I 
can provide?

> Erasure Coding: Fix FindBugs Multithreaded correctness Warning
> --
>
> Key: HDFS-8550
> URL: https://issues.apache.org/jira/browse/HDFS-8550
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
>
> FindBugs warning: Inconsistent synchronization of 
> org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 89% of the time
> {code}
> Bug type IS2_INCONSISTENT_SYNC (click for details) 
> In class org.apache.hadoop.hdfs.DFSOutputStream
> Field org.apache.hadoop.hdfs.DFSOutputStream.streamer
> Synchronized 89% of the time
> Unsynchronized access at DFSOutputStream.java:[line 146]
> Unsynchronized access at DFSOutputStream.java:[line 859]
> Unsynchronized access at DFSOutputStream.java:[line 627]
> Unsynchronized access at DFSOutputStream.java:[line 630]
> Unsynchronized access at DFSOutputStream.java:[line 640]
> Unsynchronized access at DFSOutputStream.java:[line 342]
> Unsynchronized access at DFSOutputStream.java:[line 744]
> Unsynchronized access at DFSOutputStream.java:[line 903]
> Synchronized access at DFSOutputStream.java:[line 737]
> Synchronized access at DFSOutputStream.java:[line 913]
> Synchronized access at DFSOutputStream.java:[line 726]
> Synchronized access at DFSOutputStream.java:[line 756]
> Synchronized access at DFSOutputStream.java:[line 762]
> Synchronized access at DFSOutputStream.java:[line 757]
> Synchronized access at DFSOutputStream.java:[line 758]
> Synchronized access at DFSOutputStream.java:[line 762]
> Synchronized access at DFSOutputStream.java:[line 483]
> Synchronized access at DFSOutputStream.java:[line 486]
> Synchronized access at DFSOutputStream.java:[line 717]
> Synchronized access at DFSOutputStream.java:[line 719]
> Synchronized access at DFSOutputStream.java:[line 722]
> Synchronized access at DFSOutputStream.java:[line 408]
> Synchronized access at DFSOutputStream.java:[line 408]
> Synchronized access at DFSOutputStream.java:[line 423]
> Synchronized access at DFSOutputStream.java:[line 426]
> Synchronized access at DFSOutputStream.java:[line 411]
> Synchronized access at DFSOutputStream.java:[line 452]
> Synchronized access at DFSOutputStream.java:[line 452]
> Synchronized access at DFSOutputStream.java:[line 439]
> Synchronized access at DFSOutputStream.java:[line 439]
> Synchronized access at DFSOutputStream.java:[line 439]
> Synchronized access at DFSOutputStream.java:[line 670]
> Synchronized access at DFSOutputStream.java:[line 580]
> Synchronized access at DFSOutputStream.java:[line 574]
> Synchronized access at DFSOutputStream.java:[line 592]
> Synchronized access at DFSOutputStream.java:[line 583]
> Synchronized access at DFSOutputStream.java:[line 581]
> Synchronized access at DFSOutputStream.java:[line 621]
> Synchronized access at DFSOutputStream.java:[line 609]
> Synchronized access at DFSOutputStream.java:[line 621]
> Synchronized access at DFSOutputStream.java:[line 597]
> Synchronized access at DFSOutputStream.java:[line 612]
> Synchronized access at DFSOutputStream.java:[line 597]
> Synchronized access at DFSOutputStream.java:[line 588]
> Synchronized access at DFSOutputStream.java:[line 624]
> Synchronized access at DFSOutputStream.java:[line 612]
> Synchronized access at DFSOutputStream.java:[line 588]
> Synchronized access at DFSOutputStream.java:[line 632]
> Synchronized access at DFSOutputStream.java:[line 632]
> Synchronized access at DFSOutputStream.java:[line 616]
> Synchronized access at DFSOutputStream.java:[line 633]
> Synchronized access at DFSOutputStream.java:[line 657]
> Synchronized access at DFSOutputStream.java:[line 658]
> Synchronized access at DFSOutputStream.java:[line 695]
> Synchronized access at DFSOutputStream.java:[line 698]
> Synchronized access at DFSOutputStream.java:[line 784]
> Synchronized access at DFSOutputStream.java:[line 795]
> Synchronized access at DFSOutputStream.java:[line 801]
> Synchronized access at DFSOutputStream.java:[line 155]
> Synchronized access at DFSOutputStream.java:[line 158]
> Synchronized access at DFSOutputStream.java:[line 433]
> Synchronized access at DFSOutputStream.java:[line 886]
> Synchronized access at DFSOutputStream.java:[line 463]
> Synchronized access at DFSOutputStream.java:[line 469]
> Synchronized access at DFSOutputStream.java:[line 463]
> Synchronized access at DFSOutputStream.java:[line 470]
> Synchronized access at DFSOutputStream.java:[line 465]
> Synchronized access at DFSOutputStream.java:[line 749]
> Synchronized access at DFSStripedOutputStream.java:[line 260]
> Synchronized a

[jira] [Updated] (HDFS-8873) throttle directoryScanner

2015-09-14 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-8873:
---
Attachment: (was: HDFS-8873.002.patch)

> throttle directoryScanner
> -
>
> Key: HDFS-8873
> URL: https://issues.apache.org/jira/browse/HDFS-8873
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Daniel Templeton
> Attachments: HDFS-8873.001.patch
>
>
> The new 2-level directory layout can make directory scans expensive in terms 
> of disk seeks (see HDFS-8791 for details). 
> It would be good if the directoryScanner() had a configurable duty cycle that 
> would reduce its impact on disk performance (much like the approach in 
> HDFS-8617). 
> Without such a throttle, disks can go 100% busy for many minutes at a time 
> (assuming the common case of all inodes in cache but no directory blocks 
> cached, 64K seeks are required for a full directory listing, which at ~10 ms 
> per seek translates to 655 seconds). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

