[jira] [Commented] (HDFS-10430) Reuse FileSystem#access in TestAsyncDFS

2016-05-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303547#comment-15303547
 ] 

Chris Nauroth commented on HDFS-10430:
--

[~xiaobingo], this looks like a good change.  Thank you for the patch.  I have 
just 2 minor comments:
# The {{checkAccessPermissions}} helper method seems unnecessary now that it's 
just a pass-through to a single line of code.  Do you think it makes sense to 
move {{fs.access(path, mode);}} inline with {{testConcurrentAsyncAPI}} and 
remove the extra method?
# Please remove the unused imports reported by Checkstyle.
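
For illustration, a minimal sketch of the suggested change (the helper shape and 
variable names are assumptions based on this comment, not the actual patch):
{code}
// Assumed current shape: a helper that only forwards to FileSystem#access.
private static void checkAccessPermissions(FileSystem fs, Path path, FsAction mode)
    throws IOException {
  fs.access(path, mode);  // throws AccessControlException if access is denied
}

// Suggested shape: call FileSystem#access directly inside testConcurrentAsyncAPI
// and delete the helper above.
fs.access(path, FsAction.READ);
{code}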

> Reuse FileSystem#access in TestAsyncDFS
> ---
>
> Key: HDFS-10430
> URL: https://issues.apache.org/jira/browse/HDFS-10430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10430-HDFS-9924.000.patch, 
> HDFS-10430-HDFS-9924.001.patch
>
>
> In TestAsyncDFS, there is duplicate code for doing access checks. This issue 
> reuses FileSystem#access for the same goal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10236) Erasure Coding: Rename replication-based names in BlockManager to more generic [part-3]

2016-05-26 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303473#comment-15303473
 ] 

Rakesh R commented on HDFS-10236:
-

Thanks [~zhz] for committing the patch!

> Erasure Coding: Rename replication-based names in BlockManager to more 
> generic [part-3]
> ---
>
> Key: HDFS-10236
> URL: https://issues.apache.org/jira/browse/HDFS-10236
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: reviewed
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10236-00.patch, HDFS-10236-01.patch, 
> HDFS-10236-02.patch
>
>
> The idea of this jira is to rename the following entity in BlockManager:
> {{getExpectedReplicaNum}} to {{getExpectedRedundancyNum}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8718) Block replicating cannot work after upgrading to 2.7

2016-05-26 Thread He Xiaoqiao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303438#comment-15303438
 ] 

He Xiaoqiao commented on HDFS-8718:
---

hi [~jianbginglover] and [~kanaka], ReplicationMonitor gets stuck for a long 
time due to the *Global Lock*, and this causes block replication to not work as 
expected. I created a new issue 
[HDFS-10453|https://issues.apache.org/jira/browse/HDFS-10453] to describe this 
problem in detail and uploaded a patch with a solution.

> Block replicating cannot work after upgrading to 2.7 
> -
>
> Key: HDFS-8718
> URL: https://issues.apache.org/jira/browse/HDFS-8718
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Bing Jiang
>
> After decommissioning a datanode from Hadoop, HDFS calculates the correct 
> number of blocks to be replicated, as shown in the web UI:
> {code}
> Decommissioning
> Node                                    Last contact  Under replicated blocks  Blocks with no live replicas  Under Replicated Blocks In files under construction
> TS-BHTEST-03:50010 (172.22.49.3:50010)                25641                    0                             0
> {code}
> From the NN's log, block replication cannot proceed due to an inconsistent 
> expected storage type.
> {code}
> Node /default/rack_02/172.22.49.5:50010 [
>   Storage [DISK]DS-3915533b-4ae4-4806-bf83caf1446f1e2f:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
>   Storage [DISK]DS-3e54c331-3eaf-4447-b5e4-9bf91bc71b17:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
>   Storage [DISK]DS-d44fa611-aa73-4415-a2de-7e73c9c5ea68:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
>   Storage [DISK]DS-cebbf410-06a0-4171-a9bd-d0db55dad6d3:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
>   Storage [DISK]DS-4c50b1c7-eaad-4858-b476-99dec17d68b5:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
>   Storage [DISK]DS-f6cf9123-4125-4234-8e21-34b12170e576:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
>   Storage [DISK]DS-7601b634-1761-45cc-9ffd-73ee8687c2a7:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
>   Storage [DISK]DS-1d4b91ab-fe2f-4d5f-bd0a-57e9a0714654:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
>   Storage [DISK]DS-cd2279cf-9c5a-4380-8c41-7681fa688eaf:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
>   Storage [DISK]DS-630c734f-334a-466d-9649-4818d6e91181:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
>   Storage [DISK]DS-31cd0d68-5f7c-4a0a-91e6-afa53c4df820:NORMAL:172.22.49.5:50010 is not chosen since storage types do not match, where the required storage type is ARCHIVE.
> ]
> 2015-07-07 16:00:22,032 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
> 2015-07-07 16:00:22,032 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All required storage types are unavailable: unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
> {code}
> We previously upgraded the Hadoop cluster from 2.5 to 2.7.0. I believe the 
> ARCHIVE storage feature is now in force, but what about the blocks' storage 
> type after upgrading?
> The default BlockStoragePolicy is HOT, and I guess those blocks do not carry 
> the correct BlockStoragePolicy information, so they cannot be handled well.
> After I shut down the datanode, the under-replicated blocks can be scheduled 
> for copying, so the workaround is to shut down the datanode.
> Could anyone take a look at this issue?
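
Not part of the original report, but a hedged sketch of how one might test the 
guess above by explicitly pinning the default policy on an affected path (the 
path is illustrative, and this assumes the default FileSystem is HDFS):
{code}
// Hypothetical diagnostic: pin the all-DISK HOT policy explicitly and list the
// policies the NameNode knows about.
Configuration conf = new HdfsConfiguration();
DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
dfs.setStoragePolicy(new Path("/affected/dir"), "HOT");  // HOT: storageTypes=[DISK]
for (BlockStoragePolicy policy : dfs.getStoragePolicies()) {
  System.out.println(policy);
}
{code}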



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10430) Reuse FileSystem#access in TestAsyncDFS

2016-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303366#comment-15303366
 ] 

Hadoop QA commented on HDFS-10430:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 26s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 3 new + 
0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 39s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 55s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806532/HDFS-10430-HDFS-9924.001.patch
 |
| JIRA Issue | HDFS-10430 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 38ffdd053ae1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8c84a2a |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15583/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15583/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15583/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15583/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15583/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDFS-10430) Reuse FileSystem#access in TestAsyncDFS

2016-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303353#comment-15303353
 ] 

Hadoop QA commented on HDFS-10430:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 3 new + 
0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 57m 59s 
{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 55s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806532/HDFS-10430-HDFS-9924.001.patch
 |
| JIRA Issue | HDFS-10430 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 973a4c5fa586 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8c84a2a |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15584/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15584/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15584/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Reuse FileSystem#access in TestAsyncDFS
> ---
>
> Key: HDFS-10430
> URL: https://issues.apache.org/jira/browse/HDFS-10430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10430-HDFS-9924.000.patch, 
> HDFS-10430-HDFS-9924.001.patch

[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-05-26 Thread Vinitha Reddy Gankidi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303287#comment-15303287
 ] 

Vinitha Reddy Gankidi commented on HDFS-10301:
--

 If we do have the check for stale storages before zombie storage removal, 
{{noStaleStorages}} in NameNodeRpcServer should be set to true when 
{{isStorageReport}} is true.

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Colin Patrick McCabe
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.01.patch, 
> HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out sending a block report, so 
> it sends the block report again. The NameNode, while processing these two 
> reports at the same time, can interleave processing of storages from different 
> reports. This corrupts the blockReportId field, which makes the NameNode think 
> that some storages are zombies. Replicas from zombie storages are immediately 
> removed, causing missing blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10430) Reuse FileSystem#access in TestAsyncDFS

2016-05-26 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10430:
-
Attachment: HDFS-10430-HDFS-9924.001.patch

v001 patch fixed some typos.

> Reuse FileSystem#access in TestAsyncDFS
> ---
>
> Key: HDFS-10430
> URL: https://issues.apache.org/jira/browse/HDFS-10430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10430-HDFS-9924.000.patch, 
> HDFS-10430-HDFS-9924.001.patch
>
>
> In TestAsyncDFS, there is duplicate code for doing access checks. This issue 
> reuses FileSystem#access for the same goal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10430) Reuse FileSystem#access in TestAsyncDFS

2016-05-26 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10430:
-
Description: In TestAsyncDFS, there is duplicate code for doing access checks. 
This issue reuses FileSystem#access for the same goal.  (was: In TestAsyncIPC, 
there is duplicate code for doing access checks. This issue reuses 
FileSystem#access for the same goal.)

> Reuse FileSystem#access in TestAsyncDFS
> ---
>
> Key: HDFS-10430
> URL: https://issues.apache.org/jira/browse/HDFS-10430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10430-HDFS-9924.000.patch
>
>
> In TestAsyncDFS, there is duplicate code for doing access checks. This issue 
> reuses FileSystem#access for the same goal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10430) Reuse FileSystem#access in TestAsyncDFS

2016-05-26 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10430:
-
Summary: Reuse FileSystem#access in TestAsyncDFS  (was: Reuse 
FileSystem#access in TestAsyncIPC)

> Reuse FileSystem#access in TestAsyncDFS
> ---
>
> Key: HDFS-10430
> URL: https://issues.apache.org/jira/browse/HDFS-10430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10430-HDFS-9924.000.patch
>
>
> In TestAsyncIPC, there is duplicate code for doing access checks. This issue 
> reuses FileSystem#access for the same goal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10430) Reuse FileSystem#access in TestAsyncIPC

2016-05-26 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10430:
-
Status: Patch Available  (was: Open)

> Reuse FileSystem#access in TestAsyncIPC
> ---
>
> Key: HDFS-10430
> URL: https://issues.apache.org/jira/browse/HDFS-10430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10430-HDFS-9924.000.patch
>
>
> In TestAsyncIPC, there is duplicate code for doing access checks. This issue 
> reuses FileSystem#access for the same goal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10430) Reuse FileSystem#access in TestAsyncIPC

2016-05-26 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10430:
-
Attachment: HDFS-10430-HDFS-9924.000.patch

> Reuse FileSystem#access in TestAsyncIPC
> ---
>
> Key: HDFS-10430
> URL: https://issues.apache.org/jira/browse/HDFS-10430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10430-HDFS-9924.000.patch
>
>
> In TestAsyncIPC, there is duplicate code for doing access checks. This issue 
> reuses FileSystem#access for the same goal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10430) Reuse FileSystem#access in TestAsyncIPC

2016-05-26 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303279#comment-15303279
 ] 

Xiaobing Zhou commented on HDFS-10430:
--

I posted a simple patch v000 for review. Thanks.

> Reuse FileSystem#access in TestAsyncIPC
> ---
>
> Key: HDFS-10430
> URL: https://issues.apache.org/jira/browse/HDFS-10430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10430-HDFS-9924.000.patch
>
>
> In TestAsyncIPC, there is duplicate code for doing access checks. This issue 
> reuses FileSystem#access for the same goal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10430) Reuse FileSystem#access in TestAsyncIPC

2016-05-26 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10430:
-
Description: In TestAsyncIPC, there is duplicate code for doing access checks. 
This issue reuses FileSystem#access for the same goal.  (was: 
FileSystem#checkAccessPermissions could be used in a bunch of tests from 
different projects, but it's in hadoop-common, which is not visible in some 
cases.)

> Reuse FileSystem#access in TestAsyncIPC
> ---
>
> Key: HDFS-10430
> URL: https://issues.apache.org/jira/browse/HDFS-10430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> In TestAsyncIPC, there is duplicate code for doing access checks. This issue 
> reuses FileSystem#access for the same goal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10430) Reuse FileSystem#access in TestAsyncIPC

2016-05-26 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303273#comment-15303273
 ] 

Xiaobing Zhou commented on HDFS-10430:
--

Thank you [~cnauroth] and [~boky01] for the comments. As the test code evolved, 
there is no longer a need for the refactoring originally proposed. Let's reuse 
FileSystem#access here. 

> Reuse FileSystem#access in TestAsyncIPC
> ---
>
> Key: HDFS-10430
> URL: https://issues.apache.org/jira/browse/HDFS-10430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> FileSystem#checkAccessPermissions could be used in a bunch of tests from 
> different projects, but it's in hadoop-common, which is not visible in some 
> cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-05-26 Thread Vinitha Reddy Gankidi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303272#comment-15303272
 ] 

Vinitha Reddy Gankidi commented on HDFS-10301:
--

I looked into why the test {{TestAddOverReplicatedStripedBlocks}} fails with 
patch 004. I don't completely understand why the test relies on the fact that 
zombie storages should be removed when the DN has stale storages. Probably the 
test needs to be modified. Here are my findings:

With the patch, the test fails with the following error:
{code}
java.lang.AssertionError: expected:<10> but was:<11>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks.testProcessOverReplicatedAndMissingStripedBlock(TestAddOverReplicatedStripedBlocks.java:281)
{code}

In the test, {{DFSUtil.createStripedFile}} is invoked in the beginning.
{code}
 /**
   * Creates the metadata of a file in striped layout. This method only
   * manipulates the NameNode state without injecting data to DataNode.
   * You should disable the periodic heartbeat before using this.
   *  @param file Path of the file to create
   * @param dir Parent path of the file
   * @param numBlocks Number of striped block groups to add to the file
   * @param numStripesPerBlk Number of striped cells in each block
   * @param toMkdir
   */
  public static void createStripedFile(MiniDFSCluster cluster, Path file, Path 
dir,
  int numBlocks, int numStripesPerBlk, boolean toMkdir) throws Exception {
{code}

This internally calls the {{DFSUtil.addBlockToFile}} method, which mimics block 
reports. While processing these mimicked block reports, we update the datanode 
storages. In the test output, you can see the storages being added.
{code}
2016-05-26 17:10:03,330 [Thread-0] INFO  blockmanagement.DatanodeDescriptor 
(DatanodeDescriptor.java:updateStorage(912)) - Adding new storage ID 
9505a2ad-78f4-45d7-9c13-2ecd92a06866 for DN 127.0.0.1:60835
2016-05-26 17:10:03,331 [Thread-0] INFO  blockmanagement.DatanodeDescriptor 
(DatanodeDescriptor.java:updateStorage(912)) - Adding new storage ID 
d4bb2f70-4a1e-451f-9d47-a2967f819130 for DN 127.0.0.1:60839
2016-05-26 17:10:03,332 [Thread-0] INFO  blockmanagement.DatanodeDescriptor 
(DatanodeDescriptor.java:updateStorage(912)) - Adding new storage ID 
841fc92f-fa15-4ced-8487-96ca4e6996d0 for DN 127.0.0.1:60844
2016-05-26 17:10:03,332 [Thread-0] INFO  blockmanagement.DatanodeDescriptor 
(DatanodeDescriptor.java:updateStorage(912)) - Adding new storage ID 
304aaeeb-e2d0-4427-81c6-c79e4d0b6a4e for DN 127.0.0.1:60849
2016-05-26 17:10:03,332 [Thread-0] INFO  blockmanagement.DatanodeDescriptor 
(DatanodeDescriptor.java:updateStorage(912)) - Adding new storage ID 
2d046d66-26fc-448f-938c-04dda2ecf34a for DN 127.0.0.1:60853
2016-05-26 17:10:03,333 [Thread-0] INFO  blockmanagement.DatanodeDescriptor 
(DatanodeDescriptor.java:updateStorage(912)) - Adding new storage ID 
381d3151-e75e-434a-86f8-da5c83f22b19 for DN 127.0.0.1:60857
2016-05-26 17:10:03,333 [Thread-0] INFO  blockmanagement.DatanodeDescriptor 
(DatanodeDescriptor.java:updateStorage(912)) - Adding new storage ID 
71f72bc9-9c66-478f-a0d7-3f0c7fc23964 for DN 127.0.0.1:60861
2016-05-26 17:10:03,333 [Thread-0] INFO  blockmanagement.DatanodeDescriptor 
(DatanodeDescriptor.java:updateStorage(912)) - Adding new storage ID 
4dc539f3-b7a9-4145-a313-fa99ca1dd779 for DN 127.0.0.1:60865
2016-05-26 17:10:03,333 [Thread-0] INFO  blockmanagement.DatanodeDescriptor 
(DatanodeDescriptor.java:updateStorage(912)) - Adding new storage ID 
734ea366-e635-4715-97d5-196bfcdccb18 for DN 127.0.0.1:60869
2016-05-26 17:10:03,334 [Thread-0] INFO  blockmanagement.DatanodeDescriptor 
(DatanodeDescriptor.java:updateStorage(912)) - Adding new storage ID 
c639de06-e85c-4e93-92d2-506a49d4e41c for DN 127.0.0.1:60835
2016-05-26 17:10:03,343 [Thread-0] INFO  blockmanagement.DatanodeDescriptor 
(DatanodeDescriptor.java:updateStorage(912)) - Adding new storage ID 
a82ff231-d630-4799-907d-f0a72ff06b38 for DN 127.0.0.1:60839
2016-05-26 17:10:03,343 [Thread-0] INFO  blockmanagement.DatanodeDescriptor 
(DatanodeDescriptor.java:updateStorage(912)) - Adding new storage ID 
328c3467-0507-45fd-9aac-73a38165f741 for DN 127.0.0.1:60844
2016-05-26 17:10:03,343 [Thread-0] INFO  blockmanagement.DatanodeDescriptor 
(DatanodeDescriptor.java:updateStorage(912)) - Adding new storage ID 
0b2a3b7f-e065-4e9a-9908-024091393738 for DN 127.0.0.1:60849
2016-05-26 17:10:03,344 [Thread-0] INFO  blockmanagement.DatanodeDescriptor 
(DatanodeDescriptor.java:updateStorage(912)) - Adding new storage ID 
3654a0ce-8389-40bf-b8d3-08cc49895a7d for DN 127.0.0.1:60853
2016-05-26 17:10:03,344 
{code}

[jira] [Updated] (HDFS-10430) Reuse FileSystem#access in TestAsyncIPC

2016-05-26 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10430:
-
Summary: Reuse FileSystem#access in TestAsyncIPC  (was: Refactor 
FileSystem#checkAccessPermissions for better reuse from tests)

> Reuse FileSystem#access in TestAsyncIPC
> ---
>
> Key: HDFS-10430
> URL: https://issues.apache.org/jira/browse/HDFS-10430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> FileSystem#checkAccessPermissions could be used in a bunch of tests from 
> different projects, but it's in hadoop-common, which is not visible in some 
> cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10236) Erasure Coding: Rename replication-based names in BlockManager to more generic [part-3]

2016-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303232#comment-15303232
 ] 

Hudson commented on HDFS-10236:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #9871 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9871/])
HDFS-10236. Erasure Coding: Rename replication-based names in (zhz: rev 
8c84a2a93c22a93b4ff46dd917f6efb995675fbd)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java


> Erasure Coding: Rename replication-based names in BlockManager to more 
> generic [part-3]
> ---
>
> Key: HDFS-10236
> URL: https://issues.apache.org/jira/browse/HDFS-10236
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: reviewed
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10236-00.patch, HDFS-10236-01.patch, 
> HDFS-10236-02.patch
>
>
> The idea of this jira is to rename the following entity in BlockManager:
> {{getExpectedReplicaNum}} to {{getExpectedRedundancyNum}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10415) TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2

2016-05-26 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10415:
-
Affects Version/s: 2.8.0  (was: 2.9.0)
 Target Version/s: 2.8.0

> TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2
> --
>
> Key: HDFS-10415
> URL: https://issues.apache.org/jira/browse/HDFS-10415
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
> Environment: jenkins
>Reporter: Sangjin Lee
>Assignee: Mingliang Liu
> Attachments: HDFS-10415-branch-2.000.patch, 
> HDFS-10415-branch-2.001.patch, HDFS-10415.000.patch
>
>
> {noformat}
> Tests run: 24, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 51.096 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestDistributedFileSystem
> testDFSCloseOrdering(org.apache.hadoop.hdfs.TestDistributedFileSystem)  Time 
> elapsed: 0.045 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:790)
>   at 
> org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1417)
>   at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2084)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1187)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSCloseOrdering(TestDistributedFileSystem.java:217)
> {noformat}
> This is with Java 8 on Mac. It passes fine on trunk. I haven't tried other 
> combinations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10236) Erasure Coding: Rename replication-based names in BlockManager to more generic [part-3]

2016-05-26 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10236:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha1
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks Rakesh for the work.

> Erasure Coding: Rename replication-based names in BlockManager to more 
> generic [part-3]
> ---
>
> Key: HDFS-10236
> URL: https://issues.apache.org/jira/browse/HDFS-10236
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: reviewed
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10236-00.patch, HDFS-10236-01.patch, 
> HDFS-10236-02.patch
>
>
> The idea of this jira is to rename the following entity in BlockManager:
> {{getExpectedReplicaNum}} to {{getExpectedRedundancyNum}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10236) Erasure Coding: Rename replication-based names in BlockManager to more generic [part-3]

2016-05-26 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10236:
-
Labels: reviewed  (was: )

> Erasure Coding: Rename replication-based names in BlockManager to more 
> generic [part-3]
> ---
>
> Key: HDFS-10236
> URL: https://issues.apache.org/jira/browse/HDFS-10236
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: reviewed
> Attachments: HDFS-10236-00.patch, HDFS-10236-01.patch, 
> HDFS-10236-02.patch
>
>
> The idea of this jira is to rename the following entity in BlockManager:
> {{getExpectedReplicaNum}} to {{getExpectedRedundancyNum}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10236) Erasure Coding: Rename replication-based names in BlockManager to more generic [part-3]

2016-05-26 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10236:
-

Thanks Rakesh. +1 on the patch, I'm committing it soon.

> Erasure Coding: Rename replication-based names in BlockManager to more 
> generic [part-3]
> ---
>
> Key: HDFS-10236
> URL: https://issues.apache.org/jira/browse/HDFS-10236
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: Rakesh R
>  Labels: reviewed
> Attachments: HDFS-10236-00.patch, HDFS-10236-01.patch, 
> HDFS-10236-02.patch
>
>
> The idea of this jira is to rename the following entity in BlockManager:
> {{getExpectedReplicaNum}} to {{getExpectedRedundancyNum}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10466) DistributedFileSystem.listLocatedStatus() should return HdfsBlockLocation instead of BlockLocation

2016-05-26 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303161#comment-15303161
 ] 

Andrew Wang commented on HDFS-10466:


Thanks for the patch, Juan; LGTM overall. There are a couple of long lines found 
by checkstyle.

Do you want to add a unit test, too, to make sure you can dig the info you want 
out of this API?
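
Not from the patch itself -- a hedged sketch of what such a unit test could 
assert, assuming the patch makes the cast below valid:
{code}
// Hypothetical test body: walk listLocatedStatus() results and recover the
// LocatedBlock details that the patch is expected to preserve.
RemoteIterator<LocatedFileStatus> it = dfs.listLocatedStatus(dir);
while (it.hasNext()) {
  LocatedFileStatus status = it.next();
  for (BlockLocation loc : status.getBlockLocations()) {
    // Only valid if listLocatedStatus() returns HdfsBlockLocation, as proposed.
    LocatedBlock lb = ((HdfsBlockLocation) loc).getLocatedBlock();
    assertNotNull(lb.getBlock());  // full block details should be available
  }
}
{code}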

> DistributedFileSystem.listLocatedStatus() should return HdfsBlockLocation 
> instead of BlockLocation
> --
>
> Key: HDFS-10466
> URL: https://issues.apache.org/jira/browse/HDFS-10466
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Juan Yu
>Assignee: Juan Yu
>Priority: Minor
> Attachments: HDFS-10466.001.patch, HDFS-10466.patch
>
>
> https://issues.apache.org/jira/browse/HDFS-202 added a new API, 
> listLocatedStatus(), to get all files' statuses with block locations for a 
> directory. It's great that we don't need to call 
> FileSystem.getFileBlockLocations() for each file; it's much faster (about 
> 8-10 times).
> However, the returned LocatedFileStatus only contains a basic BlockLocation 
> instead of an HdfsBlockLocation; the LocatedBlock details are stripped out.
> It should do the same as DFSClient.getBlockLocations() and return 
> HdfsBlockLocation, which provides full block location details.
> The implementation of DistributedFileSystem.listLocatedStatus() retrieves 
> HdfsLocatedFileStatus, which contains all the information, but when converting 
> it to LocatedFileStatus, it doesn't keep the LocatedBlock data. It's a simple 
> (and compatible) change to keep the LocatedBlock details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-26 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

Status: Open  (was: Patch Available)

Resubmit patch to kick Hadoop QA

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Priority: Trivial
> Attachments: HDFS-10425.01.patch
>
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-26 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

Status: Patch Available  (was: Open)

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Priority: Trivial
> Attachments: HDFS-10425.01.patch
>
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7240) Object store in HDFS

2016-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303085#comment-15303085
 ] 

Colin Patrick McCabe commented on HDFS-7240:


bq. Correct me if I am wrong – before Andrew Wang's contribution, symlinks were 
somehow working (based on Eli Collins's work). After Andrew's work, we had no 
choice but to disable the symlink feature. In this sense, symlinks became even 
worse. Anyway, Andrew/Eli, any plan to fix symlinks?

Symlinks were broken before Andrew started working on them.  They had serious 
security, performance, and usability issues.  If you are interested in learning 
more about the issues and helping to fix them, take a look at HADOOP-10019.  
They were disabled to avoid exposing people to serious security risks.  In the 
meantime, I will note that you were one of the reviewers on the JIRA that 
initially introduced symlinks, HDFS-245, before Andrew or I had even started 
working on Hadoop.

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10463) TestRollingFileSystemSinkWithHdfs needs some cleanup

2016-05-26 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-10463:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Just committed this to trunk and branch-2. 

Thanks [~templedf] for fixing the test, and [~atm] for the review. 



> TestRollingFileSystemSinkWithHdfs needs some cleanup
> 
>
> Key: HDFS-10463
> URL: https://issues.apache.org/jira/browse/HDFS-10463
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Fix For: 2.9.0
>
> Attachments: HDFS-10463.001.patch, HDFS-10463.branch-2.001.patch
>
>
> There are three primary issues.  The most significant is that the 
> {{testFlushThread()}} method doesn't clean up after itself, which can cause 
> other tests to fail.  The other big issue is that the {{testSilentAppend()}} 
> method is testing the wrong thing.  An additional minor issue is that none of 
> the tests are careful about making sure the metrics system gets shut down in 
> all cases.
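
As an aside, a minimal sketch of the cleanup pattern the last point asks for 
(not the committed fix; it assumes the tests use the global DefaultMetricsSystem):
{code}
// Hypothetical JUnit cleanup: guarantee the metrics system is shut down even
// when an assertion fails mid-test, so later tests start from a clean state.
@After
public void shutdownMetricsSystem() {
  DefaultMetricsSystem.shutdown();
}
{code}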



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10463) TestRollingFileSystemSinkWithHdfs needs some cleanup

2016-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302982#comment-15302982
 ] 

Hudson commented on HDFS-10463:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #9867 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9867/])
HDFS-10463. TestRollingFileSystemSinkWithHdfs needs some cleanup. (kasha: rev 
55c3e2de3d636482ef2c51bdf88e89a34fc58b32)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/metrics2/sink/TestRollingFileSystemSinkWithHdfs.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/sink/RollingFileSystemSinkTestBase.java


> TestRollingFileSystemSinkWithHdfs needs some cleanup
> 
>
> Key: HDFS-10463
> URL: https://issues.apache.org/jira/browse/HDFS-10463
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Fix For: 2.9.0
>
> Attachments: HDFS-10463.001.patch, HDFS-10463.branch-2.001.patch
>
>
> There are three primary issues.  The most significant is that the 
> {{testFlushThread()}} method doesn't clean up after itself, which can cause 
> other tests to fail.  The other big issue is that the {{testSilentAppend()}} 
> method is testing the wrong thing.  An additional minor issue is that none of 
> the tests are careful about making sure the metrics system gets shut down in 
> all cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10463) TestRollingFileSystemSinkWithHdfs needs some cleanup

2016-05-26 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302960#comment-15302960
 ] 

Karthik Kambatla commented on HDFS-10463:
-

+1. Checking this in. 

> TestRollingFileSystemSinkWithHdfs needs some cleanup
> 
>
> Key: HDFS-10463
> URL: https://issues.apache.org/jira/browse/HDFS-10463
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: HDFS-10463.001.patch, HDFS-10463.branch-2.001.patch
>
>
> There are three primary issues.  The most significant is that the 
> {{testFlushThread()}} method doesn't clean up after itself, which can cause 
> other tests to fail.  The other big issue is that the {{testSilentAppend()}} 
> method is testing the wrong thing.  An additional minor issue is that none of 
> the tests are careful about making sure the metrics system gets shut down in 
> all cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7240) Object store in HDFS

2016-05-26 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302890#comment-15302890
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7240:
---

> ... and added symlink support to FileSystem. The last one was just 
> contributing a new API to the FileSystem class, not implementing the symlink 
> feature itself. You are probably thinking of Eli Collins, who became a 
> committer partly by working on HDFS symlinks.

Thanks Colin for clarifying it.

Correct me if I am wrong -- before [~andrew.wang]'s contribution, symlinks were 
somehow working (based on [~eli]'s work).  After Andrew's work, we had no choice 
but to disable the symlink feature.  In this sense, symlinks became even worse.  
Anyway, Andrew/Eli, any plan to fix symlinks?

Indeed, this JIRA is about the object store.  We should not discuss symlinks too 
much here.  My previous comment was just a suggestion to Andrew.  Let's discuss 
symlinks on the dev mailing list or in another JIRA.  Thanks.

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10463) TestRollingFileSystemSinkWithHdfs needs some cleanup

2016-05-26 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302822#comment-15302822
 ] 

Aaron T. Myers commented on HDFS-10463:
---

Good point! +1, looks good to me.

Thanks, Daniel.

> TestRollingFileSystemSinkWithHdfs needs some cleanup
> 
>
> Key: HDFS-10463
> URL: https://issues.apache.org/jira/browse/HDFS-10463
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: HDFS-10463.001.patch, HDFS-10463.branch-2.001.patch
>
>
> There are three primary issues.  The most significant is that the 
> {{testFlushThread()}} method doesn't clean up after itself, which can cause 
> other tests to fail.  The other big issue is that the {{testSilentAppend()}} 
> method is testing the wrong thing.  An additional minor issue is that none of 
> the tests are careful about making sure the metrics system gets shut down in 
> all cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10431) Refactor and speedup TestAsyncDFSRename

2016-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302782#comment-15302782
 ] 

Hudson commented on HDFS-10431:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #9866 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9866/])
HDFS-10431 Refactor and speedup TestAsyncDFSRename.  Contributed by (szetszwo: 
rev f4b9bcd87c66a39f0c93983431630e9d1b6e36d3)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAsyncDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAsyncDFSRename.java


> Refactor and speedup TestAsyncDFSRename
> ---
>
> Key: HDFS-10431
> URL: https://issues.apache.org/jira/browse/HDFS-10431
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10431-HDFS-9924.000.patch, 
> HDFS-10431-HDFS-9924.001.patch
>
>
> 1. Move irrelevant parts out of TestAsyncDFSRename.
> 2. The limit on max async calls (i.e. ipc.client.async.calls.max) is set and 
> cached in ipc.Client, and Client instances are cached based on SocketFactory. 
> In order to test different cases with various limits, every test (e.g. 
> TestAsyncDFSRename and TestAsyncDFS) creates a separate instance of 
> MiniDFSCluster, and hence of AsyncDistributedFileSystem. This is not efficient 
> in that tests may take a long time to bootstrap MiniDFSClusters; it's even 
> worse if the cluster needs to restart in the middle. This issue proposes 
> refactoring to use a shared instance of AsyncDistributedFileSystem for speedup.
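
For reference, a minimal sketch of the shared-cluster pattern described above 
(field and method names are illustrative, not the committed code):
{code}
// Hypothetical JUnit skeleton: bootstrap one MiniDFSCluster per test class
// instead of one per test, so the expensive startup cost is paid only once.
private static MiniDFSCluster cluster;
private static DistributedFileSystem dfs;

@BeforeClass
public static void setUpCluster() throws IOException {
  Configuration conf = new HdfsConfiguration();
  cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
  cluster.waitActive();
  dfs = cluster.getFileSystem();
}

@AfterClass
public static void tearDownCluster() {
  if (cluster != null) {
    cluster.shutdown();
  }
}
{code}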



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10445) Add timeout tests for async DFS API

2016-05-26 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-10445:
---
   Priority: Minor  (was: Major)
Component/s: (was: fs)
 test

> Add timeout tests for async DFS API
> ---
>
> Key: HDFS-10445
> URL: https://issues.apache.org/jira/browse/HDFS-10445
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
>
> As a result of the HADOOP-13168 commit, async DFS APIs should also be tested in 
> the timeout case (i.e. Future#get(long timeout, TimeUnit unit)).
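
A minimal sketch of such a timeout test ({{adfs}} and the paths are assumed to come from existing test setup):

{code}
Future<Void> f = adfs.rename(new Path("/src"), new Path("/dst"),
    Options.Rename.OVERWRITE);
try {
  // Future#get(long timeout, TimeUnit unit) bounds how long the test waits.
  f.get(10, TimeUnit.SECONDS);
} catch (TimeoutException te) {
  fail("async rename did not complete within the timeout");
}
{code}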



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10445) Add timeout tests for async DFS API

2016-05-26 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze resolved HDFS-10445.

Resolution: Duplicate

After HDFS-10431, all tests have timeouts now.  Resolving as duplicate.
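
(For reference, a per-test timeout in JUnit 4 looks like the sketch below; the value is illustrative.)

{code}
@Test(timeout = 60000)  // fail the test if it runs longer than 60 seconds
public void testAsyncRename() throws Exception {
  // ... async DFS calls under test ...
}
{code}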

> Add timeout tests for async DFS API
> ---
>
> Key: HDFS-10445
> URL: https://issues.apache.org/jira/browse/HDFS-10445
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> As a result of the HADOOP-13168 commit, async DFS APIs should also be tested in 
> the timeout case (i.e. Future#get(long timeout, TimeUnit unit)).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10431) Refactor and speedup TestAsyncDFSRename

2016-05-26 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-10431:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Xiaobing!

> Refactor and speedup TestAsyncDFSRename
> ---
>
> Key: HDFS-10431
> URL: https://issues.apache.org/jira/browse/HDFS-10431
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10431-HDFS-9924.000.patch, 
> HDFS-10431-HDFS-9924.001.patch
>
>
> 1. Move irrelevant parts out of TestAsyncDFSRename.
> 2. The limit on max async calls (i.e. ipc.client.async.calls.max) is set and 
> cached in ipc.Client, and Client instances are cached based on SocketFactory. In 
> order to test different cases under various limits, every test (e.g. 
> TestAsyncDFSRename and TestAsyncDFS) creates a separate instance of 
> MiniDFSCluster and hence of AsyncDistributedFileSystem. This is not 
> efficient in that tests may take a long time to bootstrap MiniDFSClusters; it's 
> even worse if the cluster needs to restart in the middle. This proposes 
> refactoring to use a shared instance of AsyncDistributedFileSystem for speedup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10431) Refactor and speedup TestAsyncDFSRename

2016-05-26 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-10431:
---
Summary: Refactor and speedup TestAsyncDFSRename  (was: Refactor tests of 
Async DFS)

> Refactor and speedup TestAsyncDFSRename
> ---
>
> Key: HDFS-10431
> URL: https://issues.apache.org/jira/browse/HDFS-10431
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-10431-HDFS-9924.000.patch, 
> HDFS-10431-HDFS-9924.001.patch
>
>
> 1. Move irrelevant parts out of TestAsyncDFSRename.
> 2. The limit on max async calls (i.e. ipc.client.async.calls.max) is set and 
> cached in ipc.Client, and Client instances are cached based on SocketFactory. In 
> order to test different cases under various limits, every test (e.g. 
> TestAsyncDFSRename and TestAsyncDFS) creates a separate instance of 
> MiniDFSCluster and hence of AsyncDistributedFileSystem. This is not 
> efficient in that tests may take a long time to bootstrap MiniDFSClusters; it's 
> even worse if the cluster needs to restart in the middle. This proposes 
> refactoring to use a shared instance of AsyncDistributedFileSystem for speedup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10431) Refactor tests of Async DFS

2016-05-26 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-10431:
---
Hadoop Flags: Reviewed

+1 patch looks good.

> Refactor tests of Async DFS
> ---
>
> Key: HDFS-10431
> URL: https://issues.apache.org/jira/browse/HDFS-10431
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-10431-HDFS-9924.000.patch, 
> HDFS-10431-HDFS-9924.001.patch
>
>
> 1. Move irrelevant parts out of TestAsyncDFSRename.
> 2. The limit on max async calls (i.e. ipc.client.async.calls.max) is set and 
> cached in ipc.Client, and Client instances are cached based on SocketFactory. In 
> order to test different cases under various limits, every test (e.g. 
> TestAsyncDFSRename and TestAsyncDFS) creates a separate instance of 
> MiniDFSCluster and hence of AsyncDistributedFileSystem. This is not 
> efficient in that tests may take a long time to bootstrap MiniDFSClusters; it's 
> even worse if the cluster needs to restart in the middle. This proposes 
> refactoring to use a shared instance of AsyncDistributedFileSystem for speedup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10276) HDFS should not expose path info that user has no permission to see.

2016-05-26 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-10276:
-
Description: 
Given you have a file {{/file}}, an existence check for the path 
{{/file/whatever}} will give different responses for different implementations 
of FileSystem.

LocalFileSystem will return false while DistributedFileSystem will throw 
{{org.apache.hadoop.security.AccessControlException: Permission denied: ..., 
access=EXECUTE, ...}}

The above issue is fixed by HDFS-5802. However, HDFS-5802 may expose 
information about a path that user doesn't have permission to see. 

For example, if the user asks for /a/b/c, but does not have permission to list 
/a, we should not complain about /a/b


  was:
Given you have a file {{/file}}, an existence check for the path 
{{/file/whatever}} will give different responses for different implementations 
of FileSystem.

LocalFileSystem will return false while DistributedFileSystem will throw 
{{org.apache.hadoop.security.AccessControlException: Permission denied: ..., 
access=EXECUTE, ...}}

The above issue is fixed by HDFS-5802. However, HDFS-5802 may expose 
information about a path that a user doesn't have permission to see. 

For example, if the user asks for /a/b/c, but does not have permission to list 
/a, we should not complain about /a/b



> HDFS should not expose path info that user has no permission to see.
> 
>
> Key: HDFS-10276
> URL: https://issues.apache.org/jira/browse/HDFS-10276
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kevin Cox
>Assignee: Yuanbo Liu
> Attachments: HDFS-10276.001.patch, HDFS-10276.002.patch, 
> HDFS-10276.003.patch, HDFS-10276.004.patch, HDFS-10276.005.patch, 
> HDFS-10276.006.patch
>
>
> Given you have a file {{/file}}, an existence check for the path 
> {{/file/whatever}} will give different responses for different 
> implementations of FileSystem.
> LocalFileSystem will return false while DistributedFileSystem will throw 
> {{org.apache.hadoop.security.AccessControlException: Permission denied: ..., 
> access=EXECUTE, ...}}
> The above issue is fixed by HDFS-5802. However, HDFS-5802 may expose 
> information about a path that user doesn't have permission to see. 
> For example, if the user asks for /a/b/c, but does not have permission to 
> list /a, we should not complain about /a/b
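
For illustration, a hedged sketch of the client-visible behavior in question (paths and cluster setup are assumed):

{code}
FileSystem fs = FileSystem.get(conf);
// Suppose the caller has no permission to list /a and asks about /a/b/c.
try {
  fs.exists(new Path("/a/b/c"));
} catch (AccessControlException ace) {
  // DistributedFileSystem throws; after this change the message should
  // reference only components the caller may see (/a), not /a/b.
}
{code}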



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10276) HDFS should not expose path info that user has no permission to see.

2016-05-26 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-10276:
-
Summary: HDFS should not expose path info that user has no permission to 
see.  (was: HDFS should not expose path info unaccessible to user when checking 
whether parent is a file)

> HDFS should not expose path info that user has no permission to see.
> 
>
> Key: HDFS-10276
> URL: https://issues.apache.org/jira/browse/HDFS-10276
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kevin Cox
>Assignee: Yuanbo Liu
> Attachments: HDFS-10276.001.patch, HDFS-10276.002.patch, 
> HDFS-10276.003.patch, HDFS-10276.004.patch, HDFS-10276.005.patch, 
> HDFS-10276.006.patch
>
>
> Given you have a file {{/file}}, an existence check for the path 
> {{/file/whatever}} will give different responses for different 
> implementations of FileSystem.
> LocalFileSystem will return false while DistributedFileSystem will throw 
> {{org.apache.hadoop.security.AccessControlException: Permission denied: ..., 
> access=EXECUTE, ...}}
> The above issue is fixed by HDFS-5802. However, HDFS-5802 may expose 
> information about a path that a user doesn't have permission to see. 
> For example, if the user asks for /a/b/c, but does not have permission to 
> list /a, we should not complain about /a/b



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10276) HDFS should not expose path info unaccessible to user when checking whether parent is a file

2016-05-26 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-10276:
-
Description: 
Given you have a file {{/file}}, an existence check for the path 
{{/file/whatever}} will give different responses for different implementations 
of FileSystem.

LocalFileSystem will return false while DistributedFileSystem will throw 
{{org.apache.hadoop.security.AccessControlException: Permission denied: ..., 
access=EXECUTE, ...}}

The above issue is fixed by HDFS-5802. However, HDFS-5802 may expose 
information about a path that a user doesn't have permission to see. 

For example, if the user asks for /a/b/c, but does not have permission to list 
/a, we should not complain about /a/b


  was:
Given you have a file {{/file}}, an existence check for the path 
{{/file/whatever}} will give different responses for different implementations 
of FileSystem.

LocalFileSystem will return false while DistributedFileSystem will throw 
{{org.apache.hadoop.security.AccessControlException: Permission denied: ..., 
access=EXECUTE, ...}}


> HDFS should not expose path info unaccessible to user when checking whether 
> parent is a file
> 
>
> Key: HDFS-10276
> URL: https://issues.apache.org/jira/browse/HDFS-10276
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kevin Cox
>Assignee: Yuanbo Liu
> Attachments: HDFS-10276.001.patch, HDFS-10276.002.patch, 
> HDFS-10276.003.patch, HDFS-10276.004.patch, HDFS-10276.005.patch, 
> HDFS-10276.006.patch
>
>
> Given you have a file {{/file}}, an existence check for the path 
> {{/file/whatever}} will give different responses for different 
> implementations of FileSystem.
> LocalFileSystem will return false while DistributedFileSystem will throw 
> {{org.apache.hadoop.security.AccessControlException: Permission denied: ..., 
> access=EXECUTE, ...}}
> The above issue is fixed by HDFS-5802. However, HDFS-5802 may expose 
> information about a path that a user doesn't have permission to see. 
> For example, if the user asks for /a/b/c, but does not have permission to 
> list /a, we should not complain about /a/b



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7240) Object store in HDFS

2016-05-26 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302604#comment-15302604
 ] 

Jitendra Nath Pandey commented on HDFS-7240:


bq. Why an object store as part of HDFS?
It is one of the goals to have both hdfs and ozone available in the same 
deployment. That means the same datanodes serve both ozone and hdfs data. 
Therefore, having ozone as a separate subproject in hadoop is ok as long as 
they can share the storage layer. The datanode changes would still be needed in 
hdfs.
   There is another proposal in HDFS-10419 that moves HDFS data into storage 
containers. I think that effort will need a new datanode implementation that 
shares the storage container layer with ozone. 

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10276) HDFS should not expose path info unaccessible to user when checking whether parent is a file

2016-05-26 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-10276:
-
Summary: HDFS should not expose path info unaccessible to user when 
checking whether parent is a file  (was: HDFS throws AccessControlException 
when checking for the existence of /a/b when /a is a file)

> HDFS should not expose path info unaccessible to user when checking whether 
> parent is a file
> 
>
> Key: HDFS-10276
> URL: https://issues.apache.org/jira/browse/HDFS-10276
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kevin Cox
>Assignee: Yuanbo Liu
> Attachments: HDFS-10276.001.patch, HDFS-10276.002.patch, 
> HDFS-10276.003.patch, HDFS-10276.004.patch, HDFS-10276.005.patch, 
> HDFS-10276.006.patch
>
>
> Given you have a file {{/file}}, an existence check for the path 
> {{/file/whatever}} will give different responses for different 
> implementations of FileSystem.
> LocalFileSystem will return false while DistributedFileSystem will throw 
> {{org.apache.hadoop.security.AccessControlException: Permission denied: ..., 
> access=EXECUTE, ...}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10276) HDFS throws AccessControlException when checking for the existence of /a/b when /a is a file

2016-05-26 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302590#comment-15302590
 ] 

Yongjun Zhang commented on HDFS-10276:
--

Thanks [~yuanbo] for the new rev. I'm +1 on rev 6 and will commit by tomorrow, 
unless other folks have further comments.



> HDFS throws AccessControlException when checking for the existence of /a/b 
> when /a is a file
> 
>
> Key: HDFS-10276
> URL: https://issues.apache.org/jira/browse/HDFS-10276
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kevin Cox
>Assignee: Yuanbo Liu
> Attachments: HDFS-10276.001.patch, HDFS-10276.002.patch, 
> HDFS-10276.003.patch, HDFS-10276.004.patch, HDFS-10276.005.patch, 
> HDFS-10276.006.patch
>
>
> Given you have a file {{/file}}, an existence check for the path 
> {{/file/whatever}} will give different responses for different 
> implementations of FileSystem.
> LocalFileSystem will return false while DistributedFileSystem will throw 
> {{org.apache.hadoop.security.AccessControlException: Permission denied: ..., 
> access=EXECUTE, ...}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10236) Erasure Coding: Rename replication-based names in BlockManager to more generic [part-3]

2016-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302563#comment-15302563
 ] 

Hadoop QA commented on HDFS-10236:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: patch generated 0 
new + 428 unchanged - 1 fixed = 428 total (was 429) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 13s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 14s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806302/HDFS-10236-02.patch |
| JIRA Issue | HDFS-10236 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d285ea0d8a96 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / fed9bf0 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15581/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15581/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15581/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15581/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Erasure Coding: Rename replication-based names in 

[jira] [Updated] (HDFS-9547) DiskBalancer : Add user documentation

2016-05-26 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-9547:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

[~eddyxu] Thanks for the review. I have cleaned up the whitespace issues 
while committing this to the feature branch.

> DiskBalancer : Add user documentation
> -
>
> Key: HDFS-9547
> URL: https://issues.apache.org/jira/browse/HDFS-9547
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-9547-HDFS-1312.001.patch, 
> HDFS-9547-HDFS-1312.002.patch
>
>
> Write diskbalancer.md since this is a new tool and explain the usage with 
> examples.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-05-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302426#comment-15302426
 ] 

Colin Patrick McCabe commented on HDFS-10301:
-

I never said that patch 004 introduced incompatible changes.  I just argued 
that it was a bigger change than was necessary to fix the problem.  All other 
things being equal, we would prefer a smaller change to a bigger one.  The only 
argument you have given against my change is that it doesn't fix the problem in 
the case where full block reports are interleaved.  But this is an extremely, 
extremely rare case, to the point where nobody else has even seen this problem 
in their cluster.

I still think that patch 005 is an easier way to fix the problem.  It's 
basically a simple bugfix to my original patch.  However, if you want to do 
something more complex, I will review it.  But I don't want to add any 
additional RPCs.  We already have problems with NameNode performance and we 
should not be adding more RPCs when it's not needed.  We can include the 
storage information in the first RPC of the block report as an optional field.

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Colin Patrick McCabe
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.01.patch, 
> HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out sending a block report. Then it 
> sends the block report again. The NameNode, while processing these two reports 
> at the same time, can interleave processing storages from different reports. 
> This screws up the blockReportId field, which makes the NameNode think that some 
> storages are zombie. Replicas from zombie storages are immediately removed, 
> causing missing blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10431) Refactor tests of Async DFS

2016-05-26 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302365#comment-15302365
 ] 

Xiaobing Zhou commented on HDFS-10431:
--

The test failure is not related to this v001 patch. It passed locally in trunk.

> Refactor tests of Async DFS
> ---
>
> Key: HDFS-10431
> URL: https://issues.apache.org/jira/browse/HDFS-10431
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-10431-HDFS-9924.000.patch, 
> HDFS-10431-HDFS-9924.001.patch
>
>
> 1. Move irrelevant parts out of TestAsyncDFSRename
> 2. Limit of max async calls(i.e. ipc.client.async.calls.max) is set and 
> cached in ipc.Client. Client instances are cached based on SocketFactory. In 
> order to test different cases in various limits, every test (e.g. 
> TestAsyncDFSRename and TestAsyncDFS) creates separate instance of 
> MiniDFSCluster and that of AsyncDistributedFileSystem hence. This is not 
> efficient in that tests may take long time to bootstrap MiniDFSClusters. It's 
> even worse if cluster needs to restart in the middle. This proposes to do 
> refactoring to use shared instance of AsyncDistributedFileSystem for speedup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7240) Object store in HDFS

2016-05-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302344#comment-15302344
 ] 

stack commented on HDFS-7240:
-

bq. It is unfair to say that you are being rebuffed.

Can we please move to discussion of the design. Back and forth on what is 
'fair', 'tone', and how folks got commit bits is corrosive and derails what is 
important here; i.e. landing this big one.

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7240) Object store in HDFS

2016-05-26 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302329#comment-15302329
 ] 

Arpit Agarwal commented on HDFS-7240:
-

bq. So far though it feels like I'm being rebuffed.
As you pointed out, your and Colin's feedback from our last discussion has 
influenced the design (and Anu rightly credited you for that during the 
ApacheCon talk too). Also I recall Anu spending over an hour with you in person 
at ApacheCon to go over your comments. It is unfair to say that you are being 
rebuffed. I again request you avoid such remarks and share your technical 
feedback/ideas with us to help identify gaps in our thinking. We'd be happy to 
schedule a webex. Many of us working on Ozone are remote but perhaps we can get 
together at the Hadoop Summit in June.

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10366) libhdfs++: Add SASL authentication

2016-05-26 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-10366:
--
Status: Patch Available  (was: Open)

> libhdfs++: Add SASL authentication
> --
>
> Key: HDFS-10366
> URL: https://issues.apache.org/jira/browse/HDFS-10366
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-10366.HDFS-8707.000.patch, 
> HDFS-10366.HDFS-8707.001.patch
>
>
> Enable communication with HDFS clusters that have KERBEROS authentication 
> enabled; use tokens from NN when communicating with DN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10366) libhdfs++: Add SASL authentication

2016-05-26 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-10366:
--
Attachment: HDFS-10366.HDFS-8707.001.patch

New patch:
bq. Initialize authentication with kDefaultAuthentication?
Done

bq. I worry that the explicit assignment of kAuthenticationFailed could 
possibly collide with std::errc values on some platforms
Moved all internal error codes to be > 255.

bq. We also now have two base64 encode functions
Removed the gsasl-specific one

> libhdfs++: Add SASL authentication
> --
>
> Key: HDFS-10366
> URL: https://issues.apache.org/jira/browse/HDFS-10366
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-10366.HDFS-8707.000.patch, 
> HDFS-10366.HDFS-8707.001.patch
>
>
> Enable communication with HDFS clusters that have KERBEROS authentication 
> enabled; use tokens from NN when communicating with DN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6937) Another issue in handling checksum errors in write pipeline

2016-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302230#comment-15302230
 ] 

Hadoop QA commented on HDFS-6937:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 30s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 33 new + 
529 unchanged - 4 fixed = 562 total (was 533) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 27 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 9s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 30s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestPipelineRecovery |
|   | hadoop.hdfs.TestBlocksScheduledCounter |
|   | hadoop.hdfs.TestDatanodeDeath |
|   | hadoop.hdfs.tools.TestDebugAdmin |
|   | hadoop.hdfs.server.datanode.TestDiskError |
|   | hadoop.hdfs.TestDFSClientExcludedNodes |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.TestAbandonBlock |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806370/HDFS-6937.002.patch |
| JIRA Issue | HDFS-6937 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fa1b66a08b96 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 77202fa |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15578/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15578/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 

[jira] [Commented] (HDFS-7240) Object store in HDFS

2016-05-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302191#comment-15302191
 ] 

stack commented on HDFS-7240:
-

bq. Now, can people stop being territorial or making any form of criticism of 
each other. It is fundamentally against the ASF philosophy of collaborative, 
community development, doesn't help long term collaboration and makes the 
entire project look bad. Thanks.

Amen.

Thanks for posting the design, [~anu]

bq. Datanodes provide a shared generic storage service called the container 
layer.

Is this the HDFS Datanode? Would we add block manager functionality to the Datanode? 
(Did we answer [~zhz]'s question, "Why an object store as part of 
HDFS"?)

Thanks


> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10449) TestRollingFileSystemSinkWithHdfs#testFailedClose() fails on branch-2

2016-05-26 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302169#comment-15302169
 ] 

Daniel Templeton commented on HDFS-10449:
-

I accidentally debugged this issue.  You need to replace:

{code}
  fail("No exception was generated while stopping sink "
  + "even though HDFS was unavailable");
{code}

with

{code}
  assertTrue("No exception was generated while stopping sink "
  + "even though HDFS was unavailable", MockSink.errored);
{code}
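
For context, a rough sketch of where that assertion sits (the setup helpers here are assumptions, not the actual test code):

{code}
@Test
public void testFailedClose() throws Exception {
  MetricsSystem ms = initMetricsSystem();  // assumed: registers a MockSink
  shutdownHdfs();                          // assumed: makes the sink's target unavailable
  MockSink.errored = false;
  try {
    ms.stop();  // on branch-2 the close failure may not propagate as an exception
    assertTrue("No exception was generated while stopping sink "
        + "even though HDFS was unavailable", MockSink.errored);
  } finally {
    ms.shutdown();
  }
}
{code}

The swap checks the sink's recorded error flag instead of requiring the exception to escape {{stop()}}.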

> TestRollingFileSystemSinkWithHdfs#testFailedClose() fails on branch-2
> -
>
> Key: HDFS-10449
> URL: https://issues.apache.org/jira/browse/HDFS-10449
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
> Environment: jenkins
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>
> {noformat}
> Running org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.263 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs
> testFailedClose(org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs)
>   Time elapsed: 8.729 sec  <<< FAILURE!
> java.lang.AssertionError: No exception was generated while stopping sink even 
> though HDFS was unavailable
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs.testFailedClose(TestRollingFileSystemSinkWithHdfs.java:187)
> {noformat}
> This passes fine on trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10463) TestRollingFileSystemSinkWithHdfs needs some cleanup

2016-05-26 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302165#comment-15302165
 ] 

Daniel Templeton commented on HDFS-10463:
-

That issue is preexisting: HDFS-10449.

> TestRollingFileSystemSinkWithHdfs needs some cleanup
> 
>
> Key: HDFS-10463
> URL: https://issues.apache.org/jira/browse/HDFS-10463
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: HDFS-10463.001.patch, HDFS-10463.branch-2.001.patch
>
>
> There are three primary issues.  The most significant is that the 
> {{testFlushThread()}} method doesn't clean up after itself, which can cause 
> other tests to fail.  The other big issue is that the {{testSilentAppend()}} 
> method is testing the wrong thing.  An additional minor issue is that none of 
> the tests are careful about making sure the metrics system gets shut down in 
> all cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6937) Another issue in handling checksum errors in write pipeline

2016-05-26 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-6937:

Attachment: HDFS-6937.002.patch

rev 002 for a better way to get the DNs in a pipeline.


> Another issue in handling checksum errors in write pipeline
> ---
>
> Key: HDFS-6937
> URL: https://issues.apache.org/jira/browse/HDFS-6937
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs-client
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-6937.001.patch, HDFS-6937.002.patch
>
>
> Given a write pipeline:
> DN1 -> DN2 -> DN3
> DN3 detected a checksum error and terminated; DN2 truncates its replica to the 
> ACKed size. Then a new pipeline is attempted as
> DN1 -> DN2 -> DN4
> DN4 detects a checksum error again. Later, when DN4 was replaced with DN5 (and so 
> on), it failed for the same reason. This led to the observation that DN2's 
> data is corrupted. 
> Found that the software currently truncates DN2's replica to the ACKed size 
> after DN3 terminates, but it doesn't check the correctness of the data 
> already written to disk.
> So intuitively, a solution would be: when the downstream DN (DN3 here) finds a 
> checksum error, propagate this info back to the upstream DN (DN2 here); DN2 
> checks the correctness of the data already written to disk and truncates the 
> replica to MIN(correctDataSize, ACKedSize).
> This issue is similar to what was reported in HDFS-3875, and the 
> truncation at DN2 was actually introduced as part of the HDFS-3875 solution. 
> Filing this jira for the issue reported here. HDFS-3875 was filed by 
> [~tlipcon],
> and I found he proposed something similar there.
> {quote}
> if the tail node in the pipeline detects a checksum error, then it returns a 
> special error code back up the pipeline indicating this (rather than just 
> disconnecting)
> if a non-tail node receives this error code, then it immediately scans its 
> own block on disk (from the beginning up through the last acked length). If 
> it detects a corruption on its local copy, then it should assume that it is 
> the faulty one, rather than the downstream neighbor. If it detects no 
> corruption, then the faulty node is either the downstream mirror or the 
> network link between the two, and the current behavior is reasonable.
> {quote}
> Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10366) libhdfs++: Add SASL authentication

2016-05-26 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302077#comment-15302077
 ] 

James Clampffer commented on HDFS-10366:


Looks good to me other than a trailing whitespace error, +1 pending a passing 
CI run.

Minor nits in case you want to fix them now. I'm guessing a lot of this code 
will get a second pass in HDFS-10450, so they can also be pushed there if they 
seem worth doing.

{code}
Options::Options() : rpc_timeout(kDefaultRpcTimeout), 
max_rpc_retries(kDefaultMaxRpcRetries),
 rpc_retry_delay_ms(kDefaultRpcRetryDelayMs),
 host_exclusion_duration(kDefaultHostExclusionDuration),
 defaultFS()
 defaultFS(),
 authentication(kSimple)
{code}
Initialize authentication with kDefaultAuthentication?

{code}
  enum Code {
    kOk = 0,
    kInvalidArgument = static_cast<int>(std::errc::invalid_argument),
    kResourceUnavailable =
        static_cast<int>(std::errc::resource_unavailable_try_again),
    kUnimplemented = static_cast<int>(std::errc::function_not_supported),
    kOperationCanceled = static_cast<int>(std::errc::operation_canceled),
    kPermissionDenied = static_cast<int>(std::errc::permission_denied),
    kAuthenticationFailed = 254,
{code}
I worry that the explicit assignment of kAuthenticationFailed could possibly 
collide with std::errc values on some platforms.  On the other hand, implicit 
assignment might make it take on the value of an error code we aren't using at 
the moment and lead to confusion.  I'm not sure there's a good portable way to 
prevent this; maybe cmake can generate and run a little program to check?

We also now have two base64 encode functions; not really a big deal, but having 
a single one would be nicer.

> libhdfs++: Add SASL authentication
> --
>
> Key: HDFS-10366
> URL: https://issues.apache.org/jira/browse/HDFS-10366
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-10366.HDFS-8707.000.patch
>
>
> Enable communication with HDFS clusters that have KERBEROS authentication 
> enabled; use tokens from NN when communicating with DN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6937) Another issue in handling checksum errors in write pipeline

2016-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301992#comment-15301992
 ] 

Hadoop QA commented on HDFS-6937:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 33s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 33 new + 
529 unchanged - 4 fixed = 562 total (was 533) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 25 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 9s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
36s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 56s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDatanodeDeath |
|   | hadoop.hdfs.server.datanode.TestDiskError |
|   | hadoop.hdfs.TestDFSClientExcludedNodes |
|   | hadoop.hdfs.TestPipelineRecovery |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDecommissionWithStriped |
|   | hadoop.hdfs.TestAsyncDFSRename |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.tools.TestDebugAdmin |
|   | hadoop.hdfs.TestFileAppend |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806312/HDFS-6937.001.patch |
| JIRA Issue | HDFS-6937 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cd955b145953 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 77202fa |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15576/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15576/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 

[jira] [Commented] (HDFS-7240) Object store in HDFS

2016-05-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301781#comment-15301781
 ] 

Steve Loughran commented on HDFS-7240:
--

bq. For example, in (all of) Hadoop's s3 filesystem implementations, listStatus 
uses this quick listing of keys between A and B. When someone does "listStatus 
/a/b/c", we can ask s3 for all the keys between /a/b/c/ and /a/b/c0 (0 is the 
ASCII value right after slash). Of course, s3 does not really have directories, 
but we can treat the keys in this range as being in the directory /a/b/c for 
the purposes of s3a or s3n. If we just had hash partitioning, this kind of 
operation would be O(N^2) where N is the number of keys. It would just be 
infeasible for any large bucket.

FWIW I'm looking at bulk recursive directory listing in s3a for listStatus, 
moving the cost of listing from a very slow O(all-directories) to 
O(all-files/1000). It would be nice to retain that, as otherwise dir listing is a very 
expensive operation, which kills split calculation.
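
As a rough illustration of the kind of flat listing involved (AWS SDK v1 style; the bucket name and client setup are assumptions):

{code}
AmazonS3 s3 = new AmazonS3Client();  // assumed credentials/config
// Emulate listStatus("/a/b/c"): fetch keys under the prefix in pages of up
// to 1000, treating the shared prefix as a directory even though S3 has none.
ListObjectsRequest req = new ListObjectsRequest()
    .withBucketName("example-bucket")
    .withPrefix("a/b/c/")
    .withDelimiter("/");
ObjectListing listing = s3.listObjects(req);
for (S3ObjectSummary summary : listing.getObjectSummaries()) {
  System.out.println(summary.getKey() + " " + summary.getSize());
}
{code}

Dropping the delimiter turns the same call into the bulk recursive listing described above, one round trip per 1000 keys.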

Now, can people stop being territorial or making any form of criticism of each 
other. It is fundamentally against the ASF philosophy of collaborative, 
community development, doesn't help long term collaboration and makes the 
entire project look bad. Thanks.

> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, 
> ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6937) Another issue in handling checksum errors in write pipeline

2016-05-26 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301733#comment-15301733
 ] 

Yongjun Zhang commented on HDFS-6937:
-

Attached a draft patch that reports

{code}
2016-05-25 11:35:20,639 [DataStreamer for file 
/tmp/testClientReportBadBlock/CorruptTwoOutOfThreeReplicas1 block 
BP-198184347-127.0.0.1-1464201303610:blk_1073741825_1001] WARN  
hdfs.DataStreamer (DataStreamer.java:handleBadDatanode(1400)) - Error Recovery 
for BP-198184347-127.0.0.1-1464201303610:blk_1073741825_1001 in pipeline 
[DatanodeInfoWithStorage[127.0.0.1:54392,DS-d6b01513-ac11-4fdf-99a1-fbb111d0f0c5,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:4,DS-67174cd1-1f9c-46bc-9dea-8ece7190308d,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:55877,DS-8fe30be4-1244-4059-a567-bc156d49d01a,DISK]]:
 datanode 
0(DatanodeInfoWithStorage[127.0.0.1:54392,DS-d6b01513-ac11-4fdf-99a1-fbb111d0f0c5,DISK])
 is bad.
{code}
where 127.0.0.1:54392 is DN3 in the reported example.

With the fix, we can see

{code}
2016-05-25 14:15:29,831 [DataXceiver for client 
DFSClient_NONMAPREDUCE_1623233781_1 at /127.0.0.1:59590 [Receiving block 
BP-1085730607-127.0.0.1-1464210923866:blk_1073741825_1001]] WARN  
datanode.DataNode (DataXceiver.java:determineFirstBadLink(494)) - Datanode 2 
got response for connect ack  from downstream datanode with firstbadlink as 
127.0.0.1:36300, however, the replica on the current Datanode host4:38574 is 
found to be corrupted, set the firstBadLink to this DataNode.
{code}
where host4:38574 is DN2 in the example. Thus it is reported as bad in the
Error Recovery message:

{code}
2016-05-25 14:15:29,833 [DataStreamer for file 
/tmp/testClientReportBadBlock/CorruptTwoOutOfThreeReplicas1 block 
BP-1085730607-127.0.0.1-1464210923866:blk_1073741825_1001] WARN  
hdfs.DataStreamer (DataStreamer.java:handleBadDatanode(1400)) - Error Recovery 
for BP-1085730607-127.0.0.1-1464210923866:blk_1073741825_1001 in pipeline 
[DatanodeInfoWithStorage[127.0.0.1:38574,DS-a743f66a-3379-4a1e-82df-5f6f26815df8,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:38267,DS-32e47236-c29f-435b-995b-f6f4f2a86acc,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:36300,DS-906d823d-0de5-40f1-9409-4c8c6d4edd08,DISK]]:
 datanode 
0(DatanodeInfoWithStorage[127.0.0.1:38574,DS-a743f66a-3379-4a1e-82df-5f6f26815df8,DISK])
 is bad.
{code}

I was able to see the fix consistently succeed in one debug version, and fail
when the fix was removed. However, I have observed some intermittency in the
unit test environment that is not yet understood. I'd like to post the patch
now anyway to get things rolling.

Hi [~cmccabe],

Would you please take a look?

Thanks a lot.




> Another issue in handling checksum errors in write pipeline
> ---
>
> Key: HDFS-6937
> URL: https://issues.apache.org/jira/browse/HDFS-6937
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs-client
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-6937.001.patch
>
>
> Given a write pipeline:
> DN1 -> DN2 -> DN3
> DN3 detected a checksum error and terminated, so DN2 truncated its replica
> to the ACKed size. Then a new pipeline is attempted as
> DN1 -> DN2 -> DN4
> DN4 detects a checksum error again. Later, when DN4 was replaced with DN5
> (and so on), it failed for the same reason. This led to the observation that
> DN2's data is corrupted.
> Found that the software currently truncates DN2's replica to the ACKed size
> after DN3 terminates, but it doesn't check the correctness of the data
> already written to disk.
> So intuitively, a solution would be: when the downstream DN (DN3 here) finds
> a checksum error, propagate this info back to the upstream DN (DN2 here);
> DN2 checks the correctness of the data already written to disk, and
> truncates the replica to MIN(correctDataSize, ACKedSize).
> Found this issue is similar to what was reported by HDFS-3875, and the
> truncation at DN2 was actually introduced as part of the HDFS-3875 solution.
> Filing this jira for the issue reported here. HDFS-3875 was filed by
> [~tlipcon], who proposed something similar there:
> {quote}
> if the tail node in the pipeline detects a checksum error, then it returns a 
> special error code back up the pipeline indicating this (rather than just 
> disconnecting)
> if a non-tail node receives this error code, then it immediately scans its 
> own block on disk (from the beginning up through the last acked length). If 
> it detects a corruption on its local copy, then it should assume that it is 
> the faulty one, rather than the downstream neighbor. If it detects no 
> corruption, then the faulty node is either the downstream mirror or the 
> network link between the two, and the current behavior is reasonable.
> {quote}
> 

[jira] [Updated] (HDFS-6937) Another issue in handling checksum errors in write pipeline

2016-05-26 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-6937:

Attachment: HDFS-6937.001.patch

> Another issue in handling checksum errors in write pipeline
> ---
>
> Key: HDFS-6937
> URL: https://issues.apache.org/jira/browse/HDFS-6937
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs-client
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-6937.001.patch
>
>
> Given a write pipeline:
> DN1 -> DN2 -> DN3
> DN3 detected a checksum error and terminated, so DN2 truncated its replica
> to the ACKed size. Then a new pipeline is attempted as
> DN1 -> DN2 -> DN4
> DN4 detects a checksum error again. Later, when DN4 was replaced with DN5
> (and so on), it failed for the same reason. This led to the observation that
> DN2's data is corrupted.
> Found that the software currently truncates DN2's replica to the ACKed size
> after DN3 terminates, but it doesn't check the correctness of the data
> already written to disk.
> So intuitively, a solution would be: when the downstream DN (DN3 here) finds
> a checksum error, propagate this info back to the upstream DN (DN2 here);
> DN2 checks the correctness of the data already written to disk, and
> truncates the replica to MIN(correctDataSize, ACKedSize).
> Found this issue is similar to what was reported by HDFS-3875, and the
> truncation at DN2 was actually introduced as part of the HDFS-3875 solution.
> Filing this jira for the issue reported here. HDFS-3875 was filed by
> [~tlipcon], who proposed something similar there:
> {quote}
> if the tail node in the pipeline detects a checksum error, then it returns a 
> special error code back up the pipeline indicating this (rather than just 
> disconnecting)
> if a non-tail node receives this error code, then it immediately scans its 
> own block on disk (from the beginning up through the last acked length). If 
> it detects a corruption on its local copy, then it should assume that it is 
> the faulty one, rather than the downstream neighbor. If it detects no 
> corruption, then the faulty node is either the downstream mirror or the 
> network link between the two, and the current behavior is reasonable.
> {quote}
> Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6937) Another issue in handling checksum errors in write pipeline

2016-05-26 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-6937:

Status: Patch Available  (was: Open)

> Another issue in handling checksum errors in write pipeline
> ---
>
> Key: HDFS-6937
> URL: https://issues.apache.org/jira/browse/HDFS-6937
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs-client
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-6937.001.patch
>
>
> Given a write pipeline:
> DN1 -> DN2 -> DN3
> DN3 detected a checksum error and terminated, so DN2 truncated its replica
> to the ACKed size. Then a new pipeline is attempted as
> DN1 -> DN2 -> DN4
> DN4 detects a checksum error again. Later, when DN4 was replaced with DN5
> (and so on), it failed for the same reason. This led to the observation that
> DN2's data is corrupted.
> Found that the software currently truncates DN2's replica to the ACKed size
> after DN3 terminates, but it doesn't check the correctness of the data
> already written to disk.
> So intuitively, a solution would be: when the downstream DN (DN3 here) finds
> a checksum error, propagate this info back to the upstream DN (DN2 here);
> DN2 checks the correctness of the data already written to disk, and
> truncates the replica to MIN(correctDataSize, ACKedSize).
> Found this issue is similar to what was reported by HDFS-3875, and the
> truncation at DN2 was actually introduced as part of the HDFS-3875 solution.
> Filing this jira for the issue reported here. HDFS-3875 was filed by
> [~tlipcon], who proposed something similar there:
> {quote}
> if the tail node in the pipeline detects a checksum error, then it returns a 
> special error code back up the pipeline indicating this (rather than just 
> disconnecting)
> if a non-tail node receives this error code, then it immediately scans its 
> own block on disk (from the beginning up through the last acked length). If 
> it detects a corruption on its local copy, then it should assume that it is 
> the faulty one, rather than the downstream neighbor. If it detects no 
> corruption, then the faulty node is either the downstream mirror or the 
> network link between the two, and the current behavior is reasonable.
> {quote}
> Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-6937) Another issue in handling checksum errors in write pipeline

2016-05-26 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang reassigned HDFS-6937:
---

Assignee: Yongjun Zhang  (was: Colin Patrick McCabe)

> Another issue in handling checksum errors in write pipeline
> ---
>
> Key: HDFS-6937
> URL: https://issues.apache.org/jira/browse/HDFS-6937
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs-client
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>
> Given a write pipeline:
> DN1 -> DN2 -> DN3
> DN3 detected a checksum error and terminated, so DN2 truncated its replica
> to the ACKed size. Then a new pipeline is attempted as
> DN1 -> DN2 -> DN4
> DN4 detects a checksum error again. Later, when DN4 was replaced with DN5
> (and so on), it failed for the same reason. This led to the observation that
> DN2's data is corrupted.
> Found that the software currently truncates DN2's replica to the ACKed size
> after DN3 terminates, but it doesn't check the correctness of the data
> already written to disk.
> So intuitively, a solution would be: when the downstream DN (DN3 here) finds
> a checksum error, propagate this info back to the upstream DN (DN2 here);
> DN2 checks the correctness of the data already written to disk, and
> truncates the replica to MIN(correctDataSize, ACKedSize).
> Found this issue is similar to what was reported by HDFS-3875, and the
> truncation at DN2 was actually introduced as part of the HDFS-3875 solution.
> Filing this jira for the issue reported here. HDFS-3875 was filed by
> [~tlipcon], who proposed something similar there:
> {quote}
> if the tail node in the pipeline detects a checksum error, then it returns a 
> special error code back up the pipeline indicating this (rather than just 
> disconnecting)
> if a non-tail node receives this error code, then it immediately scans its 
> own block on disk (from the beginning up through the last acked length). If 
> it detects a corruption on its local copy, then it should assume that it is 
> the faulty one, rather than the downstream neighbor. If it detects no 
> corruption, then the faulty node is either the downstream mirror or the 
> network link between the two, and the current behavior is reasonable.
> {quote}
> Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6937) Another issue in handling checksum errors in write pipeline

2016-05-26 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301723#comment-15301723
 ] 

Yongjun Zhang commented on HDFS-6937:
-

I'm proposing a simplified solution here:

Instead of 
{quote}
So intuitively, a solution would be: when the downstream DN (DN3 here) finds
a checksum error, propagate this info back to the upstream DN (DN2 here);
DN2 checks the correctness of the data already written to disk, and truncates
the replica to MIN(correctDataSize, ACKedSize).
{quote}

what we can do is: when DN2 finds that DN3 has failed, DN2 simply scans its
own replica to check for possible corruption; if corruption is found, DN2
reports itself as the firstBadLink.

Thanks Colin for the earlier discussion.
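
For illustration, a minimal sketch of that check (hypothetical names
throughout: {{ReplicaVerifier}}, {{determineFirstBadLink}}, and their
signatures are assumptions for this sketch, not the actual DataNode code):

{code}
// Hypothetical sketch only -- ReplicaVerifier and determineFirstBadLink are
// illustrative names, not the real DataNode API.
interface ReplicaVerifier {
  /** Re-read the local replica over [0, len) and recompute its checksums. */
  boolean verifyChecksums(long len);
  /** host:port of this datanode, as it appears in pipeline acks. */
  String localAddress();
}

class FirstBadLinkSketch {
  static String determineFirstBadLink(String downstreamBadLink,
                                      ReplicaVerifier local,
                                      long lastAckedLength) {
    // The downstream mirror reported a checksum failure. Before blaming it,
    // re-verify the local on-disk replica up to the last acked offset.
    if (!local.verifyChecksums(lastAckedLength)) {
      // The local copy is corrupt, so the fault is here (or further
      // upstream): report this node so pipeline recovery replaces it.
      return local.localAddress();
    }
    // Local data is intact; the fault is the downstream node or the link.
    return downstreamBadLink;
  }
}
{code}

The key point is that the upstream node only re-reads the range it has
already acked, so the check is bounded, and the blame lands on the node whose
on-disk data actually fails verification.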


> Another issue in handling checksum errors in write pipeline
> ---
>
> Key: HDFS-6937
> URL: https://issues.apache.org/jira/browse/HDFS-6937
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs-client
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Colin Patrick McCabe
>
> Given a write pipeline:
> DN1 -> DN2 -> DN3
> DN3 detected a checksum error and terminated, so DN2 truncated its replica
> to the ACKed size. Then a new pipeline is attempted as
> DN1 -> DN2 -> DN4
> DN4 detects a checksum error again. Later, when DN4 was replaced with DN5
> (and so on), it failed for the same reason. This led to the observation that
> DN2's data is corrupted.
> Found that the software currently truncates DN2's replica to the ACKed size
> after DN3 terminates, but it doesn't check the correctness of the data
> already written to disk.
> So intuitively, a solution would be: when the downstream DN (DN3 here) finds
> a checksum error, propagate this info back to the upstream DN (DN2 here);
> DN2 checks the correctness of the data already written to disk, and
> truncates the replica to MIN(correctDataSize, ACKedSize).
> Found this issue is similar to what was reported by HDFS-3875, and the
> truncation at DN2 was actually introduced as part of the HDFS-3875 solution.
> Filing this jira for the issue reported here. HDFS-3875 was filed by
> [~tlipcon], who proposed something similar there:
> {quote}
> if the tail node in the pipeline detects a checksum error, then it returns a 
> special error code back up the pipeline indicating this (rather than just 
> disconnecting)
> if a non-tail node receives this error code, then it immediately scans its 
> own block on disk (from the beginning up through the last acked length). If 
> it detects a corruption on its local copy, then it should assume that it is 
> the faulty one, rather than the downstream neighbor. If it detects no 
> corruption, then the faulty node is either the downstream mirror or the 
> network link between the two, and the current behavior is reasonable.
> {quote}
> Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10236) Erasure Coding: Rename replication-based names in BlockManager to more generic [part-3]

2016-05-26 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301614#comment-15301614
 ] 

Rakesh R commented on HDFS-10236:
-

Thanks [~zhz] for the review comments. Attached a new patch addressing them.

> Erasure Coding: Rename replication-based names in BlockManager to more 
> generic [part-3]
> ---
>
> Key: HDFS-10236
> URL: https://issues.apache.org/jira/browse/HDFS-10236
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10236-00.patch, HDFS-10236-01.patch, 
> HDFS-10236-02.patch
>
>
> The idea of this jira is to rename the following entity in BlockManager:
> {{getExpectedReplicaNum}} to {{getExpectedRedundancyNum}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10236) Erasure Coding: Rename replication-based names in BlockManager to more generic [part-3]

2016-05-26 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-10236:

Attachment: HDFS-10236-02.patch

> Erasure Coding: Rename replication-based names in BlockManager to more 
> generic [part-3]
> ---
>
> Key: HDFS-10236
> URL: https://issues.apache.org/jira/browse/HDFS-10236
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10236-00.patch, HDFS-10236-01.patch, 
> HDFS-10236-02.patch
>
>
> The idea of this jira is to rename the following entity in BlockManager:
> {{getExpectedReplicaNum}} to {{getExpectedRedundancyNum}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org